assistance-engine/research/embeddings/datasets/codexglue/queries.jsonl
{"_id": "q_0", "text": "str->list\n Convert XML to URL List.\n From Biligrab."}
{"_id": "q_1", "text": "Downloads Sina videos by URL."}
{"_id": "q_2", "text": "Format text with color or other effects into ANSI escaped string."}
{"_id": "q_3", "text": "Print a log message to standard error."}
{"_id": "q_4", "text": "Print an error log message."}
{"_id": "q_5", "text": "Detect operating system."}
{"_id": "q_6", "text": "str->None"}
{"_id": "q_7", "text": "str->dict\n Information for CKPlayer API content."}
{"_id": "q_8", "text": "str->list of str\n Give you the real URLs."}
{"_id": "q_9", "text": "Converts a string to a valid filename."}
{"_id": "q_10", "text": "Downloads CBS videos by URL."}
{"_id": "q_11", "text": "Override the original one\n Ugly ugly dirty hack"}
{"_id": "q_12", "text": "str, str, str, bool, bool ->None\n\n Download Acfun video by vid.\n\n Call Acfun API, decide which site to use, and pass the job to its\n extractor."}
{"_id": "q_13", "text": "Scans through a string for substrings matched some patterns.\n\n Args:\n text: A string to be scanned.\n patterns: a list of regex pattern.\n\n Returns:\n a list if matched. empty if not."}
{"_id": "q_14", "text": "Parses the query string of a URL and returns the value of a parameter.\n\n Args:\n url: A URL.\n param: A string representing the name of the parameter.\n\n Returns:\n The value of the parameter."}
{"_id": "q_15", "text": "Post the content of a URL via sending a HTTP POST request.\n\n Args:\n url: A URL.\n headers: Request headers used by the client.\n decoded: Whether decode the response body using UTF-8 or the charset specified in Content-Type.\n\n Returns:\n The content as a string."}
{"_id": "q_16", "text": "Parses host name and port number from a string."}
{"_id": "q_17", "text": "str->str"}
{"_id": "q_18", "text": "JSON, int, int, int->str\n \n Get a proper title with courseid+topicID+partID."}
{"_id": "q_19", "text": "int->None\n \n Download a WHOLE course.\n Reuse the API call to save time."}
{"_id": "q_20", "text": "int, int, int->None\n \n Download ONE PART of the course."}
{"_id": "q_21", "text": "Checks if a task is either queued or running in this executor\n\n :param task_instance: TaskInstance\n :return: True if the task is known to this executor"}
{"_id": "q_22", "text": "Returns and flush the event buffer. In case dag_ids is specified\n it will only return and flush events for the given dag_ids. Otherwise\n it returns and flushes all\n\n :param dag_ids: to dag_ids to return events for, if None returns all\n :return: a dict of events"}
{"_id": "q_23", "text": "Returns a snowflake.connection object"}
{"_id": "q_24", "text": "returns aws_access_key_id, aws_secret_access_key\n from extra\n\n intended to be used by external import and export statements"}
{"_id": "q_25", "text": "Executes SQL using psycopg2 copy_expert method.\n Necessary to execute COPY command without access to a superuser.\n\n Note: if this method is called with a \"COPY FROM\" statement and\n the specified input file does not exist, it creates an empty\n file and no data is loaded, but the operation succeeds.\n So if users want to be aware when the input file does not exist,\n they have to check its existence by themselves."}
{"_id": "q_26", "text": "Uploads the file to Google cloud storage"}
{"_id": "q_27", "text": "Runs forever, monitoring the child processes of @gunicorn_master_proc and\n restarting workers occasionally.\n Each iteration of the loop traverses one edge of this state transition\n diagram, where each state (node) represents\n [ num_ready_workers_running / num_workers_running ]. We expect most time to\n be spent in [n / n]. `bs` is the setting webserver.worker_refresh_batch_size.\n The horizontal transition at ? happens after the new worker parses all the\n dags (so it could take a while!)\n V \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n [n / n] \u2500\u2500TTIN\u2500\u2500> [ [n, n+bs) / n + bs ] \u2500\u2500\u2500\u2500?\u2500\u2500\u2500> [n + bs / n + bs] \u2500\u2500TTOU\u2500\u2518\n ^ ^\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500v\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500 [ [0, n) / n ] <\u2500\u2500\u2500 start\n We change the number of workers by sending TTIN and TTOU to the gunicorn\n master process, which increases and decreases the number of child workers\n respectively. Gunicorn guarantees that on TTOU workers are terminated\n gracefully and that the oldest worker is terminated."}
{"_id": "q_28", "text": "Translate a string or list of strings.\n\n See https://cloud.google.com/translate/docs/translating-text\n\n :type values: str or list\n :param values: String or list of strings to translate.\n\n :type target_language: str\n :param target_language: The language to translate results into. This\n is required by the API and defaults to\n the target language of the current instance.\n\n :type format_: str\n :param format_: (Optional) One of ``text`` or ``html``, to specify\n if the input text is plain text or HTML.\n\n :type source_language: str or None\n :param source_language: (Optional) The language of the text to\n be translated.\n\n :type model: str or None\n :param model: (Optional) The model used to translate the text, such\n as ``'base'`` or ``'nmt'``.\n\n :rtype: str or list\n :returns: A list of dictionaries for each queried value. Each\n dictionary typically contains three keys (though not\n all will be present in all cases)\n\n * ``detectedSourceLanguage``: The detected language (as an\n ISO 639-1 language code) of the text.\n * ``translatedText``: The translation of the text into the\n target language.\n * ``input``: The corresponding input value.\n * ``model``: The model used to translate the text.\n\n If only a single value is passed, then only a single\n dictionary will be returned.\n :raises: :class:`~exceptions.ValueError` if the number of\n values and translations differ."}
{"_id": "q_29", "text": "Deletes a Cloud SQL instance.\n\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :param instance: Cloud SQL instance ID. This does not include the project ID.\n :type instance: str\n :return: None"}
{"_id": "q_30", "text": "Retrieves a database resource from a Cloud SQL instance.\n\n :param instance: Database instance ID. This does not include the project ID.\n :type instance: str\n :param database: Name of the database in the instance.\n :type database: str\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: A Cloud SQL database resource, as described in\n https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/databases#resource.\n :rtype: dict"}
{"_id": "q_31", "text": "Creates a new database inside a Cloud SQL instance.\n\n :param instance: Database instance ID. This does not include the project ID.\n :type instance: str\n :param body: The request body, as described in\n https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/databases/insert#request-body.\n :type body: dict\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_32", "text": "Updates a database resource inside a Cloud SQL instance.\n\n This method supports patch semantics.\n See https://cloud.google.com/sql/docs/mysql/admin-api/how-tos/performance#patch.\n\n :param instance: Database instance ID. This does not include the project ID.\n :type instance: str\n :param database: Name of the database to be updated in the instance.\n :type database: str\n :param body: The request body, as described in\n https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/databases/insert#request-body.\n :type body: dict\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_33", "text": "Exports data from a Cloud SQL instance to a Cloud Storage bucket as a SQL dump\n or CSV file.\n\n :param instance: Database instance ID of the Cloud SQL instance. This does not include the\n project ID.\n :type instance: str\n :param body: The request body, as described in\n https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/instances/export#request-body\n :type body: dict\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_34", "text": "Returns version of the Cloud SQL Proxy."}
{"_id": "q_35", "text": "Create connection in the Connection table, according to whether it uses\n proxy, TCP, UNIX sockets, SSL. Connection ID will be randomly generated.\n\n :param session: Session of the SQL Alchemy ORM (automatically generated with\n decorator)."}
{"_id": "q_36", "text": "Retrieves the dynamically created connection from the Connection table.\n\n :param session: Session of the SQL Alchemy ORM (automatically generated with\n decorator)."}
{"_id": "q_37", "text": "Delete the dynamically created connection from the Connection table.\n\n :param session: Session of the SQL Alchemy ORM (automatically generated with\n decorator)."}
{"_id": "q_38", "text": "Retrieve Cloud SQL Proxy runner. It is used to manage the proxy\n lifecycle per task.\n\n :return: The Cloud SQL Proxy runner.\n :rtype: CloudSqlProxyRunner"}
{"_id": "q_39", "text": "Reserve free TCP port to be used by Cloud SQL Proxy"}
{"_id": "q_40", "text": "Replaces invalid MLEngine job_id characters with '_'.\n\n This also adds a leading 'z' in case job_id starts with an invalid\n character.\n\n Args:\n job_id: A job_id str that may have invalid characters.\n\n Returns:\n A valid job_id representation."}
{"_id": "q_41", "text": "Extract error code from ftp exception"}
{"_id": "q_42", "text": "Remove any existing DAG runs for the perf test DAGs."}
{"_id": "q_43", "text": "Toggle the pause state of the DAGs in the test."}
{"_id": "q_44", "text": "Override the scheduler heartbeat to determine when the test is complete"}
{"_id": "q_45", "text": "Creates the directory specified by path, creating intermediate directories\n as necessary. If directory already exists, this is a no-op.\n\n :param path: The directory to create\n :type path: str\n :param mode: The mode to give to the directory e.g. 0o755, ignores umask\n :type mode: int"}
{"_id": "q_46", "text": "Make a naive datetime.datetime in a given time zone aware.\n\n :param value: datetime\n :param timezone: timezone\n :return: localized datetime in settings.TIMEZONE or timezone"}
{"_id": "q_47", "text": "Make an aware datetime.datetime naive in a given time zone.\n\n :param value: datetime\n :param timezone: timezone\n :return: naive datetime"}
{"_id": "q_48", "text": "Wrapper around datetime.datetime that adds settings.TIMEZONE if tzinfo not specified\n\n :return: datetime.datetime"}
{"_id": "q_49", "text": "Establish a connection to druid broker."}
{"_id": "q_50", "text": "Returns http session for use with requests\n\n :param headers: additional headers to be passed through as a dictionary\n :type headers: dict"}
{"_id": "q_51", "text": "Performs the request\n\n :param endpoint: the endpoint to be called i.e. resource/v1/query?\n :type endpoint: str\n :param data: payload to be uploaded or request parameters\n :type data: dict\n :param headers: additional headers to be passed through as a dictionary\n :type headers: dict\n :param extra_options: additional options to be used when executing the request\n i.e. {'check_response': False} to avoid checking raising exceptions on non\n 2XX or 3XX status codes\n :type extra_options: dict"}
{"_id": "q_52", "text": "Checks the status code and raise an AirflowException exception on non 2XX or 3XX\n status codes\n\n :param response: A requests response object\n :type response: requests.response"}
{"_id": "q_53", "text": "Grabs extra options like timeout and actually runs the request,\n checking for the result\n\n :param session: the session to be used to execute the request\n :type session: requests.Session\n :param prepped_request: the prepared request generated in run()\n :type prepped_request: session.prepare_request\n :param extra_options: additional options to be used when executing the request\n i.e. {'check_response': False} to avoid checking raising exceptions on non 2XX\n or 3XX status codes\n :type extra_options: dict"}
{"_id": "q_54", "text": "Contextmanager that will create and teardown a session."}
{"_id": "q_55", "text": "Parses some DatabaseError to provide a better error message"}
{"_id": "q_56", "text": "Get a pandas dataframe from a sql query."}
{"_id": "q_57", "text": "A generic way to insert a set of tuples into a table.\n\n :param table: Name of the target table\n :type table: str\n :param rows: The rows to insert into the table\n :type rows: iterable of tuples\n :param target_fields: The names of the columns to fill in the table\n :type target_fields: iterable of strings"}
{"_id": "q_58", "text": "Return a cosmos db client."}
{"_id": "q_59", "text": "Checks if a collection exists in CosmosDB."}
{"_id": "q_60", "text": "Creates a new collection in the CosmosDB database."}
{"_id": "q_61", "text": "Creates a new database in CosmosDB."}
{"_id": "q_62", "text": "Deletes an existing database in CosmosDB."}
{"_id": "q_63", "text": "Insert a list of new documents into an existing collection in the CosmosDB database."}
{"_id": "q_64", "text": "Get a list of documents from an existing collection in the CosmosDB database via SQL query."}
{"_id": "q_65", "text": "Returns the Cloud Function with the given name.\n\n :param name: Name of the function.\n :type name: str\n :return: A Cloud Functions object representing the function.\n :rtype: dict"}
{"_id": "q_66", "text": "Creates a new function in Cloud Function in the location specified in the body.\n\n :param location: The location of the function.\n :type location: str\n :param body: The body required by the Cloud Functions insert API.\n :type body: dict\n :param project_id: Optional, Google Cloud Project project_id where the function belongs.\n If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_67", "text": "Updates Cloud Functions according to the specified update mask.\n\n :param name: The name of the function.\n :type name: str\n :param body: The body required by the cloud function patch API.\n :type body: dict\n :param update_mask: The update mask - array of fields that should be patched.\n :type update_mask: [str]\n :return: None"}
{"_id": "q_68", "text": "Uploads zip file with sources.\n\n :param location: The location where the function is created.\n :type location: str\n :param zip_path: The path of the valid .zip file to upload.\n :type zip_path: str\n :param project_id: Optional, Google Cloud Project project_id where the function belongs.\n If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: The upload URL that was returned by generateUploadUrl method."}
{"_id": "q_69", "text": "Wrapper around the private _get_dep_statuses method that contains some global\n checks for all dependencies.\n\n :param ti: the task instance to get the dependency status for\n :type ti: airflow.models.TaskInstance\n :param session: database session\n :type session: sqlalchemy.orm.session.Session\n :param dep_context: the context for which this dependency should be evaluated for\n :type dep_context: DepContext"}
{"_id": "q_70", "text": "Returns whether or not this dependency is met for a given task instance. A\n dependency is considered met if all of the dependency statuses it reports are\n passing.\n\n :param ti: the task instance to see if this dependency is met for\n :type ti: airflow.models.TaskInstance\n :param session: database session\n :type session: sqlalchemy.orm.session.Session\n :param dep_context: The context this dependency is being checked under that stores\n state that can be used by this dependency.\n :type dep_context: BaseDepContext"}
{"_id": "q_71", "text": "Returns an iterable of strings that explain why this dependency wasn't met.\n\n :param ti: the task instance to see if this dependency is met for\n :type ti: airflow.models.TaskInstance\n :param session: database session\n :type session: sqlalchemy.orm.session.Session\n :param dep_context: The context this dependency is being checked under that stores\n state that can be used by this dependency.\n :type dep_context: BaseDepContext"}
{"_id": "q_72", "text": "Parses a config file for s3 credentials. Can currently\n parse boto, s3cmd.conf and AWS SDK config formats\n\n :param config_file_name: path to the config file\n :type config_file_name: str\n :param config_format: config type. One of \"boto\", \"s3cmd\" or \"aws\".\n Defaults to \"boto\"\n :type config_format: str\n :param profile: profile name in AWS type config file\n :type profile: str"}
{"_id": "q_73", "text": "Ensure all logging output has been flushed"}
{"_id": "q_74", "text": "If the path contains a folder with a .zip suffix, then\n the folder is treated as a zip archive and path to zip is returned."}
{"_id": "q_75", "text": "Traverse a directory and look for Python files.\n\n :param directory: the directory to traverse\n :type directory: unicode\n :param safe_mode: whether to use a heuristic to determine whether a file\n contains Airflow DAG definitions\n :return: a list of paths to Python files in the specified directory\n :rtype: list[unicode]"}
{"_id": "q_76", "text": "Launch DagFileProcessorManager processor and start DAG parsing loop in manager."}
{"_id": "q_77", "text": "Send termination signal to DAG parsing processor manager\n and expect it to terminate all DAG file processors."}
{"_id": "q_78", "text": "Use multiple processes to parse and generate tasks for the\n DAGs in parallel. By processing them in separate processes,\n we can get parallelism and isolation from potentially harmful\n user code."}
{"_id": "q_79", "text": "Parse DAG files repeatedly in a standalone loop."}
{"_id": "q_80", "text": "Refresh file paths from dag dir if we haven't done it for too long."}
{"_id": "q_81", "text": "Occasionally print out stats about how fast the files are getting processed"}
{"_id": "q_82", "text": "Sleeps until all the processors are done."}
{"_id": "q_83", "text": "This should be periodically called by the manager loop. This method will\n kick off new processes to process DAG definition files and read the\n results from the finished processors.\n\n :return: a list of SimpleDags that were produced by processors that\n have finished since the last time this was called\n :rtype: list[airflow.utils.dag_processing.SimpleDag]"}
{"_id": "q_84", "text": "Opens a ssh connection to the remote host.\n\n :rtype: paramiko.client.SSHClient"}
{"_id": "q_85", "text": "Gets the latest state of a long-running operation in Google Storage\n Transfer Service.\n\n :param job_name: (Required) Name of the job to be fetched\n :type job_name: str\n :param project_id: (Optional) the ID of the project that owns the Transfer\n Job. If set to None or missing, the default project_id from the GCP\n connection is used.\n :type project_id: str\n :return: Transfer Job\n :rtype: dict"}
{"_id": "q_86", "text": "Lists long-running operations in Google Storage Transfer\n Service that match the specified filter.\n\n :param filter: (Required) A request filter, as described in\n https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs/list#body.QUERY_PARAMETERS.filter\n :type filter: dict\n :return: List of Transfer Jobs\n :rtype: list[dict]"}
{"_id": "q_87", "text": "Cancels an transfer operation in Google Storage Transfer Service.\n\n :param operation_name: Name of the transfer operation.\n :type operation_name: str\n :rtype: None"}
{"_id": "q_88", "text": "Pauses an transfer operation in Google Storage Transfer Service.\n\n :param operation_name: (Required) Name of the transfer operation.\n :type operation_name: str\n :rtype: None"}
{"_id": "q_89", "text": "Waits until the job reaches the expected state.\n\n :param job: Transfer job\n See:\n https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs#TransferJob\n :type job: dict\n :param expected_statuses: State that is expected\n See:\n https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferOperations#Status\n :type expected_statuses: set[str]\n :param timeout:\n :type timeout: time in which the operation must end in seconds\n :rtype: None"}
{"_id": "q_90", "text": "Returns the number of slots open at the moment"}
{"_id": "q_91", "text": "Runs command and returns stdout"}
{"_id": "q_92", "text": "Remove an option if it exists in config from a file or\n default config. If both of config have the same option, this removes\n the option in both configs unless remove_default=False."}
{"_id": "q_93", "text": "Allocate IDs for incomplete keys.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/rest/v1/projects/allocateIds\n\n :param partial_keys: a list of partial keys.\n :type partial_keys: list\n :return: a list of full keys.\n :rtype: list"}
{"_id": "q_94", "text": "Commit a transaction, optionally creating, deleting or modifying some entities.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/rest/v1/projects/commit\n\n :param body: the body of the commit request.\n :type body: dict\n :return: the response body of the commit request.\n :rtype: dict"}
{"_id": "q_95", "text": "Lookup some entities by key.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/rest/v1/projects/lookup\n\n :param keys: the keys to lookup.\n :type keys: list\n :param read_consistency: the read consistency to use. default, strong or eventual.\n Cannot be used with a transaction.\n :type read_consistency: str\n :param transaction: the transaction to use, if any.\n :type transaction: str\n :return: the response body of the lookup request.\n :rtype: dict"}
{"_id": "q_96", "text": "Roll back a transaction.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/rest/v1/projects/rollback\n\n :param transaction: the transaction to roll back.\n :type transaction: str"}
{"_id": "q_97", "text": "Gets the latest state of a long-running operation.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/data/rest/v1/projects.operations/get\n\n :param name: the name of the operation resource.\n :type name: str\n :return: a resource operation instance.\n :rtype: dict"}
{"_id": "q_98", "text": "Deletes the long-running operation.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/data/rest/v1/projects.operations/delete\n\n :param name: the name of the operation resource.\n :type name: str\n :return: none if successful.\n :rtype: dict"}
{"_id": "q_99", "text": "Poll backup operation state until it's completed.\n\n :param name: the name of the operation resource\n :type name: str\n :param polling_interval_in_seconds: The number of seconds to wait before calling another request.\n :type polling_interval_in_seconds: int\n :return: a resource operation instance.\n :rtype: dict"}
{"_id": "q_100", "text": "Publish a message to a topic or an endpoint.\n\n :param target_arn: either a TopicArn or an EndpointArn\n :type target_arn: str\n :param message: the default message you want to send\n :param message: str"}
{"_id": "q_101", "text": "Retrieves connection to Cloud Natural Language service.\n\n :return: Cloud Natural Language service object\n :rtype: google.cloud.language_v1.LanguageServiceClient"}
{"_id": "q_102", "text": "Finds named entities in the text along with entity types,\n salience, mentions for each entity, and other properties.\n\n :param document: Input document.\n If a dict is provided, it must be of the same form as the protobuf message Document\n :type document: dict or class google.cloud.language_v1.types.Document\n :param encoding_type: The encoding type used by the API to calculate offsets.\n :type encoding_type: google.cloud.language_v1.types.EncodingType\n :param retry: A retry object used to retry requests. If None is specified, requests will not be\n retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if\n retry is specified, the timeout applies to each individual attempt.\n :type timeout: float\n :param metadata: Additional metadata that is provided to the method.\n :type metadata: sequence[tuple[str, str]]]\n :rtype: google.cloud.language_v1.types.AnalyzeEntitiesResponse"}
{"_id": "q_103", "text": "Classifies a document into categories.\n\n :param document: Input document.\n If a dict is provided, it must be of the same form as the protobuf message Document\n :type document: dict or class google.cloud.language_v1.types.Document\n :param retry: A retry object used to retry requests. If None is specified, requests will not be\n retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if\n retry is specified, the timeout applies to each individual attempt.\n :type timeout: float\n :param metadata: Additional metadata that is provided to the method.\n :type metadata: sequence[tuple[str, str]]]\n :rtype: google.cloud.language_v1.types.AnalyzeEntitiesResponse"}
{"_id": "q_104", "text": "Gets template fields for specific operator class.\n\n :param fullname: Full path to operator class.\n For example: ``airflow.contrib.operators.gcp_vision_operator.CloudVisionProductSetCreateOperator``\n :return: List of template field\n :rtype: list[str]"}
{"_id": "q_105", "text": "A role that allows you to include a list of template fields in the middle of the text. This is especially\n useful when writing guides describing how to use the operator.\n The result is a list of fields where each field is shorted in the literal block.\n\n Sample usage::\n\n :template-fields:`airflow.contrib.operators.gcp_natural_language_operator.CloudLanguageAnalyzeSentimentOperator`\n\n For further information look at:\n\n * [http://docutils.sourceforge.net/docs/howto/rst-roles.html](Creating reStructuredText Interpreted\n Text Roles)"}
{"_id": "q_106", "text": "Properly close pooled database connections"}
{"_id": "q_107", "text": "Ensures that certain subfolders of AIRFLOW_HOME are on the classpath"}
{"_id": "q_108", "text": "Gets the returned Celery result from the Airflow task\n ID provided to the sensor, and returns True if the\n celery result has been finished execution.\n\n :param context: Airflow's execution context\n :type context: dict\n :return: True if task has been executed, otherwise False\n :rtype: bool"}
{"_id": "q_109", "text": "Return true if the ticket cache contains \"conf\" information as is found\n in ticket caches of Kerberos 1.8.1 or later. This is incompatible with the\n Sun Java Krb5LoginModule in Java6, so we need to take an action to work\n around it."}
{"_id": "q_110", "text": "Transforms a SQLAlchemy model instance into a dictionary"}
{"_id": "q_111", "text": "Reduce the given list of items by splitting it into chunks\n of the given size and passing each chunk through the reducer"}
{"_id": "q_112", "text": "Given a number of tasks, builds a dependency chain.\n\n chain(task_1, task_2, task_3, task_4)\n\n is equivalent to\n\n task_1.set_downstream(task_2)\n task_2.set_downstream(task_3)\n task_3.set_downstream(task_4)"}
{"_id": "q_113", "text": "Returns a pretty ascii table from tuples\n\n If namedtuple are used, the table will have headers"}
{"_id": "q_114", "text": "Returns a Google Cloud Dataproc service object."}
{"_id": "q_115", "text": "Awaits for Google Cloud Dataproc Operation to complete."}
{"_id": "q_116", "text": "Handles the Airflow + Databricks lifecycle logic for a Databricks operator\n\n :param operator: Databricks operator being handled\n :param context: Airflow context"}
{"_id": "q_117", "text": "Run an pig script using the pig cli\n\n >>> ph = PigCliHook()\n >>> result = ph.run_cli(\"ls /;\")\n >>> (\"hdfs://\" in result)\n True"}
{"_id": "q_118", "text": "Fetch and return the state of the given celery task. The scope of this function is\n global so that it can be called by subprocesses in the pool.\n\n :param celery_task: a tuple of the Celery task key and the async Celery object used\n to fetch the task's state\n :type celery_task: tuple(str, celery.result.AsyncResult)\n :return: a tuple of the Celery task key and the Celery state of the task\n :rtype: tuple[str, str]"}
{"_id": "q_119", "text": "How many Celery tasks should each worker process send.\n\n :return: Number of tasks that should be sent per process\n :rtype: int"}
{"_id": "q_120", "text": "How many Celery tasks should be sent to each worker process.\n\n :return: Number of tasks that should be used per process\n :rtype: int"}
{"_id": "q_121", "text": "Like a Python builtin dict object, setdefault returns the current value\n for a key, and if it isn't there, stores the default value and returns it.\n\n :param key: Dict key for this Variable\n :type key: str\n :param default: Default value to set and return if the variable\n isn't already in the DB\n :type default: Mixed\n :param deserialize_json: Store this as a JSON encoded value in the DB\n and un-encode it when retrieving a value\n :return: Mixed"}
{"_id": "q_122", "text": "Launches a MLEngine job and wait for it to reach a terminal state.\n\n :param project_id: The Google Cloud project id within which MLEngine\n job will be launched.\n :type project_id: str\n\n :param job: MLEngine Job object that should be provided to the MLEngine\n API, such as: ::\n\n {\n 'jobId': 'my_job_id',\n 'trainingInput': {\n 'scaleTier': 'STANDARD_1',\n ...\n }\n }\n\n :type job: dict\n\n :param use_existing_job_fn: In case that a MLEngine job with the same\n job_id already exist, this method (if provided) will decide whether\n we should use this existing job, continue waiting for it to finish\n and returning the job object. It should accepts a MLEngine job\n object, and returns a boolean value indicating whether it is OK to\n reuse the existing job. If 'use_existing_job_fn' is not provided,\n we by default reuse the existing MLEngine job.\n :type use_existing_job_fn: function\n\n :return: The MLEngine job object if the job successfully reach a\n terminal state (which might be FAILED or CANCELLED state).\n :rtype: dict"}
{"_id": "q_123", "text": "Gets a MLEngine job based on the job name.\n\n :return: MLEngine job object if succeed.\n :rtype: dict\n\n Raises:\n googleapiclient.errors.HttpError: if HTTP error is returned from server"}
{"_id": "q_124", "text": "Create a Model. Blocks until finished."}
{"_id": "q_125", "text": "Write batch items to dynamodb table with provisioned throughout capacity."}
{"_id": "q_126", "text": "Integrate plugins to the context."}
{"_id": "q_127", "text": "Creates a new instance of the configured executor if none exists and returns it"}
{"_id": "q_128", "text": "Handles error callbacks when using Segment with segment_debug_mode set to True"}
{"_id": "q_129", "text": "Returns a mssql connection object"}
{"_id": "q_130", "text": "Delete all DB records related to the specified Dag."}
{"_id": "q_131", "text": "Returns a JSON with a task's public instance variables."}
{"_id": "q_132", "text": "Get all pools."}
{"_id": "q_133", "text": "Delete pool."}
{"_id": "q_134", "text": "Create a new container group\n\n :param resource_group: the name of the resource group\n :type resource_group: str\n :param name: the name of the container group\n :type name: str\n :param container_group: the properties of the container group\n :type container_group: azure.mgmt.containerinstance.models.ContainerGroup"}
{"_id": "q_135", "text": "Get the state and exitcode of a container group\n\n :param resource_group: the name of the resource group\n :type resource_group: str\n :param name: the name of the container group\n :type name: str\n :return: A tuple with the state, exitcode, and details.\n If the exitcode is unknown, 0 is returned.\n :rtype: tuple(state,exitcode,details)"}
{"_id": "q_136", "text": "Builds an ingest query for an HDFS TSV load.\n\n :param static_path: The path on hdfs where the data is\n :type static_path: str\n :param columns: List of all the columns that are available\n :type columns: list"}
{"_id": "q_137", "text": "Check for message on subscribed channels and write to xcom the message with key ``message``\n\n An example of message ``{'type': 'message', 'pattern': None, 'channel': b'test', 'data': b'hello'}``\n\n :param context: the context object\n :type context: dict\n :return: ``True`` if message (with type 'message') is available or ``False`` if not"}
{"_id": "q_138", "text": "Returns a set of dag runs for the given search criteria.\n\n :param dag_id: the dag_id to find dag runs for\n :type dag_id: int, list\n :param run_id: defines the run id for this dag run\n :type run_id: str\n :param execution_date: the execution date\n :type execution_date: datetime.datetime\n :param state: the state of the dag run\n :type state: airflow.utils.state.State\n :param external_trigger: whether this dag run is externally triggered\n :type external_trigger: bool\n :param no_backfills: return no backfills (True), return all (False).\n Defaults to False\n :type no_backfills: bool\n :param session: database session\n :type session: sqlalchemy.orm.session.Session"}
{"_id": "q_139", "text": "Returns the task instances for this dag run"}
{"_id": "q_140", "text": "Returns the task instance specified by task_id for this dag run\n\n :param task_id: the task id"}
{"_id": "q_141", "text": "The previous DagRun, if there is one"}
{"_id": "q_142", "text": "The previous, SCHEDULED DagRun, if there is one"}
{"_id": "q_143", "text": "Determines the overall state of the DagRun based on the state\n of its TaskInstances.\n\n :return: State"}
{"_id": "q_144", "text": "Verifies the DagRun by checking for removed tasks or tasks that are not in the\n database yet. It will set state to removed or add the task if required."}
{"_id": "q_145", "text": "We need to get the headers in addition to the response body\n in order to get the location from them.\n This function uses the jenkins_request method from the python-jenkins library\n with just the return call changed.\n\n :param jenkins_server: The server to query\n :param req: The request to execute\n :return: Dict containing the response body (key body)\n and the headers coming along (headers)"}
{"_id": "q_146", "text": "Given a context, this function provides a dictionary of values that can be used to\n externally reconstruct relations between dags, dag_runs, tasks and task_instances.\n Defaults to the abc.def.ghi format and can be changed to the ABC_DEF_GHI format if\n in_env_var_format is set to True.\n\n :param context: The context for the task_instance of interest.\n :type context: dict\n :param in_env_var_format: If returned vars should be in ABC_DEF_GHI format.\n :type in_env_var_format: bool\n :return: task_instance context as dict."}
{"_id": "q_147", "text": "Queries datadog for a specific metric, potentially with some\n function applied to it and returns the results.\n\n :param query: The datadog query to execute (see datadog docs)\n :type query: str\n :param from_seconds_ago: How many seconds ago to start querying for.\n :type from_seconds_ago: int\n :param to_seconds_ago: Up to how many seconds ago to query for.\n :type to_seconds_ago: int"}
{"_id": "q_148", "text": "Fail given zombie tasks, which are tasks that haven't\n had a heartbeat for too long, in the current DagBag.\n\n :param zombies: zombie task instances to kill.\n :type zombies: airflow.utils.dag_processing.SimpleTaskInstance\n :param session: DB session.\n :type session: sqlalchemy.orm.session.Session"}
{"_id": "q_149", "text": "Adds the DAG into the bag, recurses into sub dags.\n Throws AirflowDagCycleException if a cycle is detected in this dag or its subdags"}
{"_id": "q_150", "text": "Given a file path or a folder, this method looks for python modules,\n imports them and adds them to the dagbag collection.\n\n Note that if a ``.airflowignore`` file is found while processing\n the directory, it will behave much like a ``.gitignore``,\n ignoring files that match any of the regex patterns specified\n in the file.\n\n **Note**: The patterns in .airflowignore are treated as\n un-anchored regexes, not shell-like glob patterns."}
{"_id": "q_151", "text": "Prints a report around DagBag loading stats"}
{"_id": "q_152", "text": "Add or subtract days from a YYYY-MM-DD\n\n :param ds: anchor date in ``YYYY-MM-DD`` format to add to\n :type ds: str\n :param days: number of days to add to the ds, you can use negative values\n :type days: int\n\n >>> ds_add('2015-01-01', 5)\n '2015-01-06'\n >>> ds_add('2015-01-06', -5)\n '2015-01-01'"}
{"_id": "q_153", "text": "Takes an input string and outputs another string\n as specified in the output format\n\n :param ds: input string which contains a date\n :type ds: str\n :param input_format: input string format. E.g. %Y-%m-%d\n :type input_format: str\n :param output_format: output string format E.g. %Y-%m-%d\n :type output_format: str\n\n >>> ds_format('2015-01-01', \"%Y-%m-%d\", \"%m-%d-%y\")\n '01-01-15'\n >>> ds_format('1/5/2015', \"%m/%d/%Y\", \"%Y-%m-%d\")\n '2015-01-05'"}
{"_id": "q_154", "text": "poke matching files in a directory with self.regex\n\n :return: Bool depending on the search criteria"}
{"_id": "q_155", "text": "Clears a set of task instances, but makes sure the running ones\n get killed.\n\n :param tis: a list of task instances\n :param session: current session\n :param activate_dag_runs: flag to check for active dag run\n :param dag: DAG object"}
{"_id": "q_156", "text": "Return the try number that this task number will be when it is actually\n run.\n\n If the TI is currently running, this will match the column in the\n database, in all other cases this will be incremented"}
{"_id": "q_157", "text": "Generates the shell command required to execute this task instance.\n\n :param dag_id: DAG ID\n :type dag_id: unicode\n :param task_id: Task ID\n :type task_id: unicode\n :param execution_date: Execution date for the task\n :type execution_date: datetime\n :param mark_success: Whether to mark the task as successful\n :type mark_success: bool\n :param ignore_all_deps: Ignore all ignorable dependencies.\n Overrides the other ignore_* parameters.\n :type ignore_all_deps: bool\n :param ignore_depends_on_past: Ignore depends_on_past parameter of DAGs\n (e.g. for Backfills)\n :type ignore_depends_on_past: bool\n :param ignore_task_deps: Ignore task-specific dependencies such as depends_on_past\n and trigger rule\n :type ignore_task_deps: bool\n :param ignore_ti_state: Ignore the task instance's previous failure/success\n :type ignore_ti_state: bool\n :param local: Whether to run the task locally\n :type local: bool\n :param pickle_id: If the DAG was serialized to the DB, the ID\n associated with the pickled DAG\n :type pickle_id: unicode\n :param file_path: path to the file containing the DAG definition\n :param raw: raw mode (needs more details)\n :param job_id: job ID (needs more details)\n :param pool: the Airflow pool that the task should run in\n :type pool: unicode\n :param cfg_path: the Path to the configuration file\n :type cfg_path: basestring\n :return: shell command that can be used to run the task instance"}
{"_id": "q_158", "text": "Get the very latest state from the database. If a session is passed,\n we use it and looking up the state becomes part of the session; otherwise\n a new session is used."}
{"_id": "q_159", "text": "Forces the task instance's state to FAILED in the database."}
{"_id": "q_160", "text": "Clears all XCom data from the database for the task instance"}
{"_id": "q_161", "text": "Returns a tuple that identifies the task instance uniquely"}
{"_id": "q_162", "text": "Pull XComs that optionally meet certain criteria.\n\n The default value for `key` limits the search to XComs\n that were returned by other tasks (as opposed to those that were pushed\n manually). To remove this filter, pass key=None (or any desired value).\n\n If a single task_id string is provided, the result is the value of the\n most recent matching XCom from that task_id. If multiple task_ids are\n provided, a tuple of matching values is returned. None is returned\n whenever no matches are found.\n\n :param key: A key for the XCom. If provided, only XComs with matching\n keys will be returned. The default key is 'return_value', also\n available as a constant XCOM_RETURN_KEY. This key is automatically\n given to XComs returned by tasks (as opposed to being pushed\n manually). To remove the filter, pass key=None.\n :type key: str\n :param task_ids: Only XComs from tasks with matching ids will be\n pulled. Can pass None to remove the filter.\n :type task_ids: str or iterable of strings (representing task_ids)\n :param dag_id: If provided, only pulls XComs from this DAG.\n If None (default), the DAG of the calling task is used.\n :type dag_id: str\n :param include_prior_dates: If False, only XComs from the current\n execution_date are returned. If True, XComs from previous dates\n are returned as well.\n :type include_prior_dates: bool"}
{"_id": "q_163", "text": "Sets the log context."}
{"_id": "q_164", "text": "Retrieves connection to Google Compute Engine.\n\n :return: Google Compute Engine services object\n :rtype: dict"}
{"_id": "q_165", "text": "Sets machine type of an instance defined by project_id, zone and resource_id.\n Must be called with keyword arguments rather than positional.\n\n :param zone: Google Cloud Platform zone where the instance exists.\n :type zone: str\n :param resource_id: Name of the Compute Engine instance resource\n :type resource_id: str\n :param body: Body required by the Compute Engine setMachineType API,\n as described in\n https://cloud.google.com/compute/docs/reference/rest/v1/instances/setMachineType\n :type body: dict\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_166", "text": "Retrieves instance template by project_id and resource_id.\n Must be called with keyword arguments rather than positional.\n\n :param resource_id: Name of the instance template\n :type resource_id: str\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: Instance template representation as object according to\n https://cloud.google.com/compute/docs/reference/rest/v1/instanceTemplates\n :rtype: dict"}
{"_id": "q_167", "text": "Inserts instance template using body specified\n Must be called with keyword arguments rather than positional.\n\n :param body: Instance template representation as object according to\n https://cloud.google.com/compute/docs/reference/rest/v1/instanceTemplates\n :type body: dict\n :param request_id: Optional, unique request_id that you might add to achieve\n full idempotence (for example when client call times out repeating the request\n with the same request id will not create a new instance template again)\n It should be in UUID format as defined in RFC 4122\n :type request_id: str\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_168", "text": "Retrieves Instance Group Manager by project_id, zone and resource_id.\n Must be called with keyword arguments rather than positional.\n\n :param zone: Google Cloud Platform zone where the Instance Group Manager exists\n :type zone: str\n :param resource_id: Name of the Instance Group Manager\n :type resource_id: str\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: Instance group manager representation as object according to\n https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroupManagers\n :rtype: dict"}
{"_id": "q_169", "text": "Patches Instance Group Manager with the specified body.\n Must be called with keyword arguments rather than positional.\n\n :param zone: Google Cloud Platform zone where the Instance Group Manager exists\n :type zone: str\n :param resource_id: Name of the Instance Group Manager\n :type resource_id: str\n :param body: Instance Group Manager representation as json-merge-patch object\n according to\n https://cloud.google.com/compute/docs/reference/rest/beta/instanceTemplates/patch\n :type body: dict\n :param request_id: Optional, unique request_id that you might add to achieve\n full idempotence (for example when client call times out repeating the request\n with the same request id will not create a new instance template again).\n It should be in UUID format as defined in RFC 4122\n :type request_id: str\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_170", "text": "Waits for the named operation to complete - checks status of the async call.\n\n :param operation_name: name of the operation\n :type operation_name: str\n :param zone: optional zone of the request (might be None for global operations)\n :type zone: str\n :return: None"}
{"_id": "q_171", "text": "Check if bucket_name exists.\n\n :param bucket_name: the name of the bucket\n :type bucket_name: str"}
{"_id": "q_172", "text": "Checks that a prefix exists in a bucket\n\n :param bucket_name: the name of the bucket\n :type bucket_name: str\n :param prefix: a key prefix\n :type prefix: str\n :param delimiter: the delimiter marks key hierarchy.\n :type delimiter: str"}
{"_id": "q_173", "text": "Lists prefixes in a bucket under prefix\n\n :param bucket_name: the name of the bucket\n :type bucket_name: str\n :param prefix: a key prefix\n :type prefix: str\n :param delimiter: the delimiter marks key hierarchy.\n :type delimiter: str\n :param page_size: pagination size\n :type page_size: int\n :param max_items: maximum items to return\n :type max_items: int"}
{"_id": "q_174", "text": "Returns a boto3.s3.Object\n\n :param key: the path to the key\n :type key: str\n :param bucket_name: the name of the bucket\n :type bucket_name: str"}
{"_id": "q_175", "text": "Reads a key from S3\n\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which the file is stored\n :type bucket_name: str"}
{"_id": "q_176", "text": "Loads a local file to S3\n\n :param filename: name of the file to load.\n :type filename: str\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which to store the file\n :type bucket_name: str\n :param replace: A flag to decide whether or not to overwrite the key\n if it already exists. If replace is False and the key exists, an\n error will be raised.\n :type replace: bool\n :param encrypt: If True, the file will be encrypted on the server-side\n by S3 and will be stored in an encrypted form while at rest in S3.\n :type encrypt: bool"}
{"_id": "q_177", "text": "Loads a string to S3\n\n This is provided as a convenience to drop a string in S3. It uses the\n boto infrastructure to ship a file to s3.\n\n :param string_data: str to set as content for the key.\n :type string_data: str\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which to store the file\n :type bucket_name: str\n :param replace: A flag to decide whether or not to overwrite the key\n if it already exists\n :type replace: bool\n :param encrypt: If True, the file will be encrypted on the server-side\n by S3 and will be stored in an encrypted form while at rest in S3.\n :type encrypt: bool"}
{"_id": "q_178", "text": "Loads bytes to S3\n\n This is provided as a convenience to drop bytes in S3. It uses the\n boto infrastructure to ship a file to s3.\n\n :param bytes_data: bytes to set as content for the key.\n :type bytes_data: bytes\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which to store the file\n :type bucket_name: str\n :param replace: A flag to decide whether or not to overwrite the key\n if it already exists\n :type replace: bool\n :param encrypt: If True, the file will be encrypted on the server-side\n by S3 and will be stored in an encrypted form while at rest in S3.\n :type encrypt: bool"}
{"_id": "q_179", "text": "Loads a file object to S3\n\n :param file_obj: The file-like object to set as the content for the S3 key.\n :type file_obj: file-like object\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which to store the file\n :type bucket_name: str\n :param replace: A flag that indicates whether to overwrite the key\n if it already exists.\n :type replace: bool\n :param encrypt: If True, S3 encrypts the file on the server,\n and the file is stored in encrypted form at rest in S3.\n :type encrypt: bool"}
{"_id": "q_180", "text": "Creates a copy of an object that is already stored in S3.\n\n Note: the S3 connection used here needs to have access to both\n source and destination bucket/key.\n\n :param source_bucket_key: The key of the source object.\n\n It can be either full s3:// style url or relative path from root level.\n\n When it's specified as a full s3:// url, please omit source_bucket_name.\n :type source_bucket_key: str\n :param dest_bucket_key: The key of the object to copy to.\n\n The convention to specify `dest_bucket_key` is the same\n as `source_bucket_key`.\n :type dest_bucket_key: str\n :param source_bucket_name: Name of the S3 bucket where the source object is in.\n\n It should be omitted when `source_bucket_key` is provided as a full s3:// url.\n :type source_bucket_name: str\n :param dest_bucket_name: Name of the S3 bucket to where the object is copied.\n\n It should be omitted when `dest_bucket_key` is provided as a full s3:// url.\n :type dest_bucket_name: str\n :param source_version_id: Version ID of the source object (OPTIONAL)\n :type source_version_id: str"}
{"_id": "q_181", "text": "Queries cassandra and returns a cursor to the results."}
{"_id": "q_182", "text": "Send an email with html content using sendgrid.\n\n To use this plugin:\n 0. include sendgrid subpackage as part of your Airflow installation, e.g.,\n pip install 'apache-airflow[sendgrid]'\n 1. update [email] backend in airflow.cfg, i.e.,\n [email]\n email_backend = airflow.contrib.utils.sendgrid.send_email\n 2. configure Sendgrid specific environment variables at all Airflow instances:\n SENDGRID_MAIL_FROM={your-mail-from}\n SENDGRID_API_KEY={your-sendgrid-api-key}."}
{"_id": "q_183", "text": "Recognizes audio input\n\n :param config: information to the recognizer that specifies how to process the request.\n https://googleapis.github.io/google-cloud-python/latest/speech/gapic/v1/types.html#google.cloud.speech_v1.types.RecognitionConfig\n :type config: dict or google.cloud.speech_v1.types.RecognitionConfig\n :param audio: audio data to be recognized\n https://googleapis.github.io/google-cloud-python/latest/speech/gapic/v1/types.html#google.cloud.speech_v1.types.RecognitionAudio\n :type audio: dict or google.cloud.speech_v1.types.RecognitionAudio\n :param retry: (Optional) A retry object used to retry requests. If None is specified,\n requests will not be retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: (Optional) The amount of time, in seconds, to wait for the request to complete.\n Note that if retry is specified, the timeout applies to each individual attempt.\n :type timeout: float"}
{"_id": "q_184", "text": "Check whether a potential object is a subclass of\n the AirflowPlugin class.\n\n :param plugin_obj: potential subclass of AirflowPlugin\n :param existing_plugins: Existing list of AirflowPlugin subclasses\n :return: Whether or not the obj is a valid subclass of\n AirflowPlugin"}
{"_id": "q_185", "text": "Sets tasks instances to skipped from the same dag run.\n\n :param dag_run: the DagRun for which to set the tasks to skipped\n :param execution_date: execution_date\n :param tasks: tasks to skip (not task_ids)\n :param session: db session to use"}
{"_id": "q_186", "text": "Upload a file to Azure Data Lake.\n\n :param local_path: local path. Can be single file, directory (in which case,\n upload recursively) or glob pattern. Recursive glob patterns using `**`\n are not supported.\n :type local_path: str\n :param remote_path: Remote path to upload to; if multiple files, this is the\n directory root to write within.\n :type remote_path: str\n :param nthreads: Number of threads to use. If None, uses the number of cores.\n :type nthreads: int\n :param overwrite: Whether to forcibly overwrite existing files/directories.\n If False and the remote path is a directory, will quit regardless of whether\n any files would be overwritten. If True, only matching filenames are actually\n overwritten.\n :type overwrite: bool\n :param buffersize: int [2**22]\n Number of bytes for internal buffer. This block cannot be bigger than\n a chunk and cannot be smaller than a block.\n :type buffersize: int\n :param blocksize: int [2**22]\n Number of bytes for a block. Within each chunk, we write a smaller\n block for each API call. This block cannot be bigger than a chunk.\n :type blocksize: int"}
{"_id": "q_187", "text": "List files in Azure Data Lake Storage\n\n :param path: full path/globstring to use to list files in ADLS\n :type path: str"}
{"_id": "q_188", "text": "Uncompress gz and bz2 files"}
{"_id": "q_189", "text": "Decorates a function so that executing it also submits action_logging,\n in a CLI context. It will call action logger callbacks twice,\n once for pre-execution and once for post-execution.\n\n The action logger will be called with the keyword parameters below:\n sub_command : name of sub-command\n start_datetime : start datetime instance by utc\n end_datetime : end datetime instance by utc\n full_command : full command line arguments\n user : current user\n log : airflow.models.log.Log ORM instance\n dag_id : dag id (optional)\n task_id : task_id (optional)\n execution_date : execution date (optional)\n error : exception instance if there's an exception\n\n :param f: function instance\n :return: wrapped function"}
{"_id": "q_190", "text": "Builds metrics dict from function args\n It assumes that the function arguments are from an airflow.bin.cli module function\n and include a Namespace instance which optionally contains \"dag_id\", \"task_id\",\n and \"execution_date\".\n\n :param func_name: name of function\n :param namespace: Namespace instance from argparse\n :return: dict with metrics"}
{"_id": "q_191", "text": "Create the specified cgroup.\n\n :param path: The path of the cgroup to create.\n E.g. cpu/mygroup/mysubgroup\n :return: the Node associated with the created cgroup.\n :rtype: cgroupspy.nodes.Node"}
{"_id": "q_192", "text": "The purpose of this function is to be robust to improper connections\n settings provided by users, specifically in the host field.\n\n For example -- when users supply ``https://xx.cloud.databricks.com`` as the\n host, we must strip out the protocol to get the host.::\n\n h = DatabricksHook()\n assert h._parse_host('https://xx.cloud.databricks.com') == \\\n 'xx.cloud.databricks.com'\n\n In the case where users supply the correct ``xx.cloud.databricks.com`` as the\n host, this function is a no-op.::\n\n assert h._parse_host('xx.cloud.databricks.com') == 'xx.cloud.databricks.com'"}
{"_id": "q_193", "text": "Utility function to perform an API call with retries\n\n :param endpoint_info: Tuple of method and endpoint\n :type endpoint_info: tuple[string, string]\n :param json: Parameters for this API call.\n :type json: dict\n :return: If the API call returns an OK status code,\n this function returns the response in JSON. Otherwise,\n we throw an AirflowException.\n :rtype: dict"}
{"_id": "q_194", "text": "Sign into Salesforce, only if we are not already signed in."}
{"_id": "q_195", "text": "Make a query to Salesforce.\n\n :param query: The query to make to Salesforce.\n :type query: str\n :return: The query result.\n :rtype: dict"}
{"_id": "q_196", "text": "Get the description of an object from Salesforce.\n This description is the object's schema and\n some extra metadata that Salesforce stores for each object.\n\n :param obj: The name of the Salesforce object that we are getting a description of.\n :type obj: str\n :return: the description of the Salesforce object.\n :rtype: dict"}
{"_id": "q_197", "text": "Get all instances of the `object` from Salesforce.\n For each model, only get the fields specified in fields.\n\n All we really do underneath the hood is run:\n SELECT <fields> FROM <obj>;\n\n :param obj: The object name to get from Salesforce.\n :type obj: str\n :param fields: The fields to get from the object.\n :type fields: iterable\n :return: all instances of the object from Salesforce.\n :rtype: dict"}
{"_id": "q_198", "text": "Convert a column of a dataframe to UNIX timestamps if applicable\n\n :param column: A Series object representing a column of a dataframe.\n :type column: pd.Series\n :return: a new series that maintains the same index as the original\n :rtype: pd.Series"}
{"_id": "q_199", "text": "Write query results to file.\n\n Acceptable formats are:\n - csv:\n comma-separated-values file. This is the default format.\n - json:\n JSON array. Each element in the array is a different row.\n - ndjson:\n JSON array but each element is new-line delimited instead of comma delimited like in `json`\n\n This requires a significant amount of cleanup.\n Pandas doesn't handle output to CSV and json in a uniform way.\n This is especially painful for datetime types.\n Pandas wants to write them as strings in CSV, but as millisecond Unix timestamps in JSON.\n\n By default, this function will try to leave all values as they are represented in Salesforce.\n You can use the `coerce_to_timestamp` flag to force all datetimes to become Unix timestamps (UTC).\n This can be greatly beneficial as it will make all of your datetime fields look the same,\n and makes it easier to work with in other database environments\n\n :param query_results: the results from a SQL query\n :type query_results: list of dict\n :param filename: the name of the file where the data should be dumped to\n :type filename: str\n :param fmt: the format you want the output in. Default: 'csv'\n :type fmt: str\n :param coerce_to_timestamp: True if you want all datetime fields to be converted into Unix timestamps.\n False if you want them to be left in the same format as they were in Salesforce.\n Leaving the value as False will result in datetimes being strings. Default: False\n :type coerce_to_timestamp: bool\n :param record_time_added: True if you want to add a Unix timestamp field\n to the resulting data that marks when the data was fetched from Salesforce. Default: False\n :type record_time_added: bool\n :return: the dataframe that gets written to the file.\n :rtype: pd.Dataframe"}
{"_id": "q_200", "text": "Fetches a mongo collection object for querying.\n\n Uses connection schema as DB unless specified."}
{"_id": "q_201", "text": "Retrieves mail's attachments in the mail folder by its name.\n\n :param name: The name of the attachment that will be downloaded.\n :type name: str\n :param mail_folder: The mail folder where to look at.\n :type mail_folder: str\n :param check_regex: Checks the name for a regular expression.\n :type check_regex: bool\n :param latest_only: If set to True it will only retrieve\n the first matched attachment.\n :type latest_only: bool\n :param not_found_mode: Specify what should happen if no attachment has been found.\n Supported values are 'raise', 'warn' and 'ignore'.\n If it is set to 'raise' it will raise an exception,\n if set to 'warn' it will only print a warning and\n if set to 'ignore' it won't notify you at all.\n :type not_found_mode: str\n :returns: a list of tuple each containing the attachment filename and its payload.\n :rtype: a list of tuple"}
{"_id": "q_202", "text": "Downloads mail's attachments in the mail folder by its name to the local directory.\n\n :param name: The name of the attachment that will be downloaded.\n :type name: str\n :param local_output_directory: The output directory on the local machine\n where the files will be downloaded to.\n :type local_output_directory: str\n :param mail_folder: The mail folder where to look at.\n :type mail_folder: str\n :param check_regex: Checks the name for a regular expression.\n :type check_regex: bool\n :param latest_only: If set to True it will only download\n the first matched attachment.\n :type latest_only: bool\n :param not_found_mode: Specify what should happen if no attachment has been found.\n Supported values are 'raise', 'warn' and 'ignore'.\n If it is set to 'raise' it will raise an exception,\n if set to 'warn' it will only print a warning and\n if set to 'ignore' it won't notify you at all.\n :type not_found_mode: str"}
{"_id": "q_203", "text": "Gets all attachments by name for the mail.\n\n :param name: The name of the attachment to look for.\n :type name: str\n :param check_regex: Checks the name for a regular expression.\n :type check_regex: bool\n :param find_first: If set to True it will only find the first match and then quit.\n :type find_first: bool\n :returns: a list of tuples each containing name and payload\n where the attachments name matches the given name.\n :rtype: list of tuple"}
{"_id": "q_204", "text": "Write batch records to Kinesis Firehose"}
{"_id": "q_205", "text": "Determines whether a task is ready to be rescheduled. Only tasks in\n NONE state with at least one row in task_reschedule table are\n handled by this dependency class, otherwise this dependency is\n considered passed. This dependency fails if the latest reschedule\n request's reschedule date is still in the future."}
{"_id": "q_206", "text": "Send an email with html content\n\n >>> send_email('test@example.com', 'foo', '<b>Foo</b> bar', ['/dev/null'], dryrun=True)"}
{"_id": "q_207", "text": "Check if a blob exists on Azure Blob Storage.\n\n :param container_name: Name of the container.\n :type container_name: str\n :param blob_name: Name of the blob.\n :type blob_name: str\n :param kwargs: Optional keyword arguments that\n `BlockBlobService.exists()` takes.\n :type kwargs: object\n :return: True if the blob exists, False otherwise.\n :rtype: bool"}
{"_id": "q_208", "text": "Check if a prefix exists on Azure Blob storage.\n\n :param container_name: Name of the container.\n :type container_name: str\n :param prefix: Prefix of the blob.\n :type prefix: str\n :param kwargs: Optional keyword arguments that\n `BlockBlobService.list_blobs()` takes.\n :type kwargs: object\n :return: True if blobs matching the prefix exist, False otherwise.\n :rtype: bool"}
{"_id": "q_209", "text": "Delete a file from Azure Blob Storage.\n\n :param container_name: Name of the container.\n :type container_name: str\n :param blob_name: Name of the blob.\n :type blob_name: str\n :param is_prefix: If blob_name is a prefix, delete all matching files\n :type is_prefix: bool\n :param ignore_if_missing: if True, then return success even if the\n blob does not exist.\n :type ignore_if_missing: bool\n :param kwargs: Optional keyword arguments that\n `BlockBlobService.create_blob_from_path()` takes.\n :type kwargs: object"}
{"_id": "q_210", "text": "Transfers the remote file to a local location.\n\n If local_full_path_or_buffer is a string path, the file will be put\n at that location; if it is a file-like buffer, the file will\n be written to the buffer but not closed.\n\n :param remote_full_path: full path to the remote file\n :type remote_full_path: str\n :param local_full_path_or_buffer: full path to the local file or a\n file-like buffer\n :type local_full_path_or_buffer: str or file-like buffer\n :param callback: callback which is called each time a block of data\n is read. if you do not use a callback, these blocks will be written\n to the file or buffer passed in. if you do pass in a callback, note\n that writing to a file or buffer will need to be handled inside the\n callback.\n [default: output_handle.write()]\n :type callback: callable\n\n :Example::\n\n hook = FTPHook(ftp_conn_id='my_conn')\n\n remote_path = '/path/to/remote/file'\n local_path = '/path/to/local/file'\n\n # with a custom callback (in this case displaying progress on each read)\n def print_progress(percent_progress):\n self.log.info('Percent Downloaded: %s%%' % percent_progress)\n\n total_downloaded = 0\n total_file_size = hook.get_size(remote_path)\n output_handle = open(local_path, 'wb')\n def write_to_file_with_progress(data):\n total_downloaded += len(data)\n output_handle.write(data)\n percent_progress = (total_downloaded / total_file_size) * 100\n print_progress(percent_progress)\n hook.retrieve_file(remote_path, None, callback=write_to_file_with_progress)\n\n # without a custom callback data is written to the local_path\n hook.retrieve_file(remote_path, local_path)"}
{"_id": "q_211", "text": "Transfers a local file to the remote location.\n\n If local_full_path_or_buffer is a string path, the file will be read\n from that location; if it is a file-like buffer, the file will\n be read from the buffer but not closed.\n\n :param remote_full_path: full path to the remote file\n :type remote_full_path: str\n :param local_full_path_or_buffer: full path to the local file or a\n file-like buffer\n :type local_full_path_or_buffer: str or file-like buffer"}
{"_id": "q_212", "text": "Returns a datetime object representing the last time the file was modified\n\n :param path: remote file path\n :type path: string"}
{"_id": "q_213", "text": "Call the DiscordWebhookHook to post message"}
{"_id": "q_214", "text": "Return the FileService object."}
{"_id": "q_215", "text": "Check if a directory exists on Azure File Share.\n\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.exists()` takes.\n :type kwargs: object\n :return: True if the file exists, False otherwise.\n :rtype: bool"}
{"_id": "q_216", "text": "Check if a file exists on Azure File Share.\n\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param file_name: Name of the file.\n :type file_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.exists()` takes.\n :type kwargs: object\n :return: True if the file exists, False otherwise.\n :rtype: bool"}
{"_id": "q_217", "text": "Return the list of directories and files stored on a Azure File Share.\n\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.list_directories_and_files()` takes.\n :type kwargs: object\n :return: A list of files and directories\n :rtype: list"}
{"_id": "q_218", "text": "Create a new directory on a Azure File Share.\n\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.create_directory()` takes.\n :type kwargs: object\n :return: A list of files and directories\n :rtype: list"}
{"_id": "q_219", "text": "Upload a file to Azure File Share.\n\n :param file_path: Path to the file to load.\n :type file_path: str\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param file_name: Name of the file.\n :type file_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.create_file_from_path()` takes.\n :type kwargs: object"}
{"_id": "q_220", "text": "Upload a string to Azure File Share.\n\n :param string_data: String to load.\n :type string_data: str\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param file_name: Name of the file.\n :type file_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.create_file_from_text()` takes.\n :type kwargs: object"}
{"_id": "q_221", "text": "Upload a stream to Azure File Share.\n\n :param stream: Opened file/stream to upload as the file content.\n :type stream: file-like\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param file_name: Name of the file.\n :type file_name: str\n :param count: Size of the stream in bytes\n :type count: int\n :param kwargs: Optional keyword arguments that\n `FileService.create_file_from_stream()` takes.\n :type kwargs: object"}
{"_id": "q_222", "text": "Returns a Google Cloud Storage service object."}
{"_id": "q_223", "text": "Copies an object from a bucket to another, with renaming if requested.\n\n destination_bucket or destination_object can be omitted, in which case\n source bucket/object is used, but not both.\n\n :param source_bucket: The bucket of the object to copy from.\n :type source_bucket: str\n :param source_object: The object to copy.\n :type source_object: str\n :param destination_bucket: The destination of the object to be copied to.\n Can be omitted; then the same bucket is used.\n :type destination_bucket: str\n :param destination_object: The (renamed) path of the object if given.\n Can be omitted; then the same name is used.\n :type destination_object: str"}
{"_id": "q_224", "text": "Get a file from Google Cloud Storage.\n\n :param bucket_name: The bucket to fetch from.\n :type bucket_name: str\n :param object_name: The object to fetch.\n :type object_name: str\n :param filename: If set, a local file path where the file should be written to.\n :type filename: str"}
{"_id": "q_225", "text": "Uploads a local file to Google Cloud Storage.\n\n :param bucket_name: The bucket to upload to.\n :type bucket_name: str\n :param object_name: The object name to set when uploading the local file.\n :type object_name: str\n :param filename: The local file path to the file to be uploaded.\n :type filename: str\n :param mime_type: The MIME type to set when uploading the file.\n :type mime_type: str\n :param gzip: Option to compress file for upload\n :type gzip: bool"}
{"_id": "q_226", "text": "Deletes an object from the bucket.\n\n :param bucket_name: name of the bucket, where the object resides\n :type bucket_name: str\n :param object_name: name of the object to delete\n :type object_name: str"}
{"_id": "q_227", "text": "List all objects from the bucket with the given string prefix in the name\n\n :param bucket_name: bucket name\n :type bucket_name: str\n :param versions: if true, list all versions of the objects\n :type versions: bool\n :param max_results: max count of items to return in a single page of responses\n :type max_results: int\n :param prefix: prefix string which filters objects whose name begin with\n this prefix\n :type prefix: str\n :param delimiter: filters objects based on the delimiter (e.g. '.csv')\n :type delimiter: str\n :return: a stream of object names matching the filtering criteria"}
{"_id": "q_228", "text": "Gets the size of a file in Google Cloud Storage.\n\n :param bucket_name: The Google cloud storage bucket where the blob_name is.\n :type bucket_name: str\n :param object_name: The name of the object to check in the Google\n cloud storage bucket_name.\n :type object_name: str"}
{"_id": "q_229", "text": "Gets the MD5 hash of an object in Google Cloud Storage.\n\n :param bucket_name: The Google cloud storage bucket where the blob_name is.\n :type bucket_name: str\n :param object_name: The name of the object to check in the Google cloud\n storage bucket_name.\n :type object_name: str"}
{"_id": "q_230", "text": "Creates a new bucket. Google Cloud Storage uses a flat namespace, so\n you can't create a bucket with a name that is already in use.\n\n .. seealso::\n For more information, see Bucket Naming Guidelines:\n https://cloud.google.com/storage/docs/bucketnaming.html#requirements\n\n :param bucket_name: The name of the bucket.\n :type bucket_name: str\n :param resource: An optional dict with parameters for creating the bucket.\n For information on available parameters, see Cloud Storage API doc:\n https://cloud.google.com/storage/docs/json_api/v1/buckets/insert\n :type resource: dict\n :param storage_class: This defines how objects in the bucket are stored\n and determines the SLA and the cost of storage. Values include\n\n - ``MULTI_REGIONAL``\n - ``REGIONAL``\n - ``STANDARD``\n - ``NEARLINE``\n - ``COLDLINE``.\n\n If this value is not specified when the bucket is\n created, it will default to STANDARD.\n :type storage_class: str\n :param location: The location of the bucket.\n Object data for objects in the bucket resides in physical storage\n within this region. Defaults to US.\n\n .. seealso::\n https://developers.google.com/storage/docs/bucket-locations\n\n :type location: str\n :param project_id: The ID of the GCP Project.\n :type project_id: str\n :param labels: User-provided labels, in key/value pairs.\n :type labels: dict\n :return: If successful, it returns the ``id`` of the bucket."}
{"_id": "q_231", "text": "Returns true if training job's secondary status message has changed.\n\n :param current_job_description: Current job description, returned from DescribeTrainingJob call.\n :type current_job_description: dict\n :param prev_job_description: Previous job description, returned from DescribeTrainingJob call.\n :type prev_job_description: dict\n\n :return: Whether the secondary status message of a training job changed or not."}
{"_id": "q_232", "text": "Tar the local file or directory and upload to s3\n\n :param path: local file or directory\n :type path: str\n :param key: s3 key\n :type key: str\n :param bucket: s3 bucket\n :type bucket: str\n :return: None"}
{"_id": "q_233", "text": "Extract the S3 operations from the configuration and execute them.\n\n :param config: config of SageMaker operation\n :type config: dict\n :rtype: dict"}
{"_id": "q_234", "text": "Check if an S3 URL exists\n\n :param s3url: S3 url\n :type s3url: str\n :rtype: bool"}
{"_id": "q_235", "text": "Establish an AWS connection for retrieving logs during training\n\n :rtype: CloudWatchLogs.Client"}
{"_id": "q_236", "text": "Return the training job info associated with job_name and print CloudWatch logs"}
{"_id": "q_237", "text": "Check status of a SageMaker job\n\n :param job_name: name of the job to check status\n :type job_name: str\n :param key: the key of the response dict\n that points to the state\n :type key: str\n :param describe_function: the function used to retrieve the status\n :type describe_function: python callable\n :param args: the arguments for the function\n :param check_interval: the time interval in seconds which the operator\n will check the status of any SageMaker job\n :type check_interval: int\n :param max_ingestion_time: the maximum ingestion time in seconds. Any\n SageMaker jobs that run longer than this will fail. Setting this to\n None implies no timeout for any SageMaker job.\n :type max_ingestion_time: int\n :param non_terminal_states: the set of nonterminal states\n :type non_terminal_states: set\n :return: response of describe call after job is done"}
{"_id": "q_238", "text": "Execute the python dataflow job."}
{"_id": "q_239", "text": "Run migrations in 'offline' mode.\n\n This configures the context with just a URL\n and not an Engine, though an Engine is acceptable\n here as well. By skipping the Engine creation\n we don't even need a DBAPI to be available.\n\n Calls to context.execute() here emit the given string to the\n script output."}
{"_id": "q_240", "text": "Deletes the specified Cloud Bigtable instance.\n Raises google.api_core.exceptions.NotFound if the Cloud Bigtable instance does\n not exist.\n\n :param project_id: Optional, Google Cloud Platform project ID where the\n BigTable exists. If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :param instance_id: The ID of the Cloud Bigtable instance.\n :type instance_id: str"}
{"_id": "q_241", "text": "Updates number of nodes in the specified Cloud Bigtable cluster.\n Raises google.api_core.exceptions.NotFound if the cluster does not exist.\n\n :param instance: The Cloud Bigtable instance that owns the cluster.\n :type instance: Instance\n :param cluster_id: The ID of the cluster.\n :type cluster_id: str\n :param nodes: The desired number of nodes.\n :type nodes: int"}
{"_id": "q_242", "text": "Loads a pandas DataFrame into hive.\n\n Hive data types will be inferred if not passed but column names will\n not be sanitized.\n\n :param df: DataFrame to load into a Hive table\n :type df: pandas.DataFrame\n :param table: target Hive table, use dot notation to target a\n specific database\n :type table: str\n :param field_dict: mapping from column name to hive data type.\n Note that it must be OrderedDict so as to keep columns' order.\n :type field_dict: collections.OrderedDict\n :param delimiter: field delimiter in the file\n :type delimiter: str\n :param encoding: str encoding to use when writing DataFrame to file\n :type encoding: str\n :param pandas_kwargs: passed to DataFrame.to_csv\n :type pandas_kwargs: dict\n :param kwargs: passed to self.load_file"}
{"_id": "q_243", "text": "Loads a local file into Hive\n\n Note that the table generated in Hive uses ``STORED AS textfile``\n which isn't the most efficient serialization format. If a\n large amount of data is loaded and/or if the tables gets\n queried considerably, you may want to use this operator only to\n stage the data into a temporary table before loading it into its\n final destination using a ``HiveOperator``.\n\n :param filepath: local filepath of the file to load\n :type filepath: str\n :param table: target Hive table, use dot notation to target a\n specific database\n :type table: str\n :param delimiter: field delimiter in the file\n :type delimiter: str\n :param field_dict: A dictionary of the fields name in the file\n as keys and their Hive types as values.\n Note that it must be OrderedDict so as to keep columns' order.\n :type field_dict: collections.OrderedDict\n :param create: whether to create the table if it doesn't exist\n :type create: bool\n :param overwrite: whether to overwrite the data in table or partition\n :type overwrite: bool\n :param partition: target partition as a dict of partition columns\n and values\n :type partition: dict\n :param recreate: whether to drop and recreate the table at every\n execution\n :type recreate: bool\n :param tblproperties: TBLPROPERTIES of the hive table being created\n :type tblproperties: dict"}
{"_id": "q_244", "text": "Returns a Hive thrift client."}
{"_id": "q_245", "text": "Checks whether a partition with a given name exists\n\n :param schema: Name of hive schema (database) @table belongs to\n :type schema: str\n :param table: Name of hive table @partition belongs to\n :type table: str\n :param partition: Name of the partitions to check for (e.g. `a=b/c=d`)\n :type partition: str\n :rtype: bool\n\n >>> hh = HiveMetastoreHook()\n >>> t = 'static_babynames_partitioned'\n >>> hh.check_for_named_partition('airflow', t, \"ds=2015-01-01\")\n True\n >>> hh.check_for_named_partition('airflow', t, \"ds=xxx\")\n False"}
{"_id": "q_246", "text": "Check if table exists\n\n >>> hh = HiveMetastoreHook()\n >>> hh.table_exists(db='airflow', table_name='static_babynames')\n True\n >>> hh.table_exists(db='airflow', table_name='does_not_exist')\n False"}
{"_id": "q_247", "text": "Returns a Hive connection object."}
{"_id": "q_248", "text": "Get a set of records from a Hive query.\n\n :param hql: hql to be executed.\n :type hql: str or list\n :param schema: target schema, default to 'default'.\n :type schema: str\n :param hive_conf: hive_conf to execute alone with the hql.\n :type hive_conf: dict\n :return: result of hive execution\n :rtype: list\n\n >>> hh = HiveServer2Hook()\n >>> sql = \"SELECT * FROM airflow.static_babynames LIMIT 100\"\n >>> len(hh.get_records(sql))\n 100"}
{"_id": "q_249", "text": "Retrieves connection to Cloud Vision.\n\n :return: Google Cloud Vision client object.\n :rtype: google.cloud.vision_v1.ProductSearchClient"}
{"_id": "q_250", "text": "Send Dingding message"}
{"_id": "q_251", "text": "Helper method that binds parameters to a SQL query."}
{"_id": "q_252", "text": "Helper method that escapes parameters to a SQL query."}
{"_id": "q_253", "text": "Helper method that casts a BigQuery row to the appropriate data types.\n This is useful because BigQuery returns all fields as strings."}
{"_id": "q_254", "text": "Checks the expected type and raises an\n error if the type is not correct"}
{"_id": "q_255", "text": "Returns a BigQuery PEP 249 connection object."}
{"_id": "q_256", "text": "Returns a BigQuery service object."}
{"_id": "q_257", "text": "Checks for the existence of a table in Google BigQuery.\n\n :param project_id: The Google cloud project in which to look for the\n table. The connection supplied to the hook must provide access to\n the specified project.\n :type project_id: str\n :param dataset_id: The name of the dataset in which to look for the\n table.\n :type dataset_id: str\n :param table_id: The name of the table to check the existence of.\n :type table_id: str"}
{"_id": "q_258", "text": "Creates a new, empty table in the dataset.\n To create a view, which is defined by a SQL query, parse a dictionary to 'view' kwarg\n\n :param project_id: The project to create the table into.\n :type project_id: str\n :param dataset_id: The dataset to create the table into.\n :type dataset_id: str\n :param table_id: The Name of the table to be created.\n :type table_id: str\n :param schema_fields: If set, the schema field list as defined here:\n https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.schema\n :type schema_fields: list\n :param labels: a dictionary containing labels for the table, passed to BigQuery\n :type labels: dict\n\n **Example**: ::\n\n schema_fields=[{\"name\": \"emp_name\", \"type\": \"STRING\", \"mode\": \"REQUIRED\"},\n {\"name\": \"salary\", \"type\": \"INTEGER\", \"mode\": \"NULLABLE\"}]\n\n :param time_partitioning: configure optional time partitioning fields i.e.\n partition by field, type and expiration as per API specifications.\n\n .. seealso::\n https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#timePartitioning\n :type time_partitioning: dict\n :param cluster_fields: [Optional] The fields used for clustering.\n Must be specified with time_partitioning, data in the table will be first\n partitioned and subsequently clustered.\n https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#clustering.fields\n :type cluster_fields: list\n :param view: [Optional] A dictionary containing definition for the view.\n If set, it will create a view instead of a table:\n https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#view\n :type view: dict\n\n **Example**: ::\n\n view = {\n \"query\": \"SELECT * FROM `test-project-id.test_dataset_id.test_table_prefix*` LIMIT 1000\",\n \"useLegacySql\": False\n }\n\n :return: None"}
{"_id": "q_259", "text": "Grant authorized view access of a dataset to a view table.\n If this view has already been granted access to the dataset, do nothing.\n This method is not atomic. Running it may clobber a simultaneous update.\n\n :param source_dataset: the source dataset\n :type source_dataset: str\n :param view_dataset: the dataset that the view is in\n :type view_dataset: str\n :param view_table: the table of the view\n :type view_table: str\n :param source_project: the project of the source dataset. If None,\n self.project_id will be used.\n :type source_project: str\n :param view_project: the project that the view is in. If None,\n self.project_id will be used.\n :type view_project: str\n :return: the datasets resource of the source dataset."}
{"_id": "q_260", "text": "Method returns dataset_resource if dataset exist\n and raised 404 error if dataset does not exist\n\n :param dataset_id: The BigQuery Dataset ID\n :type dataset_id: str\n :param project_id: The GCP Project ID\n :type project_id: str\n :return: dataset_resource\n\n .. seealso::\n For more information, see Dataset Resource content:\n https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#resource"}
{"_id": "q_261", "text": "Executes a BigQuery query, and returns the job ID.\n\n :param operation: The query to execute.\n :type operation: str\n :param parameters: Parameters to substitute into the query.\n :type parameters: dict"}
{"_id": "q_262", "text": "Queries Postgres and returns a cursor to the results."}
{"_id": "q_263", "text": "Create all the intermediate directories in a remote host\n\n :param sftp_client: A Paramiko SFTP client.\n :param remote_directory: Absolute Path of the directory containing the file\n :return:"}
{"_id": "q_264", "text": "Send message to the queue\n\n :param queue_url: queue url\n :type queue_url: str\n :param message_body: the contents of the message\n :type message_body: str\n :param delay_seconds: seconds to delay the message\n :type delay_seconds: int\n :param message_attributes: additional attributes for the message (default: None)\n For details of the attributes parameter see :py:meth:`botocore.client.SQS.send_message`\n :type message_attributes: dict\n\n :return: dict with the information about the message sent\n For details of the returned value see :py:meth:`botocore.client.SQS.send_message`\n :rtype: dict"}
{"_id": "q_265", "text": "Run the task command.\n\n :param run_with: list of tokens to run the task command with e.g. ``['bash', '-c']``\n :type run_with: list\n :param join_args: whether to concatenate the list of command tokens e.g. ``['airflow', 'run']`` vs\n ``['airflow run']``\n :param join_args: bool\n :return: the process that was run\n :rtype: subprocess.Popen"}
{"_id": "q_266", "text": "Parse options and process commands"}
{"_id": "q_267", "text": "generate HTML header content"}
{"_id": "q_268", "text": "generate HTML div"}
{"_id": "q_269", "text": "Create X-axis"}
{"_id": "q_270", "text": "Decorator to make a view compressed"}
{"_id": "q_271", "text": "Creates a dag run from this dag including the tasks associated with this dag.\n Returns the dag run.\n\n :param run_id: defines the run id for this dag run\n :type run_id: str\n :param execution_date: the execution date of this dag run\n :type execution_date: datetime.datetime\n :param state: the state of the dag run\n :type state: airflow.utils.state.State\n :param start_date: the date this dag run should be evaluated\n :type start_date: datetime.datetime\n :param external_trigger: whether this dag run is externally triggered\n :type external_trigger: bool\n :param session: database session\n :type session: sqlalchemy.orm.session.Session"}
{"_id": "q_272", "text": "Publish the message to SQS queue\n\n :param context: the context object\n :type context: dict\n :return: dict with information about the message sent\n For details of the returned dict see :py:meth:`botocore.client.SQS.send_message`\n :rtype: dict"}
{"_id": "q_273", "text": "returns a json response from a json serializable python object"}
{"_id": "q_274", "text": "Opens the given file. If the path contains a folder with a .zip suffix, then\n the folder is treated as a zip archive, opening the file inside the archive.\n\n :return: a file object, as in `open`, or as in `ZipFile.open`."}
{"_id": "q_275", "text": "Get Opsgenie api_key for creating alert"}
{"_id": "q_276", "text": "Overwrite HttpHook get_conn because this hook just needs base_url\n and headers, and does not need generic params\n\n :param headers: additional headers to be passed through as a dictionary\n :type headers: dict"}
{"_id": "q_277", "text": "Execute the Opsgenie Alert call\n\n :param payload: Opsgenie API Create Alert payload values\n See https://docs.opsgenie.com/docs/alert-api#section-create-alert\n :type payload: dict"}
{"_id": "q_278", "text": "Construct the Opsgenie JSON payload. All relevant parameters are combined here\n to a valid Opsgenie JSON payload.\n\n :return: Opsgenie payload (dict) to send"}
{"_id": "q_279", "text": "Call the OpsgenieAlertHook to post message"}
{"_id": "q_280", "text": "Fetch the status of submitted athena query. Returns None or one of valid query states.\n\n :param query_execution_id: Id of submitted athena query\n :type query_execution_id: str\n :return: str"}
{"_id": "q_281", "text": "Call Zendesk API and return results\n\n :param path: The Zendesk API to call\n :param query: Query parameters\n :param get_all_pages: Accumulate results over all pages before\n returning. Due to strict rate limiting, this can often timeout.\n Waits for recommended period between tries after a timeout.\n :param side_loading: Retrieve related records as part of a single\n request. In order to enable side-loading, add an 'include'\n query parameter containing a comma-separated list of resources\n to load. For more information on side-loading see\n https://developer.zendesk.com/rest_api/docs/core/side_loading"}
{"_id": "q_282", "text": "Retrieves the partition values for a table.\n\n :param database_name: The name of the catalog database where the partitions reside.\n :type database_name: str\n :param table_name: The name of the partitions' table.\n :type table_name: str\n :param expression: An expression filtering the partitions to be returned.\n Please see official AWS documentation for further information.\n https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-catalog-partitions.html#aws-glue-api-catalog-partitions-GetPartitions\n :type expression: str\n :param page_size: pagination size\n :type page_size: int\n :param max_items: maximum items to return\n :type max_items: int\n :return: set of partition values where each value is a tuple since\n a partition may be composed of multiple columns. For example:\n ``{('2018-01-01','1'), ('2018-01-01','2')}``"}
{"_id": "q_283", "text": "Get the information of the table\n\n :param database_name: Name of hive database (schema) @table belongs to\n :type database_name: str\n :param table_name: Name of hive table\n :type table_name: str\n :rtype: dict\n\n >>> hook = AwsGlueCatalogHook()\n >>> r = hook.get_table('db', 'table_foo')\n >>> r['Name'] == 'table_foo'\n True"}
{"_id": "q_284", "text": "Get the physical location of the table\n\n :param database_name: Name of hive database (schema) @table belongs to\n :type database_name: str\n :param table_name: Name of hive table\n :type table_name: str\n :return: str"}
{"_id": "q_285", "text": "Return status of a cluster\n\n :param cluster_identifier: unique identifier of a cluster\n :type cluster_identifier: str"}
{"_id": "q_286", "text": "Delete a cluster and optionally create a snapshot\n\n :param cluster_identifier: unique identifier of a cluster\n :type cluster_identifier: str\n :param skip_final_cluster_snapshot: determines cluster snapshot creation\n :type skip_final_cluster_snapshot: bool\n :param final_cluster_snapshot_identifier: name of final cluster snapshot\n :type final_cluster_snapshot_identifier: str"}
{"_id": "q_287", "text": "Restores a cluster from its snapshot\n\n :param cluster_identifier: unique identifier of a cluster\n :type cluster_identifier: str\n :param snapshot_identifier: unique identifier for a snapshot of a cluster\n :type snapshot_identifier: str"}
{"_id": "q_288", "text": "SlackAPIOperator calls will not fail even if the call is unsuccessful.\n It should not prevent a DAG from completing successfully"}
{"_id": "q_289", "text": "Will test the filepath result and test if its size is at least self.filesize\n\n :param result: a list of dicts returned by Snakebite ls\n :param size: the file size in MB a file should be at least to trigger True\n :return: (bool) depending on the matching criteria"}
{"_id": "q_290", "text": "Will filter the result, if instructed to do so, to remove matching criteria\n\n :param result: list of dicts returned by Snakebite ls\n :type result: list[dict]\n :param ignored_ext: list of ignored extensions\n :type ignored_ext: list\n :param ignore_copying: whether to ignore files that are being copied\n :type ignore_copying: bool\n :return: list of dicts which were not removed\n :rtype: list[dict]"}
{"_id": "q_291", "text": "Create a pool with the given parameters."}
{"_id": "q_292", "text": "Delete pool by a given name."}
{"_id": "q_293", "text": "Given an operation, continuously fetches the status from Google Cloud until either\n completion or an error occurring\n\n :param operation: The Operation to wait for\n :type operation: google.cloud.container_v1.gapic.enums.Operation\n :param project_id: Google Cloud Platform project ID\n :type project_id: str\n :return: A new, updated operation fetched from Google Cloud"}
{"_id": "q_294", "text": "Creates a cluster, consisting of the specified number and type of Google Compute\n Engine instances.\n\n :param cluster: A Cluster protobuf or dict. If dict is provided, it must\n be of the same form as the protobuf message\n :class:`google.cloud.container_v1.types.Cluster`\n :type cluster: dict or google.cloud.container_v1.types.Cluster\n :param project_id: Google Cloud Platform project ID\n :type project_id: str\n :param retry: A retry object (``google.api_core.retry.Retry``) used to\n retry requests.\n If None is specified, requests will not be retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: The amount of time, in seconds, to wait for the request to\n complete. Note that if retry is specified, the timeout applies to each\n individual attempt.\n :type timeout: float\n :return: The full url to the new, or existing, cluster\n :raises:\n ParseError: On JSON parsing problems when trying to convert dict\n AirflowException: cluster is not dict type nor Cluster proto type"}
{"_id": "q_295", "text": "Gets details of specified cluster\n\n :param name: The name of the cluster to retrieve\n :type name: str\n :param project_id: Google Cloud Platform project ID\n :type project_id: str\n :param retry: A retry object used to retry requests. If None is specified,\n requests will not be retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: The amount of time, in seconds, to wait for the request to\n complete. Note that if retry is specified, the timeout applies to each\n individual attempt.\n :type timeout: float\n :return: google.cloud.container_v1.types.Cluster"}
{"_id": "q_296", "text": "Construct the Discord JSON payload. All relevant parameters are combined here\n to a valid Discord JSON payload.\n\n :return: Discord payload (str) to send"}
{"_id": "q_297", "text": "Encrypts a plaintext message using Google Cloud KMS.\n\n :param key_name: The Resource Name for the key (or key version)\n to be used for encyption. Of the form\n ``projects/*/locations/*/keyRings/*/cryptoKeys/**``\n :type key_name: str\n :param plaintext: The message to be encrypted.\n :type plaintext: bytes\n :param authenticated_data: Optional additional authenticated data that\n must also be provided to decrypt the message.\n :type authenticated_data: bytes\n :return: The base 64 encoded ciphertext of the original message.\n :rtype: str"}
{"_id": "q_298", "text": "Imports table from remote location to target dir. Arguments are\n copies of direct sqoop command line arguments\n\n :param table: Table to read\n :param target_dir: HDFS destination dir\n :param append: Append data to an existing dataset in HDFS\n :param file_type: \"avro\", \"sequence\", \"text\" or \"parquet\".\n Imports data to into the specified format. Defaults to text.\n :param columns: <col,col,col\u2026> Columns to import from table\n :param split_by: Column of the table used to split work units\n :param where: WHERE clause to use during import\n :param direct: Use direct connector if exists for the database\n :param driver: Manually specify JDBC driver class to use\n :param extra_import_options: Extra import options to pass as dict.\n If a key doesn't have a value, just pass an empty string to it.\n Don't include prefix of -- for sqoop options."}
{"_id": "q_299", "text": "Retrieves connection to Cloud Text to Speech.\n\n :return: Google Cloud Text to Speech client object.\n :rtype: google.cloud.texttospeech_v1.TextToSpeechClient"}
{"_id": "q_300", "text": "Close and upload local log file to remote storage S3."}
{"_id": "q_301", "text": "When using git to retrieve the DAGs, use the GitSync Init Container"}
{"_id": "q_302", "text": "Defines any necessary environment variables for the pod executor"}
{"_id": "q_303", "text": "Defines any necessary secrets for the pod executor"}
{"_id": "q_304", "text": "Defines the security context"}
{"_id": "q_305", "text": "Heartbeats update the job's entry in the database with a timestamp\n for the latest_heartbeat and allows for the job to be killed\n externally. This allows at the system level to monitor what is\n actually active.\n\n For instance, an old heartbeat for SchedulerJob would mean something\n is wrong.\n\n This also allows for any job to be killed externally, regardless\n of who is running it or on which machine it is running.\n\n Note that if your heartbeat is set to 60 seconds and you call this\n method after 10 seconds of processing since the last heartbeat, it\n will sleep 50 seconds to complete the 60 seconds and keep a steady\n heart rate. If you go over 60 seconds before calling it, it won't\n sleep at all."}
{"_id": "q_306", "text": "Launch the process and start processing the DAG."}
{"_id": "q_307", "text": "Check if the process launched to process this file is done.\n\n :return: whether the process is finished running\n :rtype: bool"}
{"_id": "q_308", "text": "Helper method to clean up processor_agent to avoid leaving orphan processes."}
{"_id": "q_309", "text": "For the DAGs in the given DagBag, record any associated import errors and clears\n errors for files that no longer have them. These are usually displayed through the\n Airflow UI so that users know that there are issues parsing DAGs.\n\n :param session: session for ORM operations\n :type session: sqlalchemy.orm.session.Session\n :param dagbag: DagBag containing DAGs with import errors\n :type dagbag: airflow.models.DagBag"}
{"_id": "q_310", "text": "Get the concurrency maps.\n\n :param states: List of states to query for\n :type states: list[airflow.utils.state.State]\n :return: A map from (dag_id, task_id) to # of task instances and\n a map from (dag_id, task_id) to # of task instances in the given state list\n :rtype: dict[tuple[str, str], int]"}
{"_id": "q_311", "text": "Changes the state of task instances in the list with one of the given states\n to QUEUED atomically, and returns the TIs changed in SimpleTaskInstance format.\n\n :param task_instances: TaskInstances to change the state of\n :type task_instances: list[airflow.models.TaskInstance]\n :param acceptable_states: Filters the TaskInstances updated to be in these states\n :type acceptable_states: Iterable[State]\n :rtype: list[airflow.utils.dag_processing.SimpleTaskInstance]"}
{"_id": "q_312", "text": "Takes task_instances, which should have been set to queued, and enqueues them\n with the executor.\n\n :param simple_task_instances: TaskInstances to enqueue\n :type simple_task_instances: list[SimpleTaskInstance]\n :param simple_dag_bag: Should contains all of the task_instances' dags\n :type simple_dag_bag: airflow.utils.dag_processing.SimpleDagBag"}
{"_id": "q_313", "text": "Attempts to execute TaskInstances that should be executed by the scheduler.\n\n There are three steps:\n 1. Pick TIs by priority with the constraint that they are in the expected states\n and that we do exceed max_active_runs or pool limits.\n 2. Change the state for the TIs above atomically.\n 3. Enqueue the TIs in the executor.\n\n :param simple_dag_bag: TaskInstances associated with DAGs in the\n simple_dag_bag will be fetched from the DB and executed\n :type simple_dag_bag: airflow.utils.dag_processing.SimpleDagBag\n :param states: Execute TaskInstances in these states\n :type states: tuple[airflow.utils.state.State]\n :return: Number of task instance with state changed."}
{"_id": "q_314", "text": "If there are tasks left over in the executor,\n we set them back to SCHEDULED to avoid creating hanging tasks.\n\n :param session: session for ORM operations"}
{"_id": "q_315", "text": "Process a Python file containing Airflow DAGs.\n\n This includes:\n\n 1. Execute the file and look for DAG objects in the namespace.\n 2. Pickle the DAG and save it to the DB (if necessary).\n 3. For each DAG, see what tasks should run and create appropriate task\n instances in the DB.\n 4. Record any errors importing the file into ORM\n 5. Kill (in ORM) any task instances belonging to the DAGs that haven't\n issued a heartbeat in a while.\n\n Returns a list of SimpleDag objects that represent the DAGs found in\n the file\n\n :param file_path: the path to the Python file that should be executed\n :type file_path: unicode\n :param zombies: zombie task instances to kill.\n :type zombies: list[airflow.utils.dag_processing.SimpleTaskInstance]\n :param pickle_dags: whether serialize the DAGs found in the file and\n save them to the db\n :type pickle_dags: bool\n :return: a list of SimpleDags made from the Dags found in the file\n :rtype: list[airflow.utils.dag_processing.SimpleDagBag]"}
{"_id": "q_316", "text": "Updates the counters per state of the tasks that were running. Can re-add\n to tasks to run in case required.\n\n :param ti_status: the internal status of the backfill job tasks\n :type ti_status: BackfillJob._DagRunTaskStatus"}
{"_id": "q_317", "text": "Returns a dag run for the given run date, which will be matched to an existing\n dag run if available or create a new dag run otherwise. If the max_active_runs\n limit is reached, this function will return None.\n\n :param run_date: the execution date for the dag run\n :type run_date: datetime.datetime\n :param session: the database session object\n :type session: sqlalchemy.orm.session.Session\n :return: a DagRun in state RUNNING or None"}
{"_id": "q_318", "text": "Returns a map of task instance key to task instance object for the tasks to\n run in the given dag run.\n\n :param dag_run: the dag run to get the tasks from\n :type dag_run: airflow.models.DagRun\n :param session: the database session object\n :type session: sqlalchemy.orm.session.Session"}
{"_id": "q_319", "text": "Computes the dag runs and their respective task instances for\n the given run dates and executes the task instances.\n Returns a list of execution dates of the dag runs that were executed.\n\n :param run_dates: Execution dates for dag runs\n :type run_dates: list\n :param ti_status: internal BackfillJob status structure to tis track progress\n :type ti_status: BackfillJob._DagRunTaskStatus\n :param executor: the executor to use, it must be previously started\n :type executor: BaseExecutor\n :param pickle_id: numeric id of the pickled dag, None if not pickled\n :type pickle_id: int\n :param start_date: backfill start date\n :type start_date: datetime.datetime\n :param session: the current session object\n :type session: sqlalchemy.orm.session.Session"}
{"_id": "q_320", "text": "Self destruct task if state has been moved away from running externally"}
{"_id": "q_321", "text": "Provides a client for interacting with the Cloud Spanner API.\n\n :param project_id: The ID of the GCP project.\n :type project_id: str\n :return: google.cloud.spanner_v1.client.Client\n :rtype: object"}
{"_id": "q_322", "text": "Gets information about a particular instance.\n\n :param project_id: Optional, The ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :return: google.cloud.spanner_v1.instance.Instance\n :rtype: object"}
{"_id": "q_323", "text": "Invokes a method on a given instance by applying a specified Callable.\n\n :param project_id: The ID of the GCP project that owns the Cloud Spanner\n database.\n :type project_id: str\n :param instance_id: The ID of the instance.\n :type instance_id: str\n :param configuration_name: Name of the instance configuration defining how the\n instance will be created. Required for instances which do not yet exist.\n :type configuration_name: str\n :param node_count: (Optional) Number of nodes allocated to the instance.\n :type node_count: int\n :param display_name: (Optional) The display name for the instance in the Cloud\n Console UI. (Must be between 4 and 30 characters.) If this value is not set\n in the constructor, will fall back to the instance ID.\n :type display_name: str\n :param func: Method of the instance to be called.\n :type func: Callable"}
{"_id": "q_324", "text": "Creates a new Cloud Spanner instance.\n\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param configuration_name: The name of the instance configuration defining how the\n instance will be created. Possible configuration values can be retrieved via\n https://cloud.google.com/spanner/docs/reference/rest/v1/projects.instanceConfigs/list\n :type configuration_name: str\n :param node_count: (Optional) The number of nodes allocated to the Cloud Spanner\n instance.\n :type node_count: int\n :param display_name: (Optional) The display name for the instance in the GCP\n Console. Must be between 4 and 30 characters. If this value is not set in\n the constructor, the name falls back to the instance ID.\n :type display_name: str\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_325", "text": "Updates an existing Cloud Spanner instance.\n\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param configuration_name: The name of the instance configuration defining how the\n instance will be created. Possible configuration values can be retrieved via\n https://cloud.google.com/spanner/docs/reference/rest/v1/projects.instanceConfigs/list\n :type configuration_name: str\n :param node_count: (Optional) The number of nodes allocated to the Cloud Spanner\n instance.\n :type node_count: int\n :param display_name: (Optional) The display name for the instance in the GCP\n Console. Must be between 4 and 30 characters. If this value is not set in\n the constructor, the name falls back to the instance ID.\n :type display_name: str\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_326", "text": "Deletes an existing Cloud Spanner instance.\n\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"}
{"_id": "q_327", "text": "Retrieves a database in Cloud Spanner. If the database does not exist\n in the specified instance, it returns None.\n\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param database_id: The ID of the database in Cloud Spanner.\n :type database_id: str\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: Database object or None if database does not exist\n :rtype: google.cloud.spanner_v1.database.Database or None"}
{"_id": "q_328", "text": "Creates a new database in Cloud Spanner.\n\n :type project_id: str\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param database_id: The ID of the database to create in Cloud Spanner.\n :type database_id: str\n :param ddl_statements: The string list containing DDL for the new database.\n :type ddl_statements: list[str]\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :return: None"}
{"_id": "q_329", "text": "Updates DDL of a database in Cloud Spanner.\n\n :type project_id: str\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param database_id: The ID of the database in Cloud Spanner.\n :type database_id: str\n :param ddl_statements: The string list containing DDL for the new database.\n :type ddl_statements: list[str]\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :param operation_id: (Optional) The unique per database operation ID that can be\n specified to implement idempotency check.\n :type operation_id: str\n :return: None"}
{"_id": "q_330", "text": "Returns a cassandra Session object"}
{"_id": "q_331", "text": "Checks if a record exists in Cassandra\n\n :param table: Target Cassandra table.\n Use dot notation to target a specific keyspace.\n :type table: str\n :param keys: The keys and their values to check the existence.\n :type keys: dict"}
{"_id": "q_332", "text": "Construct the command to poll the driver status.\n\n :return: full command to be executed"}
{"_id": "q_333", "text": "Remote Popen to execute the spark-submit job\n\n :param application: Submitted application, jar or py file\n :type application: str\n :param kwargs: extra arguments to Popen (see subprocess.Popen)"}
{"_id": "q_334", "text": "Processes the log files and extracts useful information out of it.\n\n If the deploy-mode is 'client', log the output of the submit command as those\n are the output logs of the Spark worker directly.\n\n Remark: If the driver needs to be tracked for its status, the log-level of the\n spark deploy needs to be at least INFO (log4j.logger.org.apache.spark.deploy=INFO)\n\n :param itr: An iterator which iterates over the input of the subprocess"}
{"_id": "q_335", "text": "parses the logs of the spark driver status query process\n\n :param itr: An iterator which iterates over the input of the subprocess"}
{"_id": "q_336", "text": "Get the task runner that can be used to run the given job.\n\n :param local_task_job: The LocalTaskJob associated with the TaskInstance\n that needs to be executed.\n :type local_task_job: airflow.jobs.LocalTaskJob\n :return: The task runner to use to run the task.\n :rtype: airflow.task.task_runner.base_task_runner.BaseTaskRunner"}
{"_id": "q_337", "text": "Try to use a waiter from the below pull request\n\n * https://github.com/boto/botocore/pull/1307\n\n If the waiter is not available apply a exponential backoff\n\n * docs.aws.amazon.com/general/latest/gr/api-retries.html"}
{"_id": "q_338", "text": "Queries mysql and returns a cursor to the results."}
{"_id": "q_339", "text": "Configure a csv writer with the file_handle and write schema\n as headers for the new file."}
{"_id": "q_340", "text": "Return a dict of column name and column type based on self.schema if not None."}
{"_id": "q_341", "text": "Execute sqoop job"}
{"_id": "q_342", "text": "Returns the extra property by deserializing json."}
{"_id": "q_343", "text": "Get a set of dates as a list based on a start, end and delta, delta\n can be something that can be added to `datetime.datetime`\n or a cron expression as a `str`\n\n :Example::\n\n date_range(datetime(2016, 1, 1), datetime(2016, 1, 3), delta=timedelta(1))\n [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 1, 2, 0, 0),\n datetime.datetime(2016, 1, 3, 0, 0)]\n date_range(datetime(2016, 1, 1), datetime(2016, 1, 3), delta='0 0 * * *')\n [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 1, 2, 0, 0),\n datetime.datetime(2016, 1, 3, 0, 0)]\n date_range(datetime(2016, 1, 1), datetime(2016, 3, 3), delta=\"0 0 0 * *\")\n [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 2, 1, 0, 0),\n datetime.datetime(2016, 3, 1, 0, 0)]\n\n :param start_date: anchor date to start the series from\n :type start_date: datetime.datetime\n :param end_date: right boundary for the date range\n :type end_date: datetime.datetime\n :param num: alternatively to end_date, you can specify the number of\n number of entries you want in the range. This number can be negative,\n output will always be sorted regardless\n :type num: int"}
{"_id": "q_344", "text": "Get a datetime object representing `n` days ago. By default the time is\n set to midnight."}
{"_id": "q_345", "text": "Initialize the role with the permissions and related view-menus.\n\n :param role_name:\n :param role_vms:\n :param role_perms:\n :return:"}
{"_id": "q_346", "text": "Delete the given Role\n\n :param role_name: the name of a role in the ab_role table"}
{"_id": "q_347", "text": "Get all the roles associated with the user.\n\n :param user: the ab_user in FAB model.\n :return: a list of roles associated with the user."}
{"_id": "q_348", "text": "Returns a set of tuples with the perm name and view menu name"}
{"_id": "q_349", "text": "Whether the user has this role name"}
{"_id": "q_350", "text": "Whether the user has this perm"}
{"_id": "q_351", "text": "FAB leaves faulty permissions that need to be cleaned up"}
{"_id": "q_352", "text": "Add the new permission , view_menu to ab_permission_view_role if not exists.\n It will add the related entry to ab_permission\n and ab_view_menu two meta tables as well.\n\n :param permission_name: Name of the permission.\n :type permission_name: str\n :param view_menu_name: Name of the view-menu\n :type view_menu_name: str\n :return:"}
{"_id": "q_353", "text": "Create perm-vm if not exist and insert into FAB security model for all-dags."}
{"_id": "q_354", "text": "Deferred load of Fernet key.\n\n This function could fail either because Cryptography is not installed\n or because the Fernet key is invalid.\n\n :return: Fernet object\n :raises: airflow.exceptions.AirflowException if there's a problem trying to load Fernet"}
{"_id": "q_355", "text": "Checks for existence of the partition in the AWS Glue Catalog table"}
{"_id": "q_356", "text": "Check for message on subscribed queue and write to xcom the message with key ``messages``\n\n :param context: the context object\n :type context: dict\n :return: ``True`` if message is available or ``False``"}
{"_id": "q_357", "text": "Returns a snakebite HDFSClient object."}
{"_id": "q_358", "text": "Establishes a connection depending on the security mode set via config or environment variable.\n\n :return: a hdfscli InsecureClient or KerberosClient object.\n :rtype: hdfs.InsecureClient or hdfs.ext.kerberos.KerberosClient"}
{"_id": "q_359", "text": "Check for the existence of a path in HDFS by querying FileStatus.\n\n :param hdfs_path: The path to check.\n :type hdfs_path: str\n :return: True if the path exists and False if not.\n :rtype: bool"}
{"_id": "q_360", "text": "r\"\"\"\n Uploads a file to HDFS.\n\n :param source: Local path to file or folder.\n If it's a folder, all the files inside of it will be uploaded.\n .. note:: This implies that folders empty of files will not be created remotely.\n\n :type source: str\n :param destination: PTarget HDFS path.\n If it already exists and is a directory, files will be uploaded inside.\n :type destination: str\n :param overwrite: Overwrite any existing file or directory.\n :type overwrite: bool\n :param parallelism: Number of threads to use for parallelization.\n A value of `0` (or negative) uses as many threads as there are files.\n :type parallelism: int\n :param \\**kwargs: Keyword arguments forwarded to :meth:`hdfs.client.Client.upload`."}
{"_id": "q_361", "text": "Establish a connection to pinot broker through pinot dbqpi."}
{"_id": "q_362", "text": "Get the connection uri for pinot broker.\n\n e.g: http://localhost:9000/pql"}
{"_id": "q_363", "text": "Convert native python ``datetime.time`` object to a format supported by the API"}
{"_id": "q_364", "text": "Executes the sql and returns a pandas dataframe\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param parameters: The parameters to render the SQL query with.\n :type parameters: mapping or iterable"}
{"_id": "q_365", "text": "A generic way to insert a set of tuples into a table,\n a new transaction is created every commit_every rows\n\n :param table: Name of the target table\n :type table: str\n :param rows: The rows to insert into the table\n :type rows: iterable of tuples\n :param target_fields: The names of the columns to fill in the table\n :type target_fields: iterable of strings\n :param commit_every: The maximum number of rows to insert in one\n transaction. Set to 0 to insert all rows in one transaction.\n :type commit_every: int\n :param replace: Whether to replace instead of insert\n :type replace: bool"}
{"_id": "q_366", "text": "An endpoint helping check the health status of the Airflow instance,\n including metadatabase and scheduler."}
{"_id": "q_367", "text": "A restful endpoint that returns external links for a given Operator\n\n It queries the operator that sent the request for the links it wishes\n to provide for a given external link name.\n\n API: GET\n Args: dag_id: The id of the dag containing the task in question\n task_id: The id of the task in question\n execution_date: The date of execution of the task\n link_name: The name of the link reference to find the actual URL for\n\n Returns:\n 200: {url: <url of link>, error: None} - returned when there was no problem\n finding the URL\n 404: {url: None, error: <error message>} - returned when the operator does\n not return a URL"}
{"_id": "q_368", "text": "Opens a connection to the cloudant service and closes it automatically if used as context manager.\n\n .. note::\n In the connection form:\n - 'host' equals the 'Account' (optional)\n - 'login' equals the 'Username (or API Key)' (required)\n - 'password' equals the 'Password' (required)\n\n :return: an authorized cloudant session context manager object.\n :rtype: cloudant"}
{"_id": "q_369", "text": "Call the SlackWebhookHook to post the provided Slack message"}
{"_id": "q_370", "text": "A list of states indicating that a task either has not completed\n a run or has not even started."}
{"_id": "q_371", "text": "Save model to a pickle located at `path`"}
{"_id": "q_372", "text": "CNN from Nature paper."}
{"_id": "q_373", "text": "convolutions-only net\n\n Parameters:\n ----------\n\n conv: list of triples (filter_number, filter_size, stride) specifying parameters for each layer.\n\n Returns:\n\n function that takes tensorflow tensor as input and returns the output of the last convolutional layer"}
{"_id": "q_374", "text": "Create a wrapped, monitored SubprocVecEnv for Atari and MuJoCo."}
{"_id": "q_375", "text": "Create placeholder to feed observations into of the size appropriate to the observation space\n\n Parameters:\n ----------\n\n ob_space: gym.Space observation space\n\n batch_size: int size of the batch to be fed into input. Can be left None in most cases.\n\n name: str name of the placeholder\n\n Returns:\n -------\n\n tensorflow placeholder tensor"}
{"_id": "q_376", "text": "Create placeholder to feed observations into of the size appropriate to the observation space, and add input\n encoder of the appropriate type."}
{"_id": "q_377", "text": "Deep-copy an observation dict."}
{"_id": "q_378", "text": "Calculates q_retrace targets\n\n :param R: Rewards\n :param D: Dones\n :param q_i: Q values for actions taken\n :param v: V values\n :param rho_i: Importance weight for each action\n :return: Q_retrace values"}
{"_id": "q_379", "text": "See Schedule.value"}
{"_id": "q_380", "text": "Control a single environment instance using IPC and\n shared memory."}
{"_id": "q_381", "text": "Main entrypoint for A2C algorithm. Train a policy with given network architecture on a given environment using a2c algorithm.\n\n Parameters:\n -----------\n\n network: policy network architecture. Either string (mlp, lstm, lnlstm, cnn_lstm, cnn, cnn_small, conv_only - see baselines.common/models.py for full list)\n specifying the standard network architecture, or a function that takes tensorflow tensor as input and returns\n tuple (output_tensor, extra_feed) where output tensor is the last network layer output, extra_feed is None for feed-forward\n neural nets, and extra_feed is a dictionary describing how to feed state into the network for recurrent neural nets.\n See baselines.common/policies.py/lstm for more details on using recurrent nets in policies\n\n\n env: RL environment. Should implement interface similar to VecEnv (baselines.common/vec_env) or be wrapped with DummyVecEnv (baselines.common/vec_env/dummy_vec_env.py)\n\n\n seed: seed to make random number sequence in the alorightm reproducible. By default is None which means seed from system noise generator (not reproducible)\n\n nsteps: int, number of steps of the vectorized environment per update (i.e. batch size is nsteps * nenv where\n nenv is number of environment copies simulated in parallel)\n\n total_timesteps: int, total number of timesteps to train on (default: 80M)\n\n vf_coef: float, coefficient in front of value function loss in the total loss function (default: 0.5)\n\n ent_coef: float, coeffictiant in front of the policy entropy in the total loss function (default: 0.01)\n\n max_gradient_norm: float, gradient is clipped to have global L2 norm no more than this value (default: 0.5)\n\n lr: float, learning rate for RMSProp (current implementation has RMSProp hardcoded in) (default: 7e-4)\n\n lrschedule: schedule of learning rate. Can be 'linear', 'constant', or a function [0..1] -> [0..1] that takes fraction of the training progress as input and\n returns fraction of the learning rate (specified as lr) as output\n\n epsilon: float, RMSProp epsilon (stabilizes square root computation in denominator of RMSProp update) (default: 1e-5)\n\n alpha: float, RMSProp decay parameter (default: 0.99)\n\n gamma: float, reward discounting parameter (default: 0.99)\n\n log_interval: int, specifies how frequently the logs are printed out (default: 100)\n\n **network_kwargs: keyword arguments to the policy / network builder. See baselines.common/policies.py/build_policy and arguments to a particular type of network\n For instance, 'mlp' network architecture has arguments num_hidden and num_layers."}
{"_id": "q_382", "text": "swap and then flatten axes 0 and 1"}
{"_id": "q_383", "text": "Print the number of seconds in human readable format.\n\n Examples:\n 2 days\n 2 hours and 37 minutes\n less than a minute\n\n Paramters\n ---------\n seconds_left: int\n Number of seconds to be converted to the ETA\n Returns\n -------\n eta: str\n String representing the pretty ETA."}
{"_id": "q_384", "text": "Add a boolean flag to argparse parser.\n\n Parameters\n ----------\n parser: argparse.Parser\n parser to add the flag to\n name: str\n --<name> will enable the flag, while --no-<name> will disable it\n default: bool or None\n default value of the flag\n help: str\n help string for the flag"}
{"_id": "q_385", "text": "Stores provided method args as instance attributes."}
{"_id": "q_386", "text": "Flattens a variables and their gradients."}
{"_id": "q_387", "text": "Creates a simple neural network"}
{"_id": "q_388", "text": "Re-launches the current script with workers\n Returns \"parent\" for original parent, \"child\" for MPI children"}
{"_id": "q_389", "text": "Get default session or create one with a given config"}
{"_id": "q_390", "text": "Initialize all the uninitialized variables in the global scope."}
{"_id": "q_391", "text": "adjust shape of the data to the shape of the placeholder if possible.\n If shape is incompatible, AssertionError is thrown\n\n Parameters:\n placeholder tensorflow input placeholder\n\n data input data to be (potentially) reshaped to be fed into placeholder\n\n Returns:\n reshaped data"}
{"_id": "q_392", "text": "Configure environment for DeepMind-style Atari."}
{"_id": "q_393", "text": "Count the GPUs on this machine."}
{"_id": "q_394", "text": "Set CUDA_VISIBLE_DEVICES to MPI rank if not already set"}
{"_id": "q_395", "text": "Copies the file from rank 0 to all other ranks\n Puts it in the same place on all machines"}
{"_id": "q_396", "text": "computes discounted sums along 0th dimension of x.\n\n inputs\n ------\n x: ndarray\n gamma: float\n\n outputs\n -------\n y: ndarray with same shape as x, satisfying\n\n y[t] = x[t] + gamma*x[t+1] + gamma^2*x[t+2] + ... + gamma^k x[t+k],\n where k = len(x) - t - 1"}
{"_id": "q_397", "text": "See ReplayBuffer.store_effect"}
{"_id": "q_398", "text": "Update priorities of sampled transitions.\n\n sets priority of transition at index idxes[i] in buffer\n to priorities[i].\n\n Parameters\n ----------\n idxes: [int]\n List of idxes of sampled transitions\n priorities: [float]\n List of updated priorities corresponding to\n transitions at the sampled idxes denoted by\n variable `idxes`."}
{"_id": "q_399", "text": "Configure environment for retro games, using config similar to DeepMind-style Atari in wrap_deepmind"}
{"_id": "q_400", "text": "Creates a sample function that can be used for HER experience replay.\n\n Args:\n replay_strategy (in ['future', 'none']): the HER replay strategy; if set to 'none',\n regular DDPG experience replay is used\n replay_k (int): the ratio between HER replays and regular replays (e.g. k = 4 -> 4 times\n as many HER replays as regular replays are used)\n reward_fun (function): function to re-compute the reward with substituted goals"}
{"_id": "q_401", "text": "Estimate the geometric median of points in 2D.\n\n Code from https://stackoverflow.com/a/30305181\n\n Parameters\n ----------\n X : (N,2) ndarray\n Points in 2D. Second axis must be given in xy-form.\n\n eps : float, optional\n Distance threshold when to return the median.\n\n Returns\n -------\n (2,) ndarray\n Geometric median as xy-coordinate."}
{"_id": "q_402", "text": "Project the keypoint onto a new position on a new image.\n\n E.g. if the keypoint is on its original image at x=(10 of 100 pixels)\n and y=(20 of 100 pixels) and is projected onto a new image with\n size (width=200, height=200), its new position will be (20, 40).\n\n This is intended for cases where the original image is resized.\n It cannot be used for more complex changes (e.g. padding, cropping).\n\n Parameters\n ----------\n from_shape : tuple of int\n Shape of the original image. (Before resize.)\n\n to_shape : tuple of int\n Shape of the new image. (After resize.)\n\n Returns\n -------\n imgaug.Keypoint\n Keypoint object with new coordinates."}
{"_id": "q_403", "text": "Move the keypoint around on an image.\n\n Parameters\n ----------\n x : number, optional\n Move by this value on the x axis.\n\n y : number, optional\n Move by this value on the y axis.\n\n Returns\n -------\n imgaug.Keypoint\n Keypoint object with new coordinates."}
{"_id": "q_404", "text": "Draw the keypoint onto a given image.\n\n The keypoint is drawn as a square.\n\n Parameters\n ----------\n image : (H,W,3) ndarray\n The image onto which to draw the keypoint.\n\n color : int or list of int or tuple of int or (3,) ndarray, optional\n The RGB color of the keypoint. If a single int ``C``, then that is\n equivalent to ``(C,C,C)``.\n\n alpha : float, optional\n The opacity of the drawn keypoint, where ``1.0`` denotes a fully\n visible keypoint and ``0.0`` an invisible one.\n\n size : int, optional\n The size of the keypoint. If set to ``S``, each square will have\n size ``S x S``.\n\n copy : bool, optional\n Whether to copy the image before drawing the keypoint.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an exception if the keypoint is outside of the\n image.\n\n Returns\n -------\n image : (H,W,3) ndarray\n Image with drawn keypoint."}
{"_id": "q_405", "text": "Create a shallow copy of the Keypoint object.\n\n Parameters\n ----------\n x : None or number, optional\n Coordinate of the keypoint on the x axis.\n If ``None``, the instance's value will be copied.\n\n y : None or number, optional\n Coordinate of the keypoint on the y axis.\n If ``None``, the instance's value will be copied.\n\n Returns\n -------\n imgaug.Keypoint\n Shallow copy."}
{"_id": "q_406", "text": "Create a deep copy of the Keypoint object.\n\n Parameters\n ----------\n x : None or number, optional\n Coordinate of the keypoint on the x axis.\n If ``None``, the instance's value will be copied.\n\n y : None or number, optional\n Coordinate of the keypoint on the y axis.\n If ``None``, the instance's value will be copied.\n\n Returns\n -------\n imgaug.Keypoint\n Deep copy."}
{"_id": "q_407", "text": "Project keypoints from one image to a new one.\n\n Parameters\n ----------\n image : ndarray or tuple of int\n New image onto which the keypoints are to be projected.\n May also simply be that new image's shape tuple.\n\n Returns\n -------\n keypoints : imgaug.KeypointsOnImage\n Object containing all projected keypoints."}
{"_id": "q_408", "text": "Move the keypoints around on an image.\n\n Parameters\n ----------\n x : number, optional\n Move each keypoint by this value on the x axis.\n\n y : number, optional\n Move each keypoint by this value on the y axis.\n\n Returns\n -------\n out : KeypointsOnImage\n Keypoints after moving them."}
{"_id": "q_409", "text": "Create a shallow copy of the KeypointsOnImage object.\n\n Parameters\n ----------\n keypoints : None or list of imgaug.Keypoint, optional\n List of keypoints on the image. If ``None``, the instance's\n keypoints will be copied.\n\n shape : tuple of int, optional\n The shape of the image on which the keypoints are placed.\n If ``None``, the instance's shape will be copied.\n\n Returns\n -------\n imgaug.KeypointsOnImage\n Shallow copy."}
{"_id": "q_410", "text": "Compute the intersection bounding box of this bounding box and another one.\n\n Note that in extreme cases, the intersection can be a single point, meaning that the intersection bounding box\n will exist, but then also has a height and width of zero.\n\n Parameters\n ----------\n other : imgaug.BoundingBox\n Other bounding box with which to generate the intersection.\n\n default : any, optional\n Default value to return if there is no intersection.\n\n Returns\n -------\n imgaug.BoundingBox or any\n Intersection bounding box of the two bounding boxes if there is an intersection.\n If there is no intersection, the default value will be returned, which can by anything."}
{"_id": "q_411", "text": "Compute the union bounding box of this bounding box and another one.\n\n This is equivalent to drawing a bounding box around all corners points of both\n bounding boxes.\n\n Parameters\n ----------\n other : imgaug.BoundingBox\n Other bounding box with which to generate the union.\n\n Returns\n -------\n imgaug.BoundingBox\n Union bounding box of the two bounding boxes."}
{"_id": "q_412", "text": "Estimate whether the bounding box is at least partially inside the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape\n and must contain at least two integers.\n\n Returns\n -------\n bool\n True if the bounding box is at least partially inside the image area. False otherwise."}
{"_id": "q_413", "text": "Estimate whether the bounding box is partially or fully outside of the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use. If an ndarray, its shape will be used. If a tuple, it is\n assumed to represent the image shape and must contain at least two integers.\n\n fully : bool, optional\n Whether to return True if the bounding box is fully outside fo the image area.\n\n partly : bool, optional\n Whether to return True if the bounding box is at least partially outside fo the\n image area.\n\n Returns\n -------\n bool\n True if the bounding box is partially/fully outside of the image area, depending\n on defined parameters. False otherwise."}
{"_id": "q_414", "text": "Clip off all parts of the bounding box that are outside of the image.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use for the clipping of the bounding box.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape and must contain at least two integers.\n\n Returns\n -------\n result : imgaug.BoundingBox\n Bounding box, clipped to fall within the image dimensions."}
{"_id": "q_415", "text": "Draw the bounding box on an image.\n\n Parameters\n ----------\n image : (H,W,C) ndarray(uint8)\n The image onto which to draw the bounding box.\n\n color : iterable of int, optional\n The color to use, corresponding to the channel layout of the image. Usually RGB.\n\n alpha : float, optional\n The transparency of the drawn bounding box, where 1.0 denotes no transparency and\n 0.0 is invisible.\n\n size : int, optional\n The thickness of the bounding box in pixels. If the value is larger than 1, then\n additional pixels will be added around the bounding box (i.e. extension towards the\n outside).\n\n copy : bool, optional\n Whether to copy the input image or change it in-place.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the bounding box is fully outside of the\n image. If set to False, no error will be raised and only the parts inside the image\n will be drawn.\n\n thickness : None or int, optional\n Deprecated.\n\n Returns\n -------\n result : (H,W,C) ndarray(uint8)\n Image with bounding box drawn on it."}
{"_id": "q_416", "text": "Extract the image pixels within the bounding box.\n\n This function will zero-pad the image if the bounding box is partially/fully outside of\n the image.\n\n Parameters\n ----------\n image : (H,W) ndarray or (H,W,C) ndarray\n The image from which to extract the pixels within the bounding box.\n\n pad : bool, optional\n Whether to zero-pad the image if the object is partially/fully\n outside of it.\n\n pad_max : None or int, optional\n The maximum number of pixels that may be zero-paded on any side,\n i.e. if this has value ``N`` the total maximum of added pixels\n is ``4*N``.\n This option exists to prevent extremely large images as a result of\n single points being moved very far away during augmentation.\n\n prevent_zero_size : bool, optional\n Whether to prevent height or width of the extracted image from becoming zero.\n If this is set to True and height or width of the bounding box is below 1, the height/width will\n be increased to 1. This can be useful to prevent problems, e.g. with image saving or plotting.\n If it is set to False, images will be returned as ``(H', W')`` or ``(H', W', 3)`` with ``H`` or\n ``W`` potentially being 0.\n\n Returns\n -------\n image : (H',W') ndarray or (H',W',C) ndarray\n Pixels within the bounding box. Zero-padded if the bounding box is partially/fully\n outside of the image. If prevent_zero_size is activated, it is guarantueed that ``H'>0``\n and ``W'>0``, otherwise only ``H'>=0`` and ``W'>=0``."}
{"_id": "q_417", "text": "Remove all bounding boxes that are fully or partially outside of the image.\n\n Parameters\n ----------\n fully : bool, optional\n Whether to remove bounding boxes that are fully outside of the image.\n\n partly : bool, optional\n Whether to remove bounding boxes that are partially outside of the image.\n\n Returns\n -------\n imgaug.BoundingBoxesOnImage\n Reduced set of bounding boxes, with those that were fully/partially outside of\n the image removed."}
{"_id": "q_418", "text": "Clip off all parts from all bounding boxes that are outside of the image.\n\n Returns\n -------\n imgaug.BoundingBoxesOnImage\n Bounding boxes, clipped to fall within the image dimensions."}
{"_id": "q_419", "text": "Augmenter that embosses images and overlays the result with the original\n image.\n\n The embossed version pronounces highlights and shadows,\n letting the image look as if it was recreated on a metal plate (\"embossed\").\n\n dtype support::\n\n See ``imgaug.augmenters.convolutional.Convolve``.\n\n Parameters\n ----------\n alpha : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Visibility of the sharpened image. At 0, only the original image is\n visible, at 1.0 only its sharpened version is visible.\n\n * If an int or float, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n strength : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Parameter that controls the strength of the embossing.\n Sane values are somewhere in the range ``(0, 2)`` with 1 being the standard\n embossing effect. 
Default value is 1.\n\n * If an int or float, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = Emboss(alpha=(0.0, 1.0), strength=(0.5, 1.5))\n\n embosses an image with a variable strength in the range ``0.5 <= x <= 1.5``\n and overlays the result with a variable alpha in the range ``0.0 <= a <= 1.0``\n over the old image."}
{"_id": "q_420", "text": "Augmenter that detects edges that have certain directions and marks them\n in a black and white image and then overlays the result with the original\n image.\n\n dtype support::\n\n See ``imgaug.augmenters.convolutional.Convolve``.\n\n Parameters\n ----------\n alpha : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Visibility of the sharpened image. At 0, only the original image is\n visible, at 1.0 only its sharpened version is visible.\n\n * If an int or float, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n direction : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Angle of edges to pronounce, where 0 represents 0 degrees and 1.0\n represents 360 degrees (both clockwise, starting at the top).\n Default value is ``(0.0, 1.0)``, i.e. pick a random angle per image.\n\n * If an int or float, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = DirectedEdgeDetect(alpha=1.0, direction=0)\n\n turns input images into edge images in which edges are detected from\n top side of the image (i.e. 
the top sides of horizontal edges are\n added to the output).\n\n >>> aug = DirectedEdgeDetect(alpha=1.0, direction=90/360)\n\n same as before, but detecting edges from the right (right side of each\n vertical edge).\n\n >>> aug = DirectedEdgeDetect(alpha=1.0, direction=(0.0, 1.0))\n\n same as before, but detecting edges from a variable direction (anything\n between 0 and 1.0, i.e. 0 degrees and 360 degrees, starting from the\n top and moving clockwise).\n\n >>> aug = DirectedEdgeDetect(alpha=(0.0, 0.3), direction=0)\n\n generates edge images (edges detected from the top) and overlays them\n with the input images by a variable amount between 0 and 30 percent\n (e.g. for 0.3 then ``0.7*old_image + 0.3*edge_image``)."}
{"_id": "q_421", "text": "Normalize a shape tuple or array to a shape tuple.\n\n Parameters\n ----------\n shape : tuple of int or ndarray\n The input to normalize. May optionally be an array.\n\n Returns\n -------\n tuple of int\n Shape tuple."}
{"_id": "q_422", "text": "Project coordinates from one image shape to another.\n\n This performs a relative projection, e.g. a point at 60% of the old\n image width will be at 60% of the new image width after projection.\n\n Parameters\n ----------\n coords : ndarray or tuple of number\n Coordinates to project. Either a ``(N,2)`` numpy array or a tuple\n of `(x,y)` coordinates.\n\n from_shape : tuple of int or ndarray\n Old image shape.\n\n to_shape : tuple of int or ndarray\n New image shape.\n\n Returns\n -------\n ndarray\n Projected coordinates as ``(N,2)`` ``float32`` numpy array."}
{"_id": "q_423", "text": "Create an augmenter to add poisson noise to images.\n\n Poisson noise is comparable to gaussian noise as in ``AdditiveGaussianNoise``, but the values are sampled from\n a poisson distribution instead of a gaussian distribution. As poisson distributions produce only positive numbers,\n the sign of the sampled values are here randomly flipped.\n\n Values of around ``10.0`` for `lam` lead to visible noise (for uint8).\n Values of around ``20.0`` for `lam` lead to very visible noise (for uint8).\n It is recommended to usually set `per_channel` to True.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.AddElementwise``.\n\n Parameters\n ----------\n lam : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Lambda parameter of the poisson distribution. Recommended values are around ``0.0`` to ``10.0``.\n\n * If a number, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n per_channel : bool or float, optional\n Whether to use the same noise value per pixel for all channels (False)\n or to sample a new value for each channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.AdditivePoissonNoise(lam=5.0)\n\n Adds poisson noise sampled from ``Poisson(5.0)`` to images.\n\n >>> aug = iaa.AdditivePoissonNoise(lam=(0.0, 
10.0))\n\n Adds poisson noise sampled from ``Poisson(x)`` to images, where ``x`` is randomly sampled per image from the\n interval ``[0.0, 10.0]``.\n\n >>> aug = iaa.AdditivePoissonNoise(lam=5.0, per_channel=True)\n\n Adds poisson noise sampled from ``Poisson(5.0)`` to images,\n where the values are different per pixel *and* channel (e.g. a\n different one for red, green and blue channels for the same pixel).\n\n >>> aug = iaa.AdditivePoissonNoise(lam=(0.0, 10.0), per_channel=True)\n\n Adds poisson noise sampled from ``Poisson(x)`` to images,\n with ``x`` being sampled from ``uniform(0.0, 10.0)`` per image, pixel and channel.\n This is the *recommended* configuration.\n\n >>> aug = iaa.AdditivePoissonNoise(lam=2, per_channel=0.5)\n\n Adds poisson noise sampled from the distribution ``Poisson(2)`` to images,\n where the values are sometimes (50 percent of all cases) the same\n per pixel for all channels and sometimes different (other 50 percent)."}
{"_id": "q_424", "text": "Augmenter that sets a certain fraction of pixels in images to zero.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.MultiplyElementwise``.\n\n Parameters\n ----------\n p : float or tuple of float or imgaug.parameters.StochasticParameter, optional\n The probability of any pixel being dropped (i.e. set to zero).\n\n * If a float, then that value will be used for all images. A value\n of 1.0 would mean that all pixels will be dropped and 0.0 that\n no pixels would be dropped. A value of 0.05 corresponds to 5\n percent of all pixels dropped.\n * If a tuple ``(a, b)``, then a value p will be sampled from the\n range ``a <= p <= b`` per image and be used as the pixel's dropout\n probability.\n * If a StochasticParameter, then this parameter will be used to\n determine per pixel whether it should be dropped (sampled value\n of 0) or shouldn't (sampled value of 1).\n If you instead want to provide the probability as a stochastic\n parameter, you can usually do ``imgaug.parameters.Binomial(1-p)``\n to convert parameter `p` to a 0/1 representation.\n\n per_channel : bool or float, optional\n Whether to use the same value (is dropped / is not dropped)\n for all channels of a pixel (False) or to sample a new value for each\n channel (True).\n If this value is a float p, then for p percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Dropout(0.02)\n\n drops 2 percent of all pixels.\n\n >>> aug = iaa.Dropout((0.0, 0.05))\n\n drops in each image a random fraction of all pixels, where the fraction\n is in the range ``0.0 <= x <= 0.05``.\n\n >>> aug = iaa.Dropout(0.02, 
per_channel=True)\n\n drops 2 percent of all pixels in a channel-wise fashion, i.e. it is unlikely\n for any pixel to have all channels set to zero (black pixels).\n\n >>> aug = iaa.Dropout(0.02, per_channel=0.5)\n\n same as previous example, but the `per_channel` feature is only active\n for 50 percent of all images."}
{"_id": "q_425", "text": "Creates an augmenter to apply impulse noise to an image.\n\n This is identical to ``SaltAndPepper``, except that per_channel is always set to True.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.SaltAndPepper``."}
{"_id": "q_426", "text": "Adds salt and pepper noise to an image, i.e. some white-ish and black-ish pixels.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.ReplaceElementwise``.\n\n Parameters\n ----------\n p : float or tuple of float or list of float or imgaug.parameters.StochasticParameter, optional\n Probability of changing a pixel to salt/pepper noise.\n\n * If a float, then that value will be used for all images as the\n probability.\n * If a tuple ``(a, b)``, then a probability will be sampled per image\n from the range ``a <= x <= b``.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, then this parameter will be used as\n the *mask*, i.e. it is expected to contain values between\n 0.0 and 1.0, where 1.0 means that salt/pepper is to be added\n at that location.\n\n per_channel : bool or float, optional\n Whether to use the same value for all channels (False)\n or to sample a new value for each channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.SaltAndPepper(0.05)\n\n Replaces 5 percent of all pixels with salt/pepper."}
{"_id": "q_427", "text": "Adds pepper noise to an image, i.e. black-ish pixels.\n\n This is similar to dropout, but slower and the black pixels are not uniformly black.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.ReplaceElementwise``.\n\n Parameters\n ----------\n p : float or tuple of float or list of float or imgaug.parameters.StochasticParameter, optional\n Probability of changing a pixel to pepper noise.\n\n * If a float, then that value will be used for all images as the\n probability.\n * If a tuple ``(a, b)``, then a probability will be sampled per image\n from the range ``a <= x <= b``.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, then this parameter will be used as\n the *mask*, i.e. it is expected to contain values between\n 0.0 and 1.0, where 1.0 means that pepper is to be added\n at that location.\n\n per_channel : bool or float, optional\n Whether to use the same value for all channels (False)\n or to sample a new value for each channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Pepper(0.05)\n\n Replaces 5 percent of all pixels with pepper."}
{"_id": "q_428", "text": "Adds coarse pepper noise to an image, i.e. rectangles that contain noisy black-ish pixels.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.ReplaceElementwise``.\n\n Parameters\n ----------\n p : float or tuple of float or list of float or imgaug.parameters.StochasticParameter, optional\n Probability of changing a pixel to pepper noise.\n\n * If a float, then that value will be used for all images as the\n probability.\n * If a tuple ``(a, b)``, then a probability will be sampled per image\n from the range ``a <= x <= b.``\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, then this parameter will be used as\n the *mask*, i.e. it is expected to contain values between\n 0.0 and 1.0, where 1.0 means that pepper is to be added\n at that location.\n\n size_px : int or tuple of int or imgaug.parameters.StochasticParameter, optional\n The size of the lower resolution image from which to sample the noise\n mask in absolute pixel dimensions.\n\n * If an integer, then that size will be used for both height and\n width. E.g. a value of 3 would lead to a ``3x3`` mask, which is then\n upsampled to ``HxW``, where ``H`` is the image size and W the image width.\n * If a tuple ``(a, b)``, then two values ``M``, ``N`` will be sampled from the\n range ``[a..b]`` and the mask will be generated at size ``MxN``, then\n upsampled to ``HxW``.\n * If a StochasticParameter, then this parameter will be used to\n determine the sizes. It is expected to be discrete.\n\n size_percent : float or tuple of float or imgaug.parameters.StochasticParameter, optional\n The size of the lower resolution image from which to sample the noise\n mask *in percent* of the input image.\n\n * If a float, then that value will be used as the percentage of the\n height and width (relative to the original size). E.g. 
for value\n p, the mask will be sampled from ``(p*H)x(p*W)`` and later upsampled\n to ``HxW``.\n * If a tuple ``(a, b)``, then two values ``m``, ``n`` will be sampled from the\n interval ``(a, b)`` and used as the percentages, i.e the mask size\n will be ``(m*H)x(n*W)``.\n * If a StochasticParameter, then this parameter will be used to\n sample the percentage values. It is expected to be continuous.\n\n per_channel : bool or float, optional\n Whether to use the same value (is dropped / is not dropped)\n for all channels of a pixel (False) or to sample a new value for each\n channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n min_size : int, optional\n Minimum size of the low resolution mask, both width and height. If\n `size_percent` or `size_px` leads to a lower value than this, `min_size`\n will be used instead. This should never have a value of less than 2,\n otherwise one may end up with a 1x1 low resolution mask, leading easily\n to the whole image being replaced.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.CoarsePepper(0.05, size_percent=(0.01, 0.1))\n\n Replaces 5 percent of all pixels with pepper in an image that has\n 1 to 10 percent of the input image size, then upscales the results\n to the input image size, leading to large rectangular areas being replaced."}
{"_id": "q_429", "text": "Augmenter that changes the contrast of images.\n\n dtype support:\n\n See ``imgaug.augmenters.contrast.LinearContrast``.\n\n Parameters\n ----------\n alpha : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Strength of the contrast normalization. Higher values than 1.0\n lead to higher contrast, lower values decrease the contrast.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value will be sampled per image from\n the range ``a <= x <= b`` and be used as the alpha value.\n * If a list, then a random value will be sampled per image from\n that list.\n * If a StochasticParameter, then this parameter will be used to\n sample the alpha value per image.\n\n per_channel : bool or float, optional\n Whether to use the same value for all channels (False)\n or to sample a new value for each channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> iaa.ContrastNormalization((0.5, 1.5))\n\n Decreases oder improves contrast per image by a random factor between\n 0.5 and 1.5. The factor 0.5 means that any difference from the center value\n (i.e. 128) will be halved, leading to less contrast.\n\n >>> iaa.ContrastNormalization((0.5, 1.5), per_channel=0.5)\n\n Same as before, but for 50 percent of all images the normalization is done\n independently per channel (i.e. factors can vary per channel for the same\n image). In the other 50 percent of all images, the factor is the same for\n all channels."}
{"_id": "q_430", "text": "Checks whether a variable is a numpy integer array.\n\n Parameters\n ----------\n val\n The variable to check.\n\n Returns\n -------\n bool\n True if the variable is a numpy integer array. Otherwise False."}
{"_id": "q_431", "text": "Checks whether a variable is a numpy float array.\n\n Parameters\n ----------\n val\n The variable to check.\n\n Returns\n -------\n bool\n True if the variable is a numpy float array. Otherwise False."}
{"_id": "q_432", "text": "Creates a copy of a random state.\n\n Parameters\n ----------\n random_state : numpy.random.RandomState\n The random state to copy.\n\n force_copy : bool, optional\n If True, this function will always create a copy of every random\n state. If False, it will not copy numpy's default random state,\n but all other random states.\n\n Returns\n -------\n rs_copy : numpy.random.RandomState\n The copied random state."}
{"_id": "q_433", "text": "Create N new random states based on an existing random state or seed.\n\n Parameters\n ----------\n random_state : numpy.random.RandomState\n Random state or seed from which to derive new random states.\n\n n : int, optional\n Number of random states to derive.\n\n Returns\n -------\n list of numpy.random.RandomState\n Derived random states."}
{"_id": "q_434", "text": "Generate a normalized rectangle to be extract from the standard quokka image.\n\n Parameters\n ----------\n extract : 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n Unnormalized representation of the image subarea to be extracted.\n\n * If string ``square``, then a squared area ``(x: 0 to max 643, y: 0 to max 643)``\n will be extracted from the image.\n * If a tuple, then expected to contain four numbers denoting ``x1``, ``y1``, ``x2``\n and ``y2``.\n * If a BoundingBox, then that bounding box's area will be extracted from the image.\n * If a BoundingBoxesOnImage, then expected to contain exactly one bounding box\n and a shape matching the full image dimensions (i.e. (643, 960, *)). Then the\n one bounding box will be used similar to BoundingBox.\n\n Returns\n -------\n bb : imgaug.BoundingBox\n Normalized representation of the area to extract from the standard quokka image."}
{"_id": "q_435", "text": "Computes the intended new shape of an image-like array after resizing.\n\n Parameters\n ----------\n from_shape : tuple or ndarray\n Old shape of the array. Usually expected to be a tuple of form ``(H, W)`` or ``(H, W, C)`` or\n alternatively an array with two or three dimensions.\n\n to_shape : None or tuple of ints or tuple of floats or int or float or ndarray\n New shape of the array.\n\n * If None, then `from_shape` will be used as the new shape.\n * If an int ``V``, then the new shape will be ``(V, V, [C])``, where ``C`` will be added if it\n is part of `from_shape`.\n * If a float ``V``, then the new shape will be ``(H*V, W*V, [C])``, where ``H`` and ``W`` are the old\n height/width.\n * If a tuple ``(H', W', [C'])`` of ints, then ``H'`` and ``W'`` will be used as the new height\n and width.\n * If a tuple ``(H', W', [C'])`` of floats (except ``C``), then ``H'`` and ``W'`` will\n be used as the new height and width.\n * If a numpy array, then the array's shape will be used.\n\n Returns\n -------\n to_shape_computed : tuple of int\n New shape."}
{"_id": "q_436", "text": "Returns an image of a quokka as a numpy array.\n\n Parameters\n ----------\n size : None or float or tuple of int, optional\n Size of the output image. Input into :func:`imgaug.imgaug.imresize_single_image`.\n Usually expected to be a tuple ``(H, W)``, where ``H`` is the desired height\n and ``W`` is the width. If None, then the image will not be resized.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n Subarea of the quokka image to extract:\n\n * If None, then the whole image will be used.\n * If string ``square``, then a squared area ``(x: 0 to max 643, y: 0 to max 643)`` will\n be extracted from the image.\n * If a tuple, then expected to contain four numbers denoting ``x1``, ``y1``, ``x2``\n and ``y2``.\n * If a BoundingBox, then that bounding box's area will be extracted from the image.\n * If a BoundingBoxesOnImage, then expected to contain exactly one bounding box\n and a shape matching the full image dimensions (i.e. ``(643, 960, *)``). Then the\n one bounding box will be used similar to BoundingBox.\n\n Returns\n -------\n img : (H,W,3) ndarray\n The image array of dtype uint8."}
{"_id": "q_437", "text": "Returns a segmentation map for the standard example quokka image.\n\n Parameters\n ----------\n size : None or float or tuple of int, optional\n See :func:`imgaug.quokka`.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n See :func:`imgaug.quokka`.\n\n Returns\n -------\n result : imgaug.SegmentationMapOnImage\n Segmentation map object."}
{"_id": "q_438", "text": "Returns example keypoints on the standard example quokke image.\n\n The keypoints cover the eyes, ears, nose and paws.\n\n Parameters\n ----------\n size : None or float or tuple of int or tuple of float, optional\n Size of the output image on which the keypoints are placed. If None, then the keypoints\n are not projected to any new size (positions on the original image are used).\n Floats lead to relative size changes, ints to absolute sizes in pixels.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n Subarea to extract from the image. See :func:`imgaug.quokka`.\n\n Returns\n -------\n kpsoi : imgaug.KeypointsOnImage\n Example keypoints on the quokka image."}
{"_id": "q_439", "text": "Returns example bounding boxes on the standard example quokke image.\n\n Currently only a single bounding box is returned that covers the quokka.\n\n Parameters\n ----------\n size : None or float or tuple of int or tuple of float, optional\n Size of the output image on which the BBs are placed. If None, then the BBs\n are not projected to any new size (positions on the original image are used).\n Floats lead to relative size changes, ints to absolute sizes in pixels.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n Subarea to extract from the image. See :func:`imgaug.quokka`.\n\n Returns\n -------\n bbsoi : imgaug.BoundingBoxesOnImage\n Example BBs on the quokka image."}
{"_id": "q_440", "text": "Returns example polygons on the standard example quokke image.\n\n The result contains one polygon, covering the quokka's outline.\n\n Parameters\n ----------\n size : None or float or tuple of int or tuple of float, optional\n Size of the output image on which the polygons are placed. If None,\n then the polygons are not projected to any new size (positions on the\n original image are used). Floats lead to relative size changes, ints\n to absolute sizes in pixels.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or \\\n imgaug.BoundingBoxesOnImage\n Subarea to extract from the image. See :func:`imgaug.quokka`.\n\n Returns\n -------\n psoi : imgaug.PolygonsOnImage\n Example polygons on the quokka image."}
{"_id": "q_441", "text": "Returns the angle in radians between vectors `v1` and `v2`.\n\n From http://stackoverflow.com/questions/2827393/angles-between-two-n-dimensional-vectors-in-python\n\n Parameters\n ----------\n v1 : (N,) ndarray\n First vector.\n\n v2 : (N,) ndarray\n Second vector.\n\n Returns\n -------\n out : float\n Angle in radians.\n\n Examples\n --------\n >>> angle_between_vectors(np.float32([1, 0, 0]), np.float32([0, 1, 0]))\n 1.570796...\n\n >>> angle_between_vectors(np.float32([1, 0, 0]), np.float32([1, 0, 0]))\n 0.0\n\n >>> angle_between_vectors(np.float32([1, 0, 0]), np.float32([-1, 0, 0]))\n 3.141592..."}
{"_id": "q_442", "text": "Compute the intersection point of two lines.\n\n Taken from https://stackoverflow.com/a/20679579 .\n\n Parameters\n ----------\n x1 : number\n x coordinate of the first point on line 1. (The lines extends beyond this point.)\n\n y1 : number\n y coordinate of the first point on line 1. (The lines extends beyond this point.)\n\n x2 : number\n x coordinate of the second point on line 1. (The lines extends beyond this point.)\n\n y2 : number\n y coordinate of the second point on line 1. (The lines extends beyond this point.)\n\n x3 : number\n x coordinate of the first point on line 2. (The lines extends beyond this point.)\n\n y3 : number\n y coordinate of the first point on line 2. (The lines extends beyond this point.)\n\n x4 : number\n x coordinate of the second point on line 2. (The lines extends beyond this point.)\n\n y4 : number\n y coordinate of the second point on line 2. (The lines extends beyond this point.)\n\n Returns\n -------\n tuple of number or bool\n The coordinate of the intersection point as a tuple ``(x, y)``.\n If the lines are parallel (no intersection point or an infinite number of them), the result is False."}
{"_id": "q_443", "text": "Resizes a single image.\n\n\n dtype support::\n\n See :func:`imgaug.imgaug.imresize_many_images`.\n\n Parameters\n ----------\n image : (H,W,C) ndarray or (H,W) ndarray\n Array of the image to resize.\n Usually recommended to be of dtype uint8.\n\n sizes : float or iterable of int or iterable of float\n See :func:`imgaug.imgaug.imresize_many_images`.\n\n interpolation : None or str or int, optional\n See :func:`imgaug.imgaug.imresize_many_images`.\n\n Returns\n -------\n out : (H',W',C) ndarray or (H',W') ndarray\n The resized image."}
{"_id": "q_444", "text": "Compute the amount of pixels by which an array has to be padded to fulfill an aspect ratio.\n\n The aspect ratio is given as width/height.\n Depending on which dimension is smaller (height or width), only the corresponding\n sides (left/right or top/bottom) will be padded. In each case, both of the sides will\n be padded equally.\n\n Parameters\n ----------\n arr : (H,W) ndarray or (H,W,C) ndarray\n Image-like array for which to compute pad amounts.\n\n aspect_ratio : float\n Target aspect ratio, given as width/height. E.g. 2.0 denotes the image having twice\n as much width as height.\n\n Returns\n -------\n result : tuple of int\n Required paddign amounts to reach the target aspect ratio, given as a tuple\n of the form ``(top, right, bottom, left)``."}
{"_id": "q_445", "text": "Resize an array by pooling values within blocks.\n\n dtype support::\n\n * ``uint8``: yes; fully tested\n * ``uint16``: yes; tested\n * ``uint32``: yes; tested (2)\n * ``uint64``: no (1)\n * ``int8``: yes; tested\n * ``int16``: yes; tested\n * ``int32``: yes; tested (2)\n * ``int64``: no (1)\n * ``float16``: yes; tested\n * ``float32``: yes; tested\n * ``float64``: yes; tested\n * ``float128``: yes; tested (2)\n * ``bool``: yes; tested\n\n - (1) results too inaccurate (at least when using np.average as func)\n - (2) Note that scikit-image documentation says that the wrapped pooling function converts\n inputs to float64. Actual tests showed no indication of that happening (at least when\n using preserve_dtype=True).\n\n Parameters\n ----------\n arr : (H,W) ndarray or (H,W,C) ndarray\n Image-like array to pool. Ideally of datatype ``numpy.float64``.\n\n block_size : int or tuple of int\n Spatial size of each group of values to pool, aka kernel size.\n If a single integer, then a symmetric block of that size along height and width will be used.\n If a tuple of two values, it is assumed to be the block size along height and width of the image-like,\n with pooling happening per channel.\n If a tuple of three values, it is assumed to be the block size along height, width and channels.\n\n func : callable\n Function to apply to a given block in order to convert it to a single number,\n e.g. :func:`numpy.average`, :func:`numpy.min`, :func:`numpy.max`.\n\n cval : number, optional\n Value to use in order to pad the array along its border if the array cannot be divided\n by `block_size` without remainder.\n\n preserve_dtype : bool, optional\n Whether to convert the array back to the input datatype if it is changed away from\n that in the pooling process.\n\n Returns\n -------\n arr_reduced : (H',W') ndarray or (H',W',C') ndarray\n Array after pooling."}
{"_id": "q_446", "text": "Resize an array using average pooling.\n\n dtype support::\n\n See :func:`imgaug.imgaug.pool`.\n\n Parameters\n ----------\n arr : (H,W) ndarray or (H,W,C) ndarray\n Image-like array to pool. See :func:`imgaug.pool` for details.\n\n block_size : int or tuple of int or tuple of int\n Size of each block of values to pool. See :func:`imgaug.pool` for details.\n\n cval : number, optional\n Padding value. See :func:`imgaug.pool` for details.\n\n preserve_dtype : bool, optional\n Whether to preserve the input array dtype. See :func:`imgaug.pool` for details.\n\n Returns\n -------\n arr_reduced : (H',W') ndarray or (H',W',C') ndarray\n Array after average pooling."}
{"_id": "q_447", "text": "Resize an array using max-pooling.\n\n dtype support::\n\n See :func:`imgaug.imgaug.pool`.\n\n Parameters\n ----------\n arr : (H,W) ndarray or (H,W,C) ndarray\n Image-like array to pool. See :func:`imgaug.pool` for details.\n\n block_size : int or tuple of int or tuple of int\n Size of each block of values to pool. See `imgaug.pool` for details.\n\n cval : number, optional\n Padding value. See :func:`imgaug.pool` for details.\n\n preserve_dtype : bool, optional\n Whether to preserve the input array dtype. See :func:`imgaug.pool` for details.\n\n Returns\n -------\n arr_reduced : (H',W') ndarray or (H',W',C') ndarray\n Array after max-pooling."}
{"_id": "q_448", "text": "Converts the input images to a grid image and shows it in a new window.\n\n dtype support::\n\n minimum of (\n :func:`imgaug.imgaug.draw_grid`,\n :func:`imgaug.imgaug.imshow`\n )\n\n Parameters\n ----------\n images : (N,H,W,3) ndarray or iterable of (H,W,3) array\n See :func:`imgaug.draw_grid`.\n\n rows : None or int, optional\n See :func:`imgaug.draw_grid`.\n\n cols : None or int, optional\n See :func:`imgaug.draw_grid`."}
{"_id": "q_449", "text": "Shows an image in a window.\n\n dtype support::\n\n * ``uint8``: yes; not tested\n * ``uint16``: ?\n * ``uint32``: ?\n * ``uint64``: ?\n * ``int8``: ?\n * ``int16``: ?\n * ``int32``: ?\n * ``int64``: ?\n * ``float16``: ?\n * ``float32``: ?\n * ``float64``: ?\n * ``float128``: ?\n * ``bool``: ?\n\n Parameters\n ----------\n image : (H,W,3) ndarray\n Image to show.\n\n backend : {'matplotlib', 'cv2'}, optional\n Library to use to show the image. May be either matplotlib or OpenCV ('cv2').\n OpenCV tends to be faster, but apparently causes more technical issues."}
{"_id": "q_450", "text": "Generate a non-silent deprecation warning with stacktrace.\n\n The used warning is ``imgaug.imgaug.DeprecationWarning``.\n\n Parameters\n ----------\n msg : str\n The message of the warning.\n\n stacklevel : int, optional\n How many steps above this function to \"jump\" in the stacktrace for\n the displayed file and line number of the error message.\n Usually 2."}
{"_id": "q_451", "text": "Returns whether an augmenter may be executed.\n\n Returns\n -------\n bool\n If True, the augmenter may be executed. If False, it may not be executed."}
{"_id": "q_452", "text": "A function to be called after the augmentation of images was\n performed.\n\n Returns\n -------\n (N,H,W,C) ndarray or (N,H,W) ndarray or list of (H,W,C) ndarray or list of (H,W) ndarray\n The input images, optionally modified."}
{"_id": "q_453", "text": "Augment batches asynchonously.\n\n Parameters\n ----------\n batches : list of imgaug.augmentables.batches.Batch\n The batches to augment.\n\n chunksize : None or int, optional\n Rough indicator of how many tasks should be sent to each worker. Increasing this number can improve\n performance.\n\n callback : None or callable, optional\n Function to call upon finish. See `multiprocessing.Pool`.\n\n error_callback : None or callable, optional\n Function to call upon errors. See `multiprocessing.Pool`.\n\n Returns\n -------\n multiprocessing.MapResult\n Asynchonous result. See `multiprocessing.Pool`."}
{"_id": "q_454", "text": "Augment batches from a generator.\n\n Parameters\n ----------\n batches : generator of imgaug.augmentables.batches.Batch\n The batches to augment, provided as a generator. Each call to the generator should yield exactly one\n batch.\n\n chunksize : None or int, optional\n Rough indicator of how many tasks should be sent to each worker. Increasing this number can improve\n performance.\n\n Yields\n ------\n imgaug.augmentables.batches.Batch\n Augmented batch."}
{"_id": "q_455", "text": "Terminate the pool immediately."}
{"_id": "q_456", "text": "Returns a batch from the queue of augmented batches.\n\n If workers are still running and there are no batches in the queue,\n it will automatically wait for the next batch.\n\n Returns\n -------\n out : None or imgaug.Batch\n One batch or None if all workers have finished."}
{"_id": "q_457", "text": "Converts another parameter's results to negative values.\n\n Parameters\n ----------\n other_param : imgaug.parameters.StochasticParameter\n Other parameter which's sampled values are to be\n modified.\n\n mode : {'invert', 'reroll'}, optional\n How to change the signs. Valid values are ``invert`` and ``reroll``.\n ``invert`` means that wrong signs are simply flipped.\n ``reroll`` means that all samples with wrong signs are sampled again,\n optionally many times, until they randomly end up having the correct\n sign.\n\n reroll_count_max : int, optional\n If `mode` is set to ``reroll``, this determines how often values may\n be rerolled before giving up and simply flipping the sign (as in\n ``mode=\"invert\"``). This shouldn't be set too high, as rerolling is\n expensive.\n\n Examples\n --------\n >>> param = Negative(Normal(0, 1), mode=\"reroll\")\n\n Generates a normal distribution that has only negative values."}
{"_id": "q_458", "text": "Estimate the area of the polygon.\n\n Returns\n -------\n number\n Area of the polygon."}
{"_id": "q_459", "text": "Project the polygon onto an image with different shape.\n\n The relative coordinates of all points remain the same.\n E.g. a point at (x=20, y=20) on an image (width=100, height=200) will be\n projected on a new image (width=200, height=100) to (x=40, y=10).\n\n This is intended for cases where the original image is resized.\n It cannot be used for more complex changes (e.g. padding, cropping).\n\n Parameters\n ----------\n from_shape : tuple of int\n Shape of the original image. (Before resize.)\n\n to_shape : tuple of int\n Shape of the new image. (After resize.)\n\n Returns\n -------\n imgaug.Polygon\n Polygon object with new coordinates."}
{"_id": "q_460", "text": "Find the index of the point within the exterior that is closest to the given coordinates.\n\n \"Closeness\" is here defined based on euclidean distance.\n This method will raise an AssertionError if the exterior contains no points.\n\n Parameters\n ----------\n x : number\n X-coordinate around which to search for close points.\n\n y : number\n Y-coordinate around which to search for close points.\n\n return_distance : bool, optional\n Whether to also return the distance of the closest point.\n\n Returns\n -------\n int\n Index of the closest point.\n\n number\n Euclidean distance to the closest point.\n This value is only returned if `return_distance` was set to True."}
{"_id": "q_461", "text": "Estimate whether the polygon is fully inside the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape and must contain at least two integers.\n\n Returns\n -------\n bool\n True if the polygon is fully inside the image area.\n False otherwise."}
{"_id": "q_462", "text": "Estimate whether the polygon is at least partially inside the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape and must contain at least two integers.\n\n Returns\n -------\n bool\n True if the polygon is at least partially inside the image area.\n False otherwise."}
{"_id": "q_463", "text": "Estimate whether the polygon is partially or fully outside of the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape and must contain at least two integers.\n\n fully : bool, optional\n Whether to return True if the polygon is fully outside of the image area.\n\n partly : bool, optional\n Whether to return True if the polygon is at least partially outside fo the image area.\n\n Returns\n -------\n bool\n True if the polygon is partially/fully outside of the image area, depending\n on defined parameters. False otherwise."}
{"_id": "q_464", "text": "Extract the image pixels within the polygon.\n\n This function will zero-pad the image if the polygon is partially/fully outside of\n the image.\n\n Parameters\n ----------\n image : (H,W) ndarray or (H,W,C) ndarray\n The image from which to extract the pixels within the polygon.\n\n Returns\n -------\n result : (H',W') ndarray or (H',W',C) ndarray\n Pixels within the polygon. Zero-padded if the polygon is partially/fully\n outside of the image."}
{"_id": "q_465", "text": "Set the first point of the exterior to the given point based on its index.\n\n Note: This method does *not* work in-place.\n\n Parameters\n ----------\n point_idx : int\n Index of the desired starting point.\n\n Returns\n -------\n imgaug.Polygon\n Copy of this polygon with the new point order."}
{"_id": "q_466", "text": "Convert this polygon to a Shapely polygon.\n\n Returns\n -------\n shapely.geometry.Polygon\n The Shapely polygon matching this polygon's exterior."}
{"_id": "q_467", "text": "Convert this polygon to a Shapely LineString object.\n\n Parameters\n ----------\n closed : bool, optional\n Whether to return the line string with the last point being identical to the first point.\n\n interpolate : int, optional\n Number of points to interpolate between any pair of two consecutive points. These points are added\n to the final line string.\n\n Returns\n -------\n shapely.geometry.LineString\n The Shapely LineString matching the polygon's exterior."}
{"_id": "q_468", "text": "Convert this polygon's `exterior` to a ``LineString`` instance.\n\n Parameters\n ----------\n closed : bool, optional\n Whether to close the line string, i.e. to add the first point of\n the `exterior` also as the last point at the end of the line string.\n This has no effect if the polygon has a single point or zero\n points.\n\n Returns\n -------\n imgaug.augmentables.lines.LineString\n Exterior of the polygon as a line string."}
{"_id": "q_469", "text": "Estimate if this and other polygon's exterior are almost identical.\n\n The two exteriors can have different numbers of points, but any point\n randomly sampled on the exterior of one polygon should be close to the\n closest point on the exterior of the other polygon.\n\n Note that this method works approximately. One can come up with\n polygons with fairly different shapes that will still be estimated as\n equal by this method. In practice however this should be unlikely to be\n the case. The probability for something like that goes down as the\n interpolation parameter is increased.\n\n Parameters\n ----------\n other : imgaug.Polygon or (N,2) ndarray or list of tuple\n The other polygon with which to compare the exterior.\n If this is an ndarray, it is assumed to represent an exterior.\n It must then have dtype ``float32`` and shape ``(N,2)`` with the\n second dimension denoting xy-coordinates.\n If this is a list of tuples, it is assumed to represent an exterior.\n Each tuple then must contain exactly two numbers, denoting\n xy-coordinates.\n\n max_distance : number, optional\n The maximum euclidean distance between a point on one polygon and\n the closest point on the other polygon. If the distance is exceeded\n for any such pair, the two exteriors are not viewed as equal. The\n points are other the points contained in the polygon's exterior\n ndarray or interpolated points between these.\n\n points_per_edge : int, optional\n How many points to interpolate on each edge.\n\n Returns\n -------\n bool\n Whether the two polygon's exteriors can be viewed as equal\n (approximate test)."}
{"_id": "q_470", "text": "Create a shallow copy of the Polygon object.\n\n Parameters\n ----------\n exterior : list of imgaug.Keypoint or list of tuple or (N,2) ndarray, optional\n List of points defining the polygon. See :func:`imgaug.Polygon.__init__` for details.\n\n label : None or str, optional\n If not None, then the label of the copied object will be set to this value.\n\n Returns\n -------\n imgaug.Polygon\n Shallow copy."}
{"_id": "q_471", "text": "Create a deep copy of the Polygon object.\n\n Parameters\n ----------\n exterior : list of Keypoint or list of tuple or (N,2) ndarray, optional\n List of points defining the polygon. See `imgaug.Polygon.__init__` for details.\n\n label : None or str\n If not None, then the label of the copied object will be set to this value.\n\n Returns\n -------\n imgaug.Polygon\n Deep copy."}
{"_id": "q_472", "text": "Remove all polygons that are fully or partially outside of the image.\n\n Parameters\n ----------\n fully : bool, optional\n Whether to remove polygons that are fully outside of the image.\n\n partly : bool, optional\n Whether to remove polygons that are partially outside of the image.\n\n Returns\n -------\n imgaug.PolygonsOnImage\n Reduced set of polygons, with those that were fully/partially\n outside of the image removed."}
{"_id": "q_473", "text": "Clip off all parts from all polygons that are outside of the image.\n\n NOTE: The result can contain less polygons than the input did. That\n happens when a polygon is fully outside of the image plane.\n\n NOTE: The result can also contain *more* polygons than the input\n did. That happens when distinct parts of a polygon are only\n connected by areas that are outside of the image plane and hence will\n be clipped off, resulting in two or more unconnected polygon parts that\n are left in the image plane.\n\n Returns\n -------\n imgaug.PolygonsOnImage\n Polygons, clipped to fall within the image dimensions. Count of\n output polygons may differ from the input count."}
{"_id": "q_474", "text": "Create a deep copy of the PolygonsOnImage object.\n\n Returns\n -------\n imgaug.PolygonsOnImage\n Deep copy."}
{"_id": "q_475", "text": "Create a MultiPolygon from a Shapely MultiPolygon, a Shapely Polygon or a Shapely GeometryCollection.\n\n This also creates all necessary Polygons contained by this MultiPolygon.\n\n Parameters\n ----------\n geometry : shapely.geometry.MultiPolygon or shapely.geometry.Polygon\\\n or shapely.geometry.collection.GeometryCollection\n The object to convert to a MultiPolygon.\n\n label : None or str, optional\n A label assigned to all Polygons within the MultiPolygon.\n\n Returns\n -------\n imgaug.MultiPolygon\n The derived MultiPolygon."}
{"_id": "q_476", "text": "Return a list of unordered intersection points."}
{"_id": "q_477", "text": "Get predecessor to key, raises KeyError if key is min key\n or key does not exist."}
{"_id": "q_478", "text": "Get successor to key, raises KeyError if key is max key\n or key does not exist."}
{"_id": "q_479", "text": "Generate 2D OpenSimplex noise from X,Y coordinates."}
{"_id": "q_480", "text": "Get the height of a bounding box encapsulating the line."}
{"_id": "q_481", "text": "Get the width of a bounding box encapsulating the line."}
{"_id": "q_482", "text": "Get for each point whether it is inside of the given image plane.\n\n Parameters\n ----------\n image : ndarray or tuple of int\n Either an image with shape ``(H,W,[C])`` or a tuple denoting\n such an image shape.\n\n Returns\n -------\n ndarray\n Boolean array with one value per point indicating whether it is\n inside of the provided image plane (``True``) or not (``False``)."}
{"_id": "q_483", "text": "Get the euclidean distance between each two consecutive points.\n\n Returns\n -------\n ndarray\n Euclidean distances between point pairs.\n Same order as in `coords`. For ``N`` points, ``N-1`` distances\n are returned."}
{"_id": "q_484", "text": "Compute the minimal distance between the line string and `other`.\n\n Parameters\n ----------\n other : tuple of number \\\n or imgaug.augmentables.kps.Keypoint \\\n or imgaug.augmentables.LineString\n Other object to which to compute the distance.\n\n default\n Value to return if this line string or `other` contain no points.\n\n Returns\n -------\n float\n Distance to `other` or `default` if not distance could be computed."}
{"_id": "q_485", "text": "Project the line string onto a differently shaped image.\n\n E.g. if a point of the line string is on its original image at\n ``x=(10 of 100 pixels)`` and ``y=(20 of 100 pixels)`` and is projected\n onto a new image with size ``(width=200, height=200)``, its new\n position will be ``(x=20, y=40)``.\n\n This is intended for cases where the original image is resized.\n It cannot be used for more complex changes (e.g. padding, cropping).\n\n Parameters\n ----------\n from_shape : tuple of int or ndarray\n Shape of the original image. (Before resize.)\n\n to_shape : tuple of int or ndarray\n Shape of the new image. (After resize.)\n\n Returns\n -------\n out : imgaug.augmentables.lines.LineString\n Line string with new coordinates."}
{"_id": "q_486", "text": "Estimate whether the line string is fully inside the image area.\n\n Parameters\n ----------\n image : ndarray or tuple of int\n Either an image with shape ``(H,W,[C])`` or a tuple denoting\n such an image shape.\n\n default\n Default value to return if the line string contains no points.\n\n Returns\n -------\n bool\n True if the line string is fully inside the image area.\n False otherwise."}
{"_id": "q_487", "text": "Draw the line segments of the line string as a heatmap array.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the line mask.\n\n alpha : float, optional\n Opacity of the line string. Higher values denote a more visible\n line string.\n\n size : int, optional\n Thickness of the line segments.\n\n antialiased : bool, optional\n Whether to draw the line with anti-aliasing activated.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n ndarray\n Float array of shape `image_shape` (no channel axis) with drawn\n line string. All values are in the interval ``[0.0, 1.0]``."}
{"_id": "q_488", "text": "Draw the points of the line string as a heatmap array.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the point mask.\n\n alpha : float, optional\n Opacity of the line string points. Higher values denote a more\n visible points.\n\n size : int, optional\n Size of the points in pixels.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n ndarray\n Float array of shape `image_shape` (no channel axis) with drawn\n line string points. All values are in the interval ``[0.0, 1.0]``."}
{"_id": "q_489", "text": "Draw the line segments and points of the line string as a heatmap array.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the line mask.\n\n alpha_lines : float, optional\n Opacity of the line string. Higher values denote a more visible\n line string.\n\n alpha_points : float, optional\n Opacity of the line string points. Higher values denote a more\n visible points.\n\n size_lines : int, optional\n Thickness of the line segments.\n\n size_points : int, optional\n Size of the points in pixels.\n\n antialiased : bool, optional\n Whether to draw the line with anti-aliasing activated.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n ndarray\n Float array of shape `image_shape` (no channel axis) with drawn\n line segments and points. All values are in the\n interval ``[0.0, 1.0]``."}
{"_id": "q_490", "text": "Draw the line string on an image.\n\n Parameters\n ----------\n image : ndarray\n The `(H,W,C)` `uint8` image onto which to draw the line string.\n\n color : iterable of int, optional\n Color to use as RGB, i.e. three values.\n The color of the line and points are derived from this value,\n unless they are set.\n\n color_lines : None or iterable of int\n Color to use for the line segments as RGB, i.e. three values.\n If ``None``, this value is derived from `color`.\n\n color_points : None or iterable of int\n Color to use for the points as RGB, i.e. three values.\n If ``None``, this value is derived from ``0.5 * color``.\n\n alpha : float, optional\n Opacity of the line string. Higher values denote more visible\n points.\n The alphas of the line and points are derived from this value,\n unless they are set.\n\n alpha_lines : None or float, optional\n Opacity of the line string. Higher values denote more visible\n line string.\n If ``None``, this value is derived from `alpha`.\n\n alpha_points : None or float, optional\n Opacity of the line string points. Higher values denote more\n visible points.\n If ``None``, this value is derived from `alpha`.\n\n size : int, optional\n Size of the line string.\n The sizes of the line and points are derived from this value,\n unless they are set.\n\n size_lines : None or int, optional\n Thickness of the line segments.\n If ``None``, this value is derived from `size`.\n\n size_points : None or int, optional\n Size of the points in pixels.\n If ``None``, this value is derived from ``3 * size``.\n\n antialiased : bool, optional\n Whether to draw the line with anti-aliasing activated.\n This does currently not affect the point drawing.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. 
If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n ndarray\n Image with line string drawn on it."}
{"_id": "q_491", "text": "Extract the image pixels covered by the line string.\n\n It will only extract pixels overlapped by the line string.\n\n This function will by default zero-pad the image if the line string is\n partially/fully outside of the image. This is for consistency with\n the same implementations for bounding boxes and polygons.\n\n Parameters\n ----------\n image : ndarray\n The image of shape `(H,W,[C])` from which to extract the pixels\n within the line string.\n\n size : int, optional\n Thickness of the line.\n\n pad : bool, optional\n Whether to zero-pad the image if the object is partially/fully\n outside of it.\n\n pad_max : None or int, optional\n The maximum number of pixels that may be zero-paded on any side,\n i.e. if this has value ``N`` the total maximum of added pixels\n is ``4*N``.\n This option exists to prevent extremely large images as a result of\n single points being moved very far away during augmentation.\n\n antialiased : bool, optional\n Whether to apply anti-aliasing to the line string.\n\n prevent_zero_size : bool, optional\n Whether to prevent height or width of the extracted image from\n becoming zero. If this is set to True and height or width of the\n line string is below 1, the height/width will be increased to 1.\n This can be useful to prevent problems, e.g. with image saving or\n plotting. If it is set to False, images will be returned as\n ``(H', W')`` or ``(H', W', 3)`` with ``H`` or ``W`` potentially\n being 0.\n\n Returns\n -------\n image : (H',W') ndarray or (H',W',C) ndarray\n Pixels overlapping with the line string. Zero-padded if the\n line string is partially/fully outside of the image and\n ``pad=True``. If `prevent_zero_size` is activated, it is\n guarantueed that ``H'>0`` and ``W'>0``, otherwise only\n ``H'>=0`` and ``W'>=0``."}
{"_id": "q_492", "text": "Concatenate this line string with another one.\n\n This will add a line segment between the end point of this line string\n and the start point of `other`.\n\n Parameters\n ----------\n other : imgaug.augmentables.lines.LineString or ndarray \\\n or iterable of tuple of number\n The points to add to this line string.\n\n Returns\n -------\n imgaug.augmentables.lines.LineString\n New line string with concatenated points.\n The `label` of this line string will be kept."}
{"_id": "q_493", "text": "Generate a bounding box encapsulating the line string.\n\n Returns\n -------\n None or imgaug.augmentables.bbs.BoundingBox\n Bounding box encapsulating the line string.\n ``None`` if the line string contained no points."}
{"_id": "q_494", "text": "Generate a heatmap object from the line string.\n\n This is similar to\n :func:`imgaug.augmentables.lines.LineString.draw_lines_heatmap_array`\n executed with ``alpha=1.0``. The result is wrapped in a\n ``HeatmapsOnImage`` object instead of just an array.\n No points are drawn.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the line mask.\n\n size_lines : int, optional\n Thickness of the line.\n\n size_points : int, optional\n Size of the points in pixels.\n\n antialiased : bool, optional\n Whether to draw the line with anti-aliasing activated.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n imgaug.augmentables.heatmaps.HeatmapsOnImage\n Heatmap object containing drawn line string."}
{"_id": "q_495", "text": "Generate a segmentation map object from the line string.\n\n This is similar to\n :func:`imgaug.augmentables.lines.LineString.draw_mask`.\n The result is wrapped in a ``SegmentationMapOnImage`` object\n instead of just an array.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the line mask.\n\n size_lines : int, optional\n Thickness of the line.\n\n size_points : int, optional\n Size of the points in pixels.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n imgaug.augmentables.segmaps.SegmentationMapOnImage\n Segmentation map object containing drawn line string."}
{"_id": "q_496", "text": "Compare this and another LineString's coordinates.\n\n This is an approximate method based on pointwise distances and can\n in rare corner cases produce wrong outputs.\n\n Parameters\n ----------\n other : imgaug.augmentables.lines.LineString \\\n or tuple of number \\\n or ndarray \\\n or list of ndarray \\\n or list of tuple of number\n The other line string or its coordinates.\n\n max_distance : float\n Max distance of any point from the other line string before\n the two line strings are evaluated to be unequal.\n\n points_per_edge : int, optional\n How many points to interpolate on each edge.\n\n Returns\n -------\n bool\n Whether the two LineString's coordinates are almost identical,\n i.e. the max distance is below the threshold.\n If both have no coordinates, ``True`` is returned.\n If only one has no coordinates, ``False`` is returned.\n Beyond that, the number of points is not evaluated."}
{"_id": "q_497", "text": "Compare this and another LineString.\n\n Parameters\n ----------\n other : imgaug.augmentables.lines.LineString\n The other line string. Must be a LineString instance, not just\n its coordinates.\n\n max_distance : float, optional\n See :func:`imgaug.augmentables.lines.LineString.coords_almost_equals`.\n\n points_per_edge : int, optional\n See :func:`imgaug.augmentables.lines.LineString.coords_almost_equals`.\n\n Returns\n -------\n bool\n ``True`` if the coordinates are almost equal according to\n :func:`imgaug.augmentables.lines.LineString.coords_almost_equals`\n and additionally the labels are identical. Otherwise ``False``."}
{"_id": "q_498", "text": "Create a shallow copy of the LineString object.\n\n Parameters\n ----------\n coords : None or iterable of tuple of number or ndarray\n If not ``None``, then the coords of the copied object will be set\n to this value.\n\n label : None or str\n If not ``None``, then the label of the copied object will be set to\n this value.\n\n Returns\n -------\n imgaug.augmentables.lines.LineString\n Shallow copy."}
{"_id": "q_499", "text": "Clip off all parts of the line strings that are outside of the image.\n\n Returns\n -------\n imgaug.augmentables.lines.LineStringsOnImage\n Line strings, clipped to fall within the image dimensions."}
{"_id": "q_500", "text": "Create a shallow copy of the LineStringsOnImage object.\n\n Parameters\n ----------\n line_strings : None \\\n or list of imgaug.augmentables.lines.LineString, optional\n List of line strings on the image.\n If not ``None``, then the ``line_strings`` attribute of the copied\n object will be set to this value.\n\n shape : None or tuple of int or ndarray, optional\n The shape of the image on which the objects are placed.\n Either an image with shape ``(H,W,[C])`` or a tuple denoting\n such an image shape.\n If not ``None``, then the ``shape`` attribute of the copied object\n will be set to this value.\n\n Returns\n -------\n imgaug.augmentables.lines.LineStringsOnImage\n Shallow copy."}
{"_id": "q_501", "text": "Create a deep copy of the LineStringsOnImage object.\n\n Parameters\n ----------\n line_strings : None \\\n or list of imgaug.augmentables.lines.LineString, optional\n List of line strings on the image.\n If not ``None``, then the ``line_strings`` attribute of the copied\n object will be set to this value.\n\n shape : None or tuple of int or ndarray, optional\n The shape of the image on which the objects are placed.\n Either an image with shape ``(H,W,[C])`` or a tuple denoting\n such an image shape.\n If not ``None``, then the ``shape`` attribute of the copied object\n will be set to this value.\n\n Returns\n -------\n imgaug.augmentables.lines.LineStringsOnImage\n Deep copy."}
{"_id": "q_502", "text": "Blend two images using an alpha blending.\n\n In an alpha blending, the two images are naively mixed. Let ``A`` be the foreground image,\n ``B`` the background image and ``a`` the alpha value. Each pixel intensity is then\n computed as ``a * A_ij + (1-a) * B_ij``.\n\n dtype support::\n\n * ``uint8``: yes; fully tested\n * ``uint16``: yes; fully tested\n * ``uint32``: yes; fully tested\n * ``uint64``: yes; fully tested (1)\n * ``int8``: yes; fully tested\n * ``int16``: yes; fully tested\n * ``int32``: yes; fully tested\n * ``int64``: yes; fully tested (1)\n * ``float16``: yes; fully tested\n * ``float32``: yes; fully tested\n * ``float64``: yes; fully tested (1)\n * ``float128``: no (2)\n * ``bool``: yes; fully tested (3)\n\n - (1) Tests show that these dtypes work, but a conversion to float128 happens, which only\n has 96 bits of size instead of true 128 bits and hence not twice as much resolution.\n It is possible that these dtypes result in inaccuracies, though the tests did not\n indicate that.\n - (2) Not available due to the input dtype having to be increased to an equivalent float\n dtype with two times the input resolution.\n - (3) Mapped internally to ``float16``.\n\n Parameters\n ----------\n image_fg : (H,W,[C]) ndarray\n Foreground image. Shape and dtype kind must match the one of the\n background image.\n\n image_bg : (H,W,[C]) ndarray\n Background image. Shape and dtype kind must match the one of the\n foreground image.\n\n alpha : number or iterable of number or ndarray\n The blending factor, between 0.0 and 1.0. Can be interpreted as the opacity of the\n foreground image. Values around 1.0 result in only the foreground image being visible.\n Values around 0.0 result in only the background image being visible.\n Multiple alphas may be provided. In these cases, there must be exactly one alpha per\n channel in the foreground/background image. Alternatively, for ``(H,W,C)`` images,\n either one ``(H,W)`` array or an ``(H,W,C)`` array of alphas may be provided,\n denoting the elementwise alpha value.\n\n eps : number, optional\n Controls when an alpha is to be interpreted as exactly 1.0 or exactly 0.0, resulting\n in only the foreground/background being visible and skipping the actual computation.\n\n Returns\n -------\n image_blend : (H,W,C) ndarray\n Blend of foreground and background image."}
{"_id": "q_503", "text": "Augmenter that blurs images in a way that fakes camera or object movements.\n\n dtype support::\n\n See ``imgaug.augmenters.convolutional.Convolve``.\n\n Parameters\n ----------\n k : int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional\n Kernel size to use.\n\n * If a single int, then that value will be used for the height\n and width of the kernel.\n * If a tuple of two ints ``(a, b)``, then the kernel size will be\n sampled from the interval ``[a..b]``.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then ``N`` samples will be drawn from\n that parameter per ``N`` input images, each representing the kernel\n size for the nth image.\n\n angle : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Angle of the motion blur in degrees (clockwise, relative to top center direction).\n\n * If a number, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n direction : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Forward/backward direction of the motion blur. Lower values towards -1.0 will point the motion blur towards\n the back (with angle provided via `angle`). Higher values towards 1.0 will point the motion blur forward.\n A value of 0.0 leads to a uniformly (but still angled) motion blur.\n\n * If a number, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n order : int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional\n Interpolation order to use when rotating the kernel according to `angle`.\n See :func:`imgaug.augmenters.geometric.Affine.__init__`.\n Recommended to be ``0`` or ``1``, with ``0`` being faster, but less continuous/smooth as `angle` is changed,\n particularly around multiples of 45 degrees.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.MotionBlur(k=15)\n\n Create a motion blur augmenter with kernel size of 15x15.\n\n >>> aug = iaa.MotionBlur(k=15, angle=[-45, 45])\n\n Create a motion blur augmenter with kernel size of 15x15 and a blur angle of either -45 or 45 degrees (randomly\n picked per image)."}
{"_id": "q_504", "text": "Augmenter to draw clouds in images.\n\n This is a wrapper around ``CloudLayer``. It executes 1 to 2 layers per image, leading to varying densities\n and frequency patterns of clouds.\n\n This augmenter seems to be fairly robust w.r.t. the image size. Tested with ``96x128``, ``192x256``\n and ``960x1280``.\n\n dtype support::\n\n * ``uint8``: yes; tested\n * ``uint16``: no (1)\n * ``uint32``: no (1)\n * ``uint64``: no (1)\n * ``int8``: no (1)\n * ``int16``: no (1)\n * ``int32``: no (1)\n * ``int64``: no (1)\n * ``float16``: no (1)\n * ``float32``: no (1)\n * ``float64``: no (1)\n * ``float128``: no (1)\n * ``bool``: no (1)\n\n - (1) Parameters of this augmenter are optimized for the value range of uint8.\n While other dtypes may be accepted, they will lead to images augmented in\n ways inappropriate for the respective dtype.\n\n Parameters\n ----------\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Clouds()\n\n Creates an augmenter that adds clouds to images."}
{"_id": "q_505", "text": "Augmenter to draw fog in images.\n\n This is a wrapper around ``CloudLayer``. It executes a single layer per image with a configuration leading\n to fairly dense clouds with low-frequency patterns.\n\n This augmenter seems to be fairly robust w.r.t. the image size. Tested with ``96x128``, ``192x256``\n and ``960x1280``.\n\n dtype support::\n\n * ``uint8``: yes; tested\n * ``uint16``: no (1)\n * ``uint32``: no (1)\n * ``uint64``: no (1)\n * ``int8``: no (1)\n * ``int16``: no (1)\n * ``int32``: no (1)\n * ``int64``: no (1)\n * ``float16``: no (1)\n * ``float32``: no (1)\n * ``float64``: no (1)\n * ``float128``: no (1)\n * ``bool``: no (1)\n\n - (1) Parameters of this augmenter are optimized for the value range of uint8.\n While other dtypes may be accepted, they will lead to images augmented in\n ways inappropriate for the respective dtype.\n\n Parameters\n ----------\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Fog()\n\n Creates an augmenter that adds fog to images."}
{"_id": "q_506", "text": "Augmenter to add falling snowflakes to images.\n\n This is a wrapper around ``SnowflakesLayer``. It executes 1 to 3 layers per image.\n\n dtype support::\n\n * ``uint8``: yes; tested\n * ``uint16``: no (1)\n * ``uint32``: no (1)\n * ``uint64``: no (1)\n * ``int8``: no (1)\n * ``int16``: no (1)\n * ``int32``: no (1)\n * ``int64``: no (1)\n * ``float16``: no (1)\n * ``float32``: no (1)\n * ``float64``: no (1)\n * ``float128``: no (1)\n * ``bool``: no (1)\n\n - (1) Parameters of this augmenter are optimized for the value range of uint8.\n While other dtypes may be accepted, they will lead to images augmented in\n ways inappropriate for the respective dtype.\n\n Parameters\n ----------\n density : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Density of the snowflake layer, as a probability of each pixel in low resolution space to be a snowflake.\n Valid value range is ``(0.0, 1.0)``. Recommended to be around ``(0.01, 0.075)``.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n density_uniformity : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Size uniformity of the snowflakes. Higher values denote more similarly sized snowflakes.\n Valid value range is ``(0.0, 1.0)``. Recommended to be around ``0.5``.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n flake_size : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Size of the snowflakes. This parameter controls the resolution at which snowflakes are sampled.\n Higher values mean that the resolution is closer to the input image's resolution and hence each sampled\n snowflake will be smaller (because of the smaller pixel size).\n\n Valid value range is ``[0.0, 1.0)``. Recommended values:\n\n * On ``96x128`` a value of ``(0.1, 0.4)`` worked well.\n * On ``192x256`` a value of ``(0.2, 0.7)`` worked well.\n * On ``960x1280`` a value of ``(0.7, 0.95)`` worked well.\n\n Allowed datatypes:\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n flake_size_uniformity : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Controls the size uniformity of the snowflakes. Higher values mean that the snowflakes are more similarly\n sized. Valid value range is ``(0.0, 1.0)``. Recommended to be around ``0.5``.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n angle : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Angle in degrees of motion blur applied to the snowflakes, where ``0.0`` is motion blur that points straight\n upwards. Recommended to be around ``(-30, 30)``.\n See also :func:`imgaug.augmenters.blur.MotionBlur.__init__`.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n speed : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Perceived falling speed of the snowflakes. This parameter controls the motion blur's kernel size.\n It follows roughly the form ``kernel_size = image_size * speed``. Hence,\n values around ``1.0`` denote that the motion blur should \"stretch\" each snowflake over the whole image.\n\n Valid value range is ``(0.0, 1.0)``. Recommended values:\n\n * On ``96x128`` a value of ``(0.01, 0.05)`` worked well.\n * On ``192x256`` a value of ``(0.007, 0.03)`` worked well.\n * On ``960x1280`` a value of ``(0.001, 0.03)`` worked well.\n\n Allowed datatypes:\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Snowflakes(flake_size=(0.1, 0.4), speed=(0.01, 0.05))\n\n Adds snowflakes to small images (around ``96x128``).\n\n >>> aug = iaa.Snowflakes(flake_size=(0.2, 0.7), speed=(0.007, 0.03))\n\n Adds snowflakes to medium-sized images (around ``192x256``).\n\n >>> aug = iaa.Snowflakes(flake_size=(0.7, 0.95), speed=(0.001, 0.03))\n\n Adds snowflakes to large images (around ``960x1280``)."}
{"_id": "q_507", "text": "Draw the segmentation map as an overlay over an image.\n\n Parameters\n ----------\n image : (H,W,3) ndarray\n Image onto which to draw the segmentation map. Dtype is expected to be uint8.\n\n alpha : float, optional\n Alpha/opacity value to use for the mixing of image and segmentation map.\n Higher values mean that the segmentation map will be more visible and the image less visible.\n\n resize : {'segmentation_map', 'image'}, optional\n In case of size differences between the image and segmentation map, either the image or\n the segmentation map can be resized. This parameter controls which of the two will be\n resized to the other's size.\n\n background_threshold : float, optional\n See :func:`imgaug.SegmentationMapOnImage.get_arr_int`.\n\n background_class_id : None or int, optional\n See :func:`imgaug.SegmentationMapOnImage.get_arr_int`.\n\n colors : None or list of tuple of int, optional\n Colors to use. One for each class to draw. If None, then default colors will be used.\n\n draw_background : bool, optional\n If True, the background will be drawn like any other class.\n If False, the background will not be drawn, i.e. the respective background pixels\n will be identical with the image's RGB color at the corresponding spatial location\n and no color overlay will be applied.\n\n Returns\n -------\n mix : (H,W,3) ndarray\n Rendered overlays (dtype is uint8)."}
{"_id": "q_508", "text": "Pad the segmentation map on its sides so that it matches a target aspect ratio.\n\n Depending on which dimension is smaller (height or width), only the corresponding\n sides (left/right or top/bottom) will be padded. In each case, both of the sides will\n be padded equally.\n\n Parameters\n ----------\n aspect_ratio : float\n Target aspect ratio, given as width/height. E.g. 2.0 denotes the image having twice\n as much width as height.\n\n mode : str, optional\n Padding mode to use. See :func:`numpy.pad` for details.\n\n cval : number, optional\n Value to use for padding if `mode` is ``constant``. See :func:`numpy.pad` for details.\n\n return_pad_amounts : bool, optional\n If False, then only the padded image will be returned. If True, a tuple with two\n entries will be returned, where the first entry is the padded image and the second\n entry are the amounts by which each image side was padded. These amounts are again a\n tuple of the form (top, right, bottom, left), with each value being an integer.\n\n Returns\n -------\n segmap : imgaug.SegmentationMapOnImage\n Padded segmentation map as SegmentationMapOnImage object.\n\n pad_amounts : tuple of int\n Amounts by which the segmentation map was padded on each side, given as a\n tuple ``(top, right, bottom, left)``.\n This tuple is only returned if `return_pad_amounts` was set to True."}
{"_id": "q_509", "text": "Resize the segmentation map array to the provided size using the provided interpolation.\n\n Parameters\n ----------\n sizes : float or iterable of int or iterable of float\n New size of the array in ``(height, width)``.\n See :func:`imgaug.imgaug.imresize_single_image` for details.\n\n interpolation : None or str or int, optional\n The interpolation to use during resize.\n See :func:`imgaug.imgaug.imresize_single_image` for details.\n Note: The segmentation map is internally stored as multiple float-based heatmaps,\n making smooth interpolations potentially more reasonable than nearest neighbour\n interpolation.\n\n Returns\n -------\n segmap : imgaug.SegmentationMapOnImage\n Resized segmentation map object."}
{"_id": "q_510", "text": "Offer a new event ``s`` at point ``p`` in this queue."}
{"_id": "q_511", "text": "Render the heatmaps as RGB images.\n\n Parameters\n ----------\n size : None or float or iterable of int or iterable of float, optional\n Size of the rendered RGB image as ``(height, width)``.\n See :func:`imgaug.imgaug.imresize_single_image` for details.\n If set to None, no resizing is performed and the size of the heatmaps array is used.\n\n cmap : str or None, optional\n Color map of ``matplotlib`` to use in order to convert the heatmaps to RGB images.\n If set to None, no color map will be used and the heatmaps will be converted\n to simple intensity maps.\n\n Returns\n -------\n heatmaps_drawn : list of (H,W,3) ndarray\n Rendered heatmaps. One per heatmap array channel. Dtype is uint8."}
{"_id": "q_512", "text": "Draw the heatmaps as overlays over an image.\n\n Parameters\n ----------\n image : (H,W,3) ndarray\n Image onto which to draw the heatmaps. Expected to be of dtype uint8.\n\n alpha : float, optional\n Alpha/opacity value to use for the mixing of image and heatmaps.\n Higher values mean that the heatmaps will be more visible and the image less visible.\n\n cmap : str or None, optional\n Color map to use. See :func:`imgaug.HeatmapsOnImage.draw` for details.\n\n resize : {'heatmaps', 'image'}, optional\n In case of size differences between the image and heatmaps, either the image or\n the heatmaps can be resized. This parameter controls which of the two will be resized\n to the other's size.\n\n Returns\n -------\n mix : list of (H,W,3) ndarray\n Rendered overlays. One per heatmap array channel. Dtype is uint8."}
{"_id": "q_513", "text": "Pad the heatmaps on their sides so that they match a target aspect ratio.\n\n Depending on which dimension is smaller (height or width), only the corresponding\n sides (left/right or top/bottom) will be padded. In each case, both of the sides will\n be padded equally.\n\n Parameters\n ----------\n aspect_ratio : float\n Target aspect ratio, given as width/height. E.g. 2.0 denotes the image having twice\n as much width as height.\n\n mode : str, optional\n Padding mode to use. See :func:`numpy.pad` for details.\n\n cval : number, optional\n Value to use for padding if `mode` is ``constant``. See :func:`numpy.pad` for details.\n\n return_pad_amounts : bool, optional\n If False, then only the padded image will be returned. If True, a tuple with two\n entries will be returned, where the first entry is the padded image and the second\n entry are the amounts by which each image side was padded. These amounts are again a\n tuple of the form (top, right, bottom, left), with each value being an integer.\n\n Returns\n -------\n heatmaps : imgaug.HeatmapsOnImage\n Padded heatmaps as HeatmapsOnImage object.\n\n pad_amounts : tuple of int\n Amounts by which the heatmaps were padded on each side, given as a tuple ``(top, right, bottom, left)``.\n This tuple is only returned if `return_pad_amounts` was set to True."}
{"_id": "q_514", "text": "Create a heatmaps object from a heatmap array containing values ranging from 0.0 to 1.0.\n\n Parameters\n ----------\n arr_0to1 : (H,W) or (H,W,C) ndarray\n Heatmap(s) array, where ``H`` is height, ``W`` is width and ``C`` is the number of heatmap channels.\n Expected dtype is float32.\n\n shape : tuple of ints\n Shape of the image on which the heatmap(s) is/are placed. NOT the shape of the\n heatmap(s) array, unless it is identical to the image shape (note the likely\n difference between the arrays in the number of channels).\n If there is not a corresponding image, use the shape of the heatmaps array.\n\n min_value : float, optional\n Minimum value for the heatmaps that the 0-to-1 array represents. This will usually\n be 0.0. It is used when calling :func:`imgaug.HeatmapsOnImage.get_arr`, which converts the\n underlying ``(0.0, 1.0)`` array to value range ``(min_value, max_value)``.\n E.g. if you started with heatmaps in the range ``(-1.0, 1.0)`` and projected these\n to (0.0, 1.0), you should call this function with ``min_value=-1.0``, ``max_value=1.0``\n so that :func:`imgaug.HeatmapsOnImage.get_arr` returns heatmap arrays having value\n range (-1.0, 1.0).\n\n max_value : float, optional\n Maximum value for the heatmaps that the 0-to-1 array represents.\n See parameter min_value for details.\n\n Returns\n -------\n heatmaps : imgaug.HeatmapsOnImage\n Heatmaps object."}
{"_id": "q_515", "text": "Create a deep copy of the Heatmaps object.\n\n Returns\n -------\n imgaug.HeatmapsOnImage\n Deep copy."}
{"_id": "q_516", "text": "Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor.\n\n See ``ToTensor`` for more details.\n\n Args:\n pic (PIL Image or numpy.ndarray): Image to be converted to tensor.\n\n Returns:\n Tensor: Converted image."}
{"_id": "q_517", "text": "Normalize a tensor image with mean and standard deviation.\n\n .. note::\n This transform acts out of place by default, i.e., it does not mutate the input tensor.\n\n See :class:`~torchvision.transforms.Normalize` for more details.\n\n Args:\n tensor (Tensor): Tensor image of size (C, H, W) to be normalized.\n mean (sequence): Sequence of means for each channel.\n std (sequence): Sequence of standard deviations for each channel.\n\n Returns:\n Tensor: Normalized Tensor image."}
{"_id": "q_518", "text": "Resize the input PIL Image to the given size.\n\n Args:\n img (PIL Image): Image to be resized.\n size (sequence or int): Desired output size. If size is a sequence like\n (h, w), the output size will be matched to this. If size is an int,\n the smaller edge of the image will be matched to this number maintaining\n the aspect ratio. i.e., if height > width, then image will be rescaled to\n :math:`\\left(\\text{size} \\times \\frac{\\text{height}}{\\text{width}}, \\text{size}\\right)`\n interpolation (int, optional): Desired interpolation. Default is\n ``PIL.Image.BILINEAR``\n\n Returns:\n PIL Image: Resized image."}
{"_id": "q_519", "text": "Pad the given PIL Image on all sides with specified padding mode and fill value.\n\n Args:\n img (PIL Image): Image to be padded.\n padding (int or tuple): Padding on each border. If a single int is provided this\n is used to pad all borders. If tuple of length 2 is provided this is the padding\n on left/right and top/bottom respectively. If a tuple of length 4 is provided\n this is the padding for the left, top, right and bottom borders\n respectively.\n fill: Pixel fill value for constant fill. Default is 0. If a tuple of\n length 3, it is used to fill R, G, B channels respectively.\n This value is only used when the padding_mode is constant\n padding_mode: Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant.\n\n - constant: pads with a constant value, this value is specified with fill\n\n - edge: pads with the last value on the edge of the image\n\n - reflect: pads with reflection of image (without repeating the last value on the edge)\n\n padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode\n will result in [3, 2, 1, 2, 3, 4, 3, 2]\n\n - symmetric: pads with reflection of image (repeating the last value on the edge)\n\n padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode\n will result in [2, 1, 1, 2, 3, 4, 4, 3]\n\n Returns:\n PIL Image: Padded image."}
{"_id": "q_520", "text": "Crop the given PIL Image.\n\n Args:\n img (PIL Image): Image to be cropped.\n i (int): i in (i,j) i.e. coordinates of the upper left corner.\n j (int): j in (i,j) i.e. coordinates of the upper left corner.\n h (int): Height of the cropped image.\n w (int): Width of the cropped image.\n\n Returns:\n PIL Image: Cropped image."}
{"_id": "q_521", "text": "Crop the given PIL Image and resize it to the desired size.\n\n Notably used in :class:`~torchvision.transforms.RandomResizedCrop`.\n\n Args:\n img (PIL Image): Image to be cropped.\n i (int): i in (i,j) i.e. coordinates of the upper left corner\n j (int): j in (i,j) i.e. coordinates of the upper left corner\n h (int): Height of the cropped image.\n w (int): Width of the cropped image.\n size (sequence or int): Desired output size. Same semantics as ``resize``.\n interpolation (int, optional): Desired interpolation. Default is\n ``PIL.Image.BILINEAR``.\n Returns:\n PIL Image: Cropped image."}
{"_id": "q_522", "text": "Horizontally flip the given PIL Image.\n\n Args:\n img (PIL Image): Image to be flipped.\n\n Returns:\n PIL Image: Horizontally flipped image."}
{"_id": "q_523", "text": "Perform perspective transform of the given PIL Image.\n\n Args:\n img (PIL Image): Image to be transformed.\n coeffs (tuple) : 8-tuple (a, b, c, d, e, f, g, h) which contains the coefficients.\n for a perspective transform.\n interpolation: Default- Image.BICUBIC\n Returns:\n PIL Image: Perspectively transformed Image."}
{"_id": "q_524", "text": "Vertically flip the given PIL Image.\n\n Args:\n img (PIL Image): Image to be flipped.\n\n Returns:\n PIL Image: Vertically flipped image."}
{"_id": "q_525", "text": "Crop the given PIL Image into four corners and the central crop.\n\n .. Note::\n This transform returns a tuple of images and there may be a\n mismatch in the number of inputs and targets your ``Dataset`` returns.\n\n Args:\n size (sequence or int): Desired output size of the crop. If size is an\n int instead of sequence like (h, w), a square crop (size, size) is\n made.\n\n Returns:\n tuple: tuple (tl, tr, bl, br, center)\n Corresponding top left, top right, bottom left, bottom right and center crop."}
{"_id": "q_526", "text": "Adjust brightness of an Image.\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n brightness_factor (float): How much to adjust the brightness. Can be\n any non negative number. 0 gives a black image, 1 gives the\n original image while 2 increases the brightness by a factor of 2.\n\n Returns:\n PIL Image: Brightness adjusted image."}
{"_id": "q_527", "text": "Rotate the image by angle.\n\n\n Args:\n img (PIL Image): PIL Image to be rotated.\n angle (float or int): In degrees counter clockwise order.\n resample (``PIL.Image.NEAREST`` or ``PIL.Image.BILINEAR`` or ``PIL.Image.BICUBIC``, optional):\n An optional resampling filter. See `filters`_ for more information.\n If omitted, or if the image has mode \"1\" or \"P\", it is set to ``PIL.Image.NEAREST``.\n expand (bool, optional): Optional expansion flag.\n If true, expands the output image to make it large enough to hold the entire rotated image.\n If false or omitted, make the output image the same size as the input image.\n Note that the expand flag assumes rotation around the center and no translation.\n center (2-tuple, optional): Optional center of rotation.\n Origin is the upper left corner.\n Default is the center of the image.\n\n .. _filters: https://pillow.readthedocs.io/en/latest/handbook/concepts.html#filters"}
{"_id": "q_528", "text": "Apply affine transformation on the image keeping image center invariant\n\n Args:\n img (PIL Image): PIL Image to be rotated.\n angle (float or int): rotation angle in degrees between -180 and 180, clockwise direction.\n translate (list or tuple of integers): horizontal and vertical translations (post-rotation translation)\n scale (float): overall scale\n shear (float): shear angle value in degrees between -180 to 180, clockwise direction.\n resample (``PIL.Image.NEAREST`` or ``PIL.Image.BILINEAR`` or ``PIL.Image.BICUBIC``, optional):\n An optional resampling filter.\n See `filters`_ for more information.\n If omitted, or if the image has mode \"1\" or \"P\", it is set to ``PIL.Image.NEAREST``.\n fillcolor (int): Optional fill color for the area outside the transform in the output image. (Pillow>=5.0.0)"}
{"_id": "q_529", "text": "Finds the class folders in a dataset.\n\n Args:\n dir (string): Root directory path.\n\n Returns:\n tuple: (classes, class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.\n\n Ensures:\n No class is a subdirectory of another."}
{"_id": "q_530", "text": "Return a Tensor containing the list of labels\n Read the file and keep only the ID of the 3D point."}
{"_id": "q_531", "text": "Computes the accuracy over the k top predictions for the specified values of k"}
{"_id": "q_532", "text": "This function disables printing when not in master process"}
{"_id": "q_533", "text": "Download a file from a url and place it in root.\n\n Args:\n url (str): URL to download file from\n root (str): Directory to place downloaded file in\n filename (str, optional): Name to save the file under. If None, use the basename of the URL\n md5 (str, optional): MD5 checksum of the download. If None, do not check"}
{"_id": "q_534", "text": "List all directories at a given root\n\n Args:\n root (str): Path to directory whose folders need to be listed\n prefix (bool, optional): If true, prepends the path to each result, otherwise\n only returns the name of the directories found"}
{"_id": "q_535", "text": "List all files ending with a suffix at a given root\n\n Args:\n root (str): Path to directory whose folders need to be listed\n suffix (str or tuple): Suffix of the files to match, e.g. '.png' or ('.jpg', '.png').\n It uses the Python \"str.endswith\" method and is passed directly\n prefix (bool, optional): If true, prepends the path to each result, otherwise\n only returns the name of the files found"}
{"_id": "q_536", "text": "Get parameters for ``crop`` for a random crop.\n\n Args:\n img (PIL Image): Image to be cropped.\n output_size (tuple): Expected output size of the crop.\n\n Returns:\n tuple: params (i, j, h, w) to be passed to ``crop`` for random crop."}
{"_id": "q_537", "text": "Get parameters for ``perspective`` for a random perspective transform.\n\n Args:\n width : width of the image.\n height : height of the image.\n\n Returns:\n List containing [top-left, top-right, bottom-right, bottom-left] of the original image,\n List containing [top-left, top-right, bottom-right, bottom-left] of the transformed image."}
{"_id": "q_538", "text": "Get parameters for ``crop`` for a random sized crop.\n\n Args:\n img (PIL Image): Image to be cropped.\n scale (tuple): range of size of the origin size cropped\n ratio (tuple): range of aspect ratio of the origin aspect ratio cropped\n\n Returns:\n tuple: params (i, j, h, w) to be passed to ``crop`` for a random\n sized crop."}
{"_id": "q_539", "text": "Get a randomized transform to be applied on image.\n\n Arguments are same as that of __init__.\n\n Returns:\n Transform which randomly adjusts brightness, contrast and\n saturation in a random order."}
{"_id": "q_540", "text": "Get parameters for affine transformation\n\n Returns:\n sequence: params to be passed to the affine transformation"}
{"_id": "q_541", "text": "Download the MNIST data if it doesn't exist in processed_folder already."}
{"_id": "q_542", "text": "Returns theme name.\n\n Checks in this order:\n 1. override\n 2. cookies\n 3. settings"}
{"_id": "q_543", "text": "Return autocompleter results"}
{"_id": "q_544", "text": "Render preferences page && save user preferences"}
{"_id": "q_545", "text": "Check if the searchQuery contains a bang, and create fitting autocompleter results"}
{"_id": "q_546", "text": "Eight-schools joint log-prob."}
{"_id": "q_547", "text": "Runs HMC on the eight-schools unnormalized posterior."}
{"_id": "q_548", "text": "Decorator to programmatically expand the docstring.\n\n Args:\n **kwargs: Keyword arguments to set. For each key-value pair `k` and `v`,\n the key is found as `${k}` in the docstring and replaced with `v`.\n\n Returns:\n Decorated function."}
{"_id": "q_549", "text": "Infer the original name passed into a distribution constructor.\n\n Distributions typically follow the pattern of\n with.name_scope(name) as name:\n super(name=name)\n so we attempt to reverse the name-scope transformation to allow\n addressing of RVs by the distribution's original, user-visible\n name kwarg.\n\n Args:\n distribution: a tfd.Distribution instance.\n Returns:\n simple_name: the original name passed into the Distribution.\n\n #### Example\n\n ```\n d1 = tfd.Normal(0., 1., name='x') # d1.name = 'x/'\n d2 = tfd.Normal(0., 1., name='x') # d2.name = 'x_2/'\n _simple_name(d2) # returns 'x'\n\n ```"}
{"_id": "q_550", "text": "RandomVariable constructor with a dummy name argument."}
{"_id": "q_551", "text": "Factory function to make random variable given distribution class."}
{"_id": "q_552", "text": "Compute one-step-ahead predictive distributions for all timesteps.\n\n Given samples from the posterior over parameters, return the predictive\n distribution over observations at each time `T`, given observations up\n through time `T-1`.\n\n Args:\n model: An instance of `StructuralTimeSeries` representing a\n time-series model. This represents a joint distribution over\n time-series and their parameters with batch shape `[b1, ..., bN]`.\n observed_time_series: `float` `Tensor` of shape\n `concat([sample_shape, model.batch_shape, [num_timesteps, 1]]) where\n `sample_shape` corresponds to i.i.d. observations, and the trailing `[1]`\n dimension may (optionally) be omitted if `num_timesteps > 1`. May\n optionally be an instance of `tfp.sts.MaskedTimeSeries` including a\n mask `Tensor` to encode the locations of missing observations.\n parameter_samples: Python `list` of `Tensors` representing posterior samples\n of model parameters, with shapes `[concat([[num_posterior_draws],\n param.prior.batch_shape, param.prior.event_shape]) for param in\n model.parameters]`. This may optionally also be a map (Python `dict`) of\n parameter names to `Tensor` values.\n\n Returns:\n forecast_dist: a `tfd.MixtureSameFamily` instance with event shape\n [num_timesteps] and\n batch shape `concat([sample_shape, model.batch_shape])`, with\n `num_posterior_draws` mixture components. The `t`th step represents the\n forecast distribution `p(observed_time_series[t] |\n observed_time_series[0:t-1], parameter_samples)`.\n\n #### Examples\n\n Suppose we've built a model and fit it to data using HMC:\n\n ```python\n day_of_week = tfp.sts.Seasonal(\n num_seasons=7,\n observed_time_series=observed_time_series,\n name='day_of_week')\n local_linear_trend = tfp.sts.LocalLinearTrend(\n observed_time_series=observed_time_series,\n name='local_linear_trend')\n model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],\n observed_time_series=observed_time_series)\n\n samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series)\n ```\n\n Passing the posterior samples into `one_step_predictive`, we construct a\n one-step-ahead predictive distribution:\n\n ```python\n one_step_predictive_dist = tfp.sts.one_step_predictive(\n model, observed_time_series, parameter_samples=samples)\n\n predictive_means = one_step_predictive_dist.mean()\n predictive_scales = one_step_predictive_dist.stddev()\n ```\n\n If using variational inference instead of HMC, we'd construct a forecast using\n samples from the variational posterior:\n\n ```python\n (variational_loss,\n variational_distributions) = tfp.sts.build_factored_variational_loss(\n model=model, observed_time_series=observed_time_series)\n\n # OMITTED: take steps to optimize variational loss\n\n samples = {k: q.sample(30) for (k, q) in variational_distributions.items()}\n one_step_predictive_dist = tfp.sts.one_step_predictive(\n model, observed_time_series, parameter_samples=samples)\n ```\n\n We can visualize the forecast by plotting:\n\n ```python\n from matplotlib import pylab as plt\n def plot_one_step_predictive(observed_time_series,\n forecast_mean,\n forecast_scale):\n plt.figure(figsize=(12, 6))\n num_timesteps = forecast_mean.shape[-1]\n c1, c2 = (0.12, 0.47, 0.71), (1.0, 0.5, 0.05)\n plt.plot(observed_time_series, label=\"observed time series\", color=c1)\n plt.plot(forecast_mean, label=\"one-step prediction\", color=c2)\n plt.fill_between(np.arange(num_timesteps),\n forecast_mean - 2 * forecast_scale,\n forecast_mean + 2 * forecast_scale,\n alpha=0.1, color=c2)\n plt.legend()\n\n plot_one_step_predictive(observed_time_series,\n forecast_mean=predictive_means,\n forecast_scale=predictive_scales)\n ```\n\n To detect anomalous timesteps, we check whether the observed value at each\n step is within a 95% predictive interval, i.e., two standard deviations from\n the mean:\n\n ```python\n z_scores = ((observed_time_series[..., 1:] - predictive_means[..., :-1])\n / predictive_scales[..., :-1])\n anomalous_timesteps = tf.boolean_mask(\n tf.range(1, num_timesteps),\n tf.abs(z_scores) > 2.0)\n ```"}
{"_id": "q_553", "text": "Construct predictive distribution over future observations.\n\n Given samples from the posterior over parameters, return the predictive\n distribution over future observations for num_steps_forecast timesteps.\n\n Args:\n model: An instance of `StructuralTimeSeries` representing a\n time-series model. This represents a joint distribution over\n time-series and their parameters with batch shape `[b1, ..., bN]`.\n observed_time_series: `float` `Tensor` of shape\n `concat([sample_shape, model.batch_shape, [num_timesteps, 1]])` where\n `sample_shape` corresponds to i.i.d. observations, and the trailing `[1]`\n dimension may (optionally) be omitted if `num_timesteps > 1`. May\n optionally be an instance of `tfp.sts.MaskedTimeSeries` including a\n mask `Tensor` to encode the locations of missing observations.\n parameter_samples: Python `list` of `Tensors` representing posterior samples\n of model parameters, with shapes `[concat([[num_posterior_draws],\n param.prior.batch_shape, param.prior.event_shape]) for param in\n model.parameters]`. This may optionally also be a map (Python `dict`) of\n parameter names to `Tensor` values.\n num_steps_forecast: scalar `int` `Tensor` number of steps to forecast.\n\n Returns:\n forecast_dist: a `tfd.MixtureSameFamily` instance with event shape\n [num_steps_forecast, 1] and batch shape\n `concat([sample_shape, model.batch_shape])`, with `num_posterior_draws`\n mixture components.\n\n #### Examples\n\n Suppose we've built a model and fit it to data using HMC:\n\n ```python\n day_of_week = tfp.sts.Seasonal(\n num_seasons=7,\n observed_time_series=observed_time_series,\n name='day_of_week')\n local_linear_trend = tfp.sts.LocalLinearTrend(\n observed_time_series=observed_time_series,\n name='local_linear_trend')\n model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],\n observed_time_series=observed_time_series)\n\n samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series)\n ```\n\n Passing the posterior samples into `forecast`, we construct a forecast\n distribution:\n\n ```python\n forecast_dist = tfp.sts.forecast(model, observed_time_series,\n parameter_samples=samples,\n num_steps_forecast=50)\n\n forecast_mean = forecast_dist.mean()[..., 0] # shape: [50]\n forecast_scale = forecast_dist.stddev()[..., 0] # shape: [50]\n forecast_samples = forecast_dist.sample(10)[..., 0] # shape: [10, 50]\n ```\n\n If using variational inference instead of HMC, we'd construct a forecast using\n samples from the variational posterior:\n\n ```python\n (variational_loss,\n variational_distributions) = tfp.sts.build_factored_variational_loss(\n model=model, observed_time_series=observed_time_series)\n\n # OMITTED: take steps to optimize variational loss\n\n samples = {k: q.sample(30) for (k, q) in variational_distributions.items()}\n forecast_dist = tfp.sts.forecast(model, observed_time_series,\n parameter_samples=samples,\n num_steps_forecast=50)\n ```\n\n We can visualize the forecast by plotting:\n\n ```python\n from matplotlib import pylab as plt\n def plot_forecast(observed_time_series,\n forecast_mean,\n forecast_scale,\n forecast_samples):\n plt.figure(figsize=(12, 6))\n\n num_steps = observed_time_series.shape[-1]\n num_steps_forecast = forecast_mean.shape[-1]\n num_steps_train = num_steps - num_steps_forecast\n\n c1, c2 = (0.12, 0.47, 0.71), (1.0, 0.5, 0.05)\n plt.plot(np.arange(num_steps), observed_time_series,\n lw=2, color=c1, label='ground truth')\n\n forecast_steps = np.arange(num_steps_train,\n num_steps_train+num_steps_forecast)\n plt.plot(forecast_steps, forecast_samples.T, lw=1, color=c2, alpha=0.1)\n plt.plot(forecast_steps, forecast_mean, lw=2, ls='--', color=c2,\n label='forecast')\n plt.fill_between(forecast_steps,\n forecast_mean - 2 * forecast_scale,\n forecast_mean + 2 * forecast_scale, color=c2, alpha=0.2)\n\n plt.xlim([0, num_steps])\n plt.legend()\n\n plot_forecast(observed_time_series,\n forecast_mean=forecast_mean,\n forecast_scale=forecast_scale,\n forecast_samples=forecast_samples)\n ```"}
{"_id": "q_554", "text": "Returns `max` or `mask` if `max` is not finite."}
{"_id": "q_555", "text": "Assert `x` has rank equal to `rank` or smaller.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.assert_rank_at_most(x, 2)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n rank: Scalar `Tensor`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional).\n Defaults to \"assert_rank_at_most\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank or lower.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n\n Raises:\n ValueError: If static checks determine `x` has wrong rank."}
{"_id": "q_556", "text": "OneHotCategorical helper computing probs, cdf, etc over its support."}
{"_id": "q_557", "text": "Return a convert-to-tensor func, given a name, config, callable, etc."}
{"_id": "q_558", "text": "Yields the top-most interceptor on the thread-local interceptor stack.\n\n Operations may be intercepted by multiple nested interceptors. Once reached,\n an operation can be forwarded through nested interceptors until resolved.\n To allow for nesting, implement interceptors by re-wrapping their first\n argument (`f`) as an `interceptable`. To avoid nesting, manipulate the\n computation without using `interceptable`.\n\n This function allows for nesting by manipulating the thread-local interceptor\n stack, so that operations are intercepted in the order of interceptor nesting.\n\n #### Examples\n\n ```python\n from tensorflow_probability import edward2 as ed\n\n def model():\n x = ed.Normal(loc=0., scale=1., name=\"x\")\n y = ed.Normal(loc=x, scale=1., name=\"y\")\n return x + y\n\n def double(f, *args, **kwargs):\n return 2. * interceptable(f)(*args, **kwargs)\n\n def set_y(f, *args, **kwargs):\n if kwargs.get(\"name\") == \"y\":\n kwargs[\"value\"] = 0.42\n return interceptable(f)(*args, **kwargs)\n\n with interception(double):\n with interception(set_y):\n z = model()\n ```\n\n This will firstly put `double` on the stack, and then `set_y`,\n resulting in the stack:\n (TOP) set_y -> double -> apply (BOTTOM)\n\n The execution of `model` is then (top lines are current stack state):\n 1) (TOP) set_y -> double -> apply (BOTTOM);\n `ed.Normal(0., 1., \"x\")` is intercepted by `set_y`, and as the name is not \"y\"\n the operation is simply forwarded to the next interceptor on the stack.\n\n 2) (TOP) double -> apply (BOTTOM);\n `ed.Normal(0., 1., \"x\")` is intercepted by `double`, to produce\n `2*ed.Normal(0., 1., \"x\")`, with the operation being forwarded down the stack.\n\n 3) (TOP) apply (BOTTOM);\n `ed.Normal(0., 1., \"x\")` is intercepted by `apply`, which simply calls the\n constructor.\n\n (At this point, the nested calls to `get_next_interceptor()`, produced by\n forwarding operations, exit, and the current stack is again:\n (TOP) set_y -> double -> apply (BOTTOM))\n\n 4) (TOP) set_y -> double -> apply (BOTTOM);\n `ed.Normal(0., 1., \"y\")` is intercepted by `set_y`,\n the value of `y` is set to 0.42 and the operation is forwarded down the stack.\n\n 5) (TOP) double -> apply (BOTTOM);\n `ed.Normal(0., 1., \"y\")` is intercepted by `double`, to produce\n `2*ed.Normal(0., 1., \"y\")`, with the operation being forwarded down the stack.\n\n 6) (TOP) apply (BOTTOM);\n `ed.Normal(0., 1., \"y\")` is intercepted by `apply`, which simply calls the\n constructor.\n\n The final values for `x` and `y` inside of `model()` are tensors where `x` is\n a random draw from Normal(0., 1.) doubled, and `y` is a constant 0.84, thus\n z = 2 * Normal(0., 1.) + 0.84."}
{"_id": "q_559", "text": "Decorator that wraps `func` so that its execution is intercepted.\n\n The wrapper passes `func` to the interceptor for the current thread.\n\n If there is no next interceptor, we perform an \"immediate\" call to `func`.\n That is, `func` terminates without forwarding its execution to another\n interceptor.\n\n Args:\n func: Function to wrap.\n\n Returns:\n The decorated function."}
{"_id": "q_560", "text": "Generates synthetic data for binary classification.\n\n Args:\n num_examples: The number of samples to generate (scalar Python `int`).\n input_size: The input space dimension (scalar Python `int`).\n weights_prior_stddev: The prior standard deviation of the weight\n vector. (scalar Python `float`).\n\n Returns:\n random_weights: Sampled weights as a Numpy `array` of shape\n `[input_size]`.\n random_bias: Sampled bias as a scalar Python `float`.\n design_matrix: Points sampled uniformly from the cube `[-1,\n 1]^{input_size}`, as a Numpy `array` of shape `(num_examples,\n input_size)`.\n labels: Labels sampled from the logistic model `p(label=1) =\n logistic(dot(features, random_weights) + random_bias)`, as a Numpy\n `int32` `array` of shape `(num_examples, 1)`."}
{"_id": "q_561", "text": "Build a Dataset iterator for supervised classification.\n\n Args:\n x: Numpy `array` of features, indexed by the first dimension.\n y: Numpy `array` of labels, with the same first dimension as `x`.\n batch_size: Number of elements in each training batch.\n\n Returns:\n batch_features: `Tensor` feed features, of shape\n `[batch_size] + x.shape[1:]`.\n batch_labels: `Tensor` feed of labels, of shape\n `[batch_size] + y.shape[1:]`."}
{"_id": "q_562", "text": "Validate `map_values` if `validate_args`==True."}
{"_id": "q_563", "text": "Calls `fn` and returns the gradients with respect to `fn`'s first output.\n\n Args:\n fn: A `TransitionOperator`.\n args: Arguments to `fn`\n\n Returns:\n ret: First output of `fn`.\n extra: Second output of `fn`.\n grads: Gradients of `ret` with respect to `args`."}
{"_id": "q_564", "text": "Maybe broadcasts `from_structure` to `to_structure`.\n\n If `from_structure` is a singleton, it is tiled to match the structure of\n `to_structure`. Note that the elements in `from_structure` are not copied if\n this tiling occurs.\n\n Args:\n from_structure: A structure.\n to_structure: A structure.\n\n Returns:\n new_from_structure: Same structure as `to_structure`."}
{"_id": "q_565", "text": "Transforms a log-prob function using a bijector.\n\n This takes a log-prob function and creates a new log-prob function that now\n takes state in the domain of the bijector, forward transforms that state\n and calls the original log-prob function. It then returns the log-probability\n that correctly accounts for this transformation.\n\n The forward-transformed state is pre-pended to the original log-prob\n function's extra returns and returned as the new extra return.\n\n For convenience you can also pass the initial state (in the original space),\n and this function will return the inverse transformed as the 2nd return value.\n You'd use this to initialize MCMC operators that operate in the transformed\n space.\n\n Args:\n log_prob_fn: Log prob fn.\n bijector: Bijector(s), must be of the same structure as the `log_prob_fn`\n inputs.\n init_state: Initial state, in the original space.\n\n Returns:\n transformed_log_prob_fn: Transformed log prob fn.\n transformed_init_state: If `init_state` is provided. Initial state in the\n transformed space."}
{"_id": "q_566", "text": "Leapfrog `TransitionOperator`.\n\n Args:\n leapfrog_step_state: LeapFrogStepState.\n step_size: Step size, structure broadcastable to the `target_log_prob_fn`\n state.\n target_log_prob_fn: Target log prob fn.\n kinetic_energy_fn: Kinetic energy fn.\n\n Returns:\n leapfrog_step_state: LeapFrogStepState.\n leapfrog_step_extras: LeapFrogStepExtras."}
{"_id": "q_567", "text": "Metropolis-Hastings step.\n\n This probabilistically chooses between `current_state` and `proposed_state`\n based on the `energy_change` so as to preserve detailed balance.\n\n Energy change is the negative of `log_accept_ratio`.\n\n Args:\n current_state: Current state.\n proposed_state: Proposed state.\n energy_change: E(proposed_state) - E(previous_state).\n seed: For reproducibility.\n\n Returns:\n new_state: The chosen state.\n is_accepted: Whether the proposed state was accepted.\n log_uniform: The random number that was used to select between the two\n states."}
{"_id": "q_568", "text": "Construct `scale` from various components.\n\n Args:\n identity_multiplier: floating point rank 0 `Tensor` representing a scaling\n done to the identity matrix.\n diag: Floating-point `Tensor` representing the diagonal matrix.`diag` has\n shape `[N1, N2, ... k]`, which represents a k x k diagonal matrix.\n tril: Floating-point `Tensor` representing the lower triangular matrix.\n `tril` has shape `[N1, N2, ... k, k]`, which represents a k x k lower\n triangular matrix.\n perturb_diag: Floating-point `Tensor` representing the diagonal matrix of\n the low rank update.\n perturb_factor: Floating-point `Tensor` representing factor matrix.\n shift: Floating-point `Tensor` representing `shift in `scale @ X + shift`.\n validate_args: Python `bool` indicating whether arguments should be\n checked for correctness.\n dtype: `DType` for arg `Tensor` conversions.\n\n Returns:\n scale. In the case of scaling by a constant, scale is a\n floating point `Tensor`. Otherwise, scale is a `LinearOperator`.\n\n Raises:\n ValueError: if all of `tril`, `diag` and `identity_multiplier` are `None`."}
{"_id": "q_569", "text": "Returns a callable that adds a random normal perturbation to the input.\n\n This function returns a callable that accepts a Python `list` of `Tensor`s of\n any shapes and `dtypes` representing the state parts of the `current_state`\n and a random seed. The supplied argument `scale` must be a `Tensor` or Python\n `list` of `Tensor`s representing the scale of the generated\n proposal. `scale` must broadcast with the state parts of `current_state`.\n The callable adds a sample from a zero-mean normal distribution with the\n supplied scales to each state part and returns a same-type `list` of `Tensor`s\n as the state parts of `current_state`.\n\n Args:\n scale: a `Tensor` or Python `list` of `Tensor`s of any shapes and `dtypes`\n controlling the scale of the normal proposal distribution.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: 'random_walk_normal_fn'.\n\n Returns:\n random_walk_normal_fn: A callable accepting a Python `list` of `Tensor`s\n representing the state parts of the `current_state` and an `int`\n representing the random seed to be used to generate the proposal. The\n callable returns the same-type `list` of `Tensor`s as the input and\n represents the proposal for the RWM algorithm."}
{"_id": "q_570", "text": "Returns a callable that adds a random uniform perturbation to the input.\n\n For more details on `random_walk_uniform_fn`, see\n `random_walk_normal_fn`. `scale` might\n be a `Tensor` or a list of `Tensor`s that should broadcast with state parts\n of the `current_state`. The generated uniform perturbation is sampled as a\n uniform point on the rectangle `[-scale, scale]`.\n\n Args:\n scale: a `Tensor` or Python `list` of `Tensor`s of any shapes and `dtypes`\n controlling the upper and lower bound of the uniform proposal\n distribution.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: 'random_walk_uniform_fn'.\n\n Returns:\n random_walk_uniform_fn: A callable accepting a Python `list` of `Tensor`s\n representing the state parts of the `current_state` and an `int`\n representing the random seed used to generate the proposal. The callable\n returns the same-type `list` of `Tensor`s as the input and represents the\n proposal for the RWM algorithm."}
{"_id": "q_571", "text": "Get a list of num_components batchwise probabilities."}
{"_id": "q_572", "text": "Validate `outcomes`, `logits` and `probs`'s shapes."}
{"_id": "q_573", "text": "Bayesian logistic regression, which returns labels given features."}
{"_id": "q_574", "text": "Builds the Covertype data set."}
{"_id": "q_575", "text": "Cholesky factor of the covariance matrix of vector-variate random samples.\n\n This function can be use to fit a multivariate normal to data.\n\n ```python\n tf.enable_eager_execution()\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n # Assume data.shape = (1000, 2). 1000 samples of a random variable in R^2.\n observed_data = read_data_samples(...)\n\n # The mean is easy\n mu = tf.reduce_mean(observed_data, axis=0)\n\n # Get the scale matrix\n L = tfp.stats.cholesky_covariance(observed_data)\n\n # Make the best fit multivariate normal (under maximum likelihood condition).\n mvn = tfd.MultivariateNormalTriL(loc=mu, scale_tril=L)\n\n # Plot contours of the pdf.\n xs, ys = tf.meshgrid(\n tf.linspace(-5., 5., 50), tf.linspace(-5., 5., 50), indexing='ij')\n xy = tf.stack((tf.reshape(xs, [-1]), tf.reshape(ys, [-1])), axis=-1)\n pdf = tf.reshape(mvn.prob(xy), (50, 50))\n CS = plt.contour(xs, ys, pdf, 10)\n plt.clabel(CS, inline=1, fontsize=10)\n ```\n\n Why does this work?\n Given vector-variate random variables `X = (X1, ..., Xd)`, one may obtain the\n sample covariance matrix in `R^{d x d}` (see `tfp.stats.covariance`).\n\n The [Cholesky factor](https://en.wikipedia.org/wiki/Cholesky_decomposition)\n of this matrix is analogous to standard deviation for scalar random variables:\n Suppose `X` has covariance matrix `C`, with Cholesky factorization `C = L L^T`\n Then multiplying a vector of iid random variables which have unit variance by\n `L` produces a vector with covariance `L L^T`, which is the same as `X`.\n\n ```python\n observed_data = read_data_samples(...)\n L = tfp.stats.cholesky_covariance(observed_data, sample_axis=0)\n\n # Make fake_data with the same covariance as observed_data.\n uncorrelated_normal = tf.random_normal(shape=(500, 10))\n fake_data = tf.linalg.matvec(L, uncorrelated_normal)\n ```\n\n Args:\n x: Numeric `Tensor`. The rightmost dimension of `x` indexes events. E.g.\n dimensions of a random vector.\n sample_axis: Scalar or vector `Tensor` designating axis holding samples.\n Default value: `0` (leftmost dimension). Cannot be the rightmost dimension\n (since this indexes events).\n keepdims: Boolean. Whether to keep the sample axis as singletons.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., `'covariance'`).\n\n Returns:\n chol: `Tensor` of same `dtype` as `x`. The last two dimensions hold\n lower triangular matrices (the Cholesky factors)."}
{"_id": "q_576", "text": "Estimate standard deviation using samples.\n\n Given `N` samples of scalar valued random variable `X`, standard deviation may\n be estimated as\n\n ```none\n Stddev[X] := Sqrt[Var[X]],\n Var[X] := N^{-1} sum_{n=1}^N (X_n - Xbar) Conj{(X_n - Xbar)},\n Xbar := N^{-1} sum_{n=1}^N X_n\n ```\n\n ```python\n x = tf.random_normal(shape=(100, 2, 3))\n\n # stddev[i, j] is the sample standard deviation of the (i, j) batch member.\n stddev = tfp.stats.stddev(x, sample_axis=0)\n ```\n\n Scaling a unit normal by a standard deviation produces normal samples\n with that standard deviation.\n\n ```python\n observed_data = read_data_samples(...)\n stddev = tfp.stats.stddev(observed_data)\n\n # Make fake_data with the same standard deviation as observed_data.\n fake_data = stddev * tf.random_normal(shape=(100,))\n ```\n\n Notice we divide by `N` (the numpy default), which does not create `NaN`\n when `N = 1`, but is slightly biased.\n\n Args:\n x: A numeric `Tensor` holding samples.\n sample_axis: Scalar or vector `Tensor` designating axis holding samples, or\n `None` (meaning all axis hold samples).\n Default value: `0` (leftmost dimension).\n keepdims: Boolean. Whether to keep the sample axis as singletons.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., `'stddev'`).\n\n Returns:\n stddev: A `Tensor` of same `dtype` as the `x`, and rank equal to\n `rank(x) - len(sample_axis)`"}
{"_id": "q_577", "text": "Rectify possibly negatively axis. Prefer return Python list."}
{"_id": "q_578", "text": "A version of squeeze that works with dynamic axis."}
{"_id": "q_579", "text": "Standardize input `x` to a unit normal."}
{"_id": "q_580", "text": "r\"\"\"Returns a sample from the `dim` dimensional Halton sequence.\n\n Warning: The sequence elements take values only between 0 and 1. Care must be\n taken to appropriately transform the domain of a function if it differs from\n the unit cube before evaluating integrals using Halton samples. It is also\n important to remember that quasi-random numbers without randomization are not\n a replacement for pseudo-random numbers in every context. Quasi random numbers\n are completely deterministic and typically have significant negative\n autocorrelation unless randomization is used.\n\n Computes the members of the low discrepancy Halton sequence in dimension\n `dim`. The `dim`-dimensional sequence takes values in the unit hypercube in\n `dim` dimensions. Currently, only dimensions up to 1000 are supported. The\n prime base for the k-th axes is the k-th prime starting from 2. For example,\n if `dim` = 3, then the bases will be [2, 3, 5] respectively and the first\n element of the non-randomized sequence will be: [0.5, 0.333, 0.2]. For a more\n complete description of the Halton sequences see\n [here](https://en.wikipedia.org/wiki/Halton_sequence). For low discrepancy\n sequences and their applications see\n [here](https://en.wikipedia.org/wiki/Low-discrepancy_sequence).\n\n If `randomized` is true, this function produces a scrambled version of the\n Halton sequence introduced by [Owen (2017)][1]. For the advantages of\n randomization of low discrepancy sequences see [here](\n https://en.wikipedia.org/wiki/Quasi-Monte_Carlo_method#Randomization_of_quasi-Monte_Carlo).\n\n The number of samples produced is controlled by the `num_results` and\n `sequence_indices` parameters. The user must supply either `num_results` or\n `sequence_indices` but not both.\n The former is the number of samples to produce starting from the first\n element. If `sequence_indices` is given instead, the specified elements of\n the sequence are generated. For example, sequence_indices=tf.range(10) is\n equivalent to specifying n=10.\n\n #### Examples\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n # Produce the first 1000 members of the Halton sequence in 3 dimensions.\n num_results = 1000\n dim = 3\n sample = tfp.mcmc.sample_halton_sequence(\n dim,\n num_results=num_results,\n seed=127)\n\n # Evaluate the integral of x_1 * x_2^2 * x_3^3 over the three dimensional\n # hypercube.\n powers = tf.range(1.0, limit=dim + 1)\n integral = tf.reduce_mean(tf.reduce_prod(sample ** powers, axis=-1))\n true_value = 1.0 / tf.reduce_prod(powers + 1.0)\n with tf.Session() as session:\n values = session.run((integral, true_value))\n\n # Produces a relative absolute error of 1.7%.\n print (\"Estimated: %f, True Value: %f\" % values)\n\n # Now skip the first 1000 samples and recompute the integral with the next\n # thousand samples. The sequence_indices argument can be used to do this.\n\n\n sequence_indices = tf.range(start=1000, limit=1000 + num_results,\n dtype=tf.int32)\n sample_leaped = tfp.mcmc.sample_halton_sequence(\n dim,\n sequence_indices=sequence_indices,\n seed=111217)\n\n integral_leaped = tf.reduce_mean(tf.reduce_prod(sample_leaped ** powers,\n axis=-1))\n with tf.Session() as session:\n values = session.run((integral_leaped, true_value))\n # Now produces a relative absolute error of 0.05%.\n print (\"Leaped Estimated: %f, True Value: %f\" % values)\n ```\n\n Args:\n dim: Positive Python `int` representing each sample's `event_size.` Must\n not be greater than 1000.\n num_results: (Optional) Positive scalar `Tensor` of dtype int32. The number\n of samples to generate. Either this parameter or sequence_indices must\n be specified but not both. If this parameter is None, then the behaviour\n is determined by the `sequence_indices`.\n Default value: `None`.\n sequence_indices: (Optional) `Tensor` of dtype int32 and rank 1. The\n elements of the sequence to compute specified by their position in the\n sequence. The entries index into the Halton sequence starting with 0 and\n hence, must be whole numbers. For example, sequence_indices=[0, 5, 6] will\n produce the first, sixth and seventh elements of the sequence. If this\n parameter is None, then the `num_results` parameter must be specified\n which gives the number of desired samples starting from the first sample.\n Default value: `None`.\n dtype: (Optional) The dtype of the sample. One of: `float16`, `float32` or\n `float64`.\n Default value: `tf.float32`.\n randomized: (Optional) bool indicating whether to produce a randomized\n Halton sequence. If True, applies the randomization described in\n [Owen (2017)][1].\n Default value: `True`.\n seed: (Optional) Python integer to seed the random number generator. Only\n used if `randomized` is True. If not supplied and `randomized` is True,\n no seed is set.\n Default value: `None`.\n name: (Optional) Python `str` describing ops managed by this function. If\n not supplied the name of this function is used.\n Default value: \"sample_halton_sequence\".\n\n Returns:\n halton_elements: Elements of the Halton sequence. `Tensor` of supplied dtype\n and `shape` `[num_results, dim]` if `num_results` was specified or shape\n `[s, dim]` where s is the size of `sequence_indices` if `sequence_indices`\n were specified.\n\n Raises:\n ValueError: if both `sequence_indices` and `num_results` were specified or\n if dimension `dim` is less than 1 or greater than 1000.\n\n #### References\n\n [1]: Art B. Owen. A randomized Halton algorithm in R. _arXiv preprint\n arXiv:1706.02808_, 2017. https://arxiv.org/abs/1706.02808"}
{"_id": "q_581", "text": "Uniform iid sample from the space of permutations.\n\n Draws a sample of size `num_results` from the group of permutations of degrees\n specified by the `dims` tensor. These are packed together into one tensor\n such that each row is one sample from each of the dimensions in `dims`. For\n example, if dims = [2,3] and num_results = 2, the result is a tensor of shape\n [2, 2 + 3] and the first row of the result might look like:\n [1, 0, 2, 0, 1]. The first two elements are a permutation over 2 elements\n while the next three are a permutation over 3 elements.\n\n Args:\n num_results: A positive scalar `Tensor` of integral type. The number of\n draws from the discrete uniform distribution over the permutation groups.\n dims: A 1D `Tensor` of the same dtype as `num_results`. The degree of the\n permutation groups from which to sample.\n seed: (Optional) Python integer to seed the random number generator.\n\n Returns:\n permutations: A `Tensor` of shape `[num_results, sum(dims)]` and the same\n dtype as `dims`."}
{"_id": "q_582", "text": "Generates starting points for the Halton sequence procedure.\n\n The k'th element of the sequence is generated starting from a positive integer\n which must be distinct for each `k`. It is conventional to choose the starting\n point as `k` itself (or `k+1` if k is zero based). This function generates\n the starting integers for the required elements and reshapes the result for\n later use.\n\n Args:\n num_results: Positive scalar `Tensor` of dtype int32. The number of samples\n to generate. If this parameter is supplied, then `sequence_indices`\n should be None.\n sequence_indices: `Tensor` of dtype int32 and rank 1. The entries\n index into the Halton sequence starting with 0 and hence, must be whole\n numbers. For example, sequence_indices=[0, 5, 6] will produce the first,\n sixth and seventh elements of the sequence. If this parameter is not None\n then `n` must be None.\n dtype: The dtype of the sample. One of `float32` or `float64`.\n Default is `float32`.\n name: Python `str` name which describes ops created by this function.\n\n Returns:\n indices: `Tensor` of dtype `dtype` and shape = `[n, 1, 1]`."}
{"_id": "q_583", "text": "Computes the number of terms in the place value expansion.\n\n Let num = a0 + a1 b + a2 b^2 + ... ak b^k be the place value expansion of\n `num` in base b (ak <> 0). This function computes and returns `k+1` for each\n base `b` specified in `bases`.\n\n This can be inferred from the base `b` logarithm of `num` as follows:\n $$k = Floor(log_b (num)) + 1 = Floor( log(num) / log(b)) + 1$$\n\n Args:\n num: Scalar `Tensor` of dtype either `float32` or `float64`. The number to\n compute the base expansion size of.\n bases: `Tensor` of the same dtype as num. The bases to compute the size\n against.\n\n Returns:\n Tensor of same dtype and shape as `bases` containing the size of num when\n written in that base."}
{"_id": "q_584", "text": "Returns sorted array of primes such that `2 <= prime < n`."}
{"_id": "q_585", "text": "Returns the machine epsilon for the supplied dtype."}
{"_id": "q_586", "text": "The Hager Zhang line search algorithm.\n\n Performs an inexact line search based on the algorithm of\n [Hager and Zhang (2006)][2].\n The univariate objective function `value_and_gradients_function` is typically\n generated by projecting a multivariate objective function along a search\n direction. Suppose the multivariate function to be minimized is\n `g(x1,x2, .. xn)`. Let (d1, d2, ..., dn) be the direction along which we wish\n to perform a line search. Then the projected univariate function to be used\n for line search is\n\n ```None\n f(a) = g(x1 + d1 * a, x2 + d2 * a, ..., xn + dn * a)\n ```\n\n The directional derivative along (d1, d2, ..., dn) is needed for this\n procedure. This also corresponds to the derivative of the projected function\n `f(a)` with respect to `a`. Note that this derivative must be negative for\n `a = 0` if the direction is a descent direction.\n\n The usual stopping criteria for the line search is the satisfaction of the\n (weak) Wolfe conditions. For details of the Wolfe conditions, see\n ref. [3]. On a finite precision machine, the exact Wolfe conditions can\n be difficult to satisfy when one is very close to the minimum and as argued\n by [Hager and Zhang (2005)][1], one can only expect the minimum to be\n determined within square root of machine precision. To improve the situation,\n they propose to replace the Wolfe conditions with an approximate version\n depending on the derivative of the function which is applied only when one\n is very close to the minimum. The following algorithm implements this\n enhanced scheme.\n\n ### Usage:\n\n Primary use of line search methods is as an internal component of a class of\n optimization algorithms (called line search based methods as opposed to\n trust region methods). Hence, the end user will typically not want to access\n line search directly. In particular, inexact line search should not be\n confused with a univariate minimization method. The stopping criteria of line\n search is the satisfaction of Wolfe conditions and not the discovery of the\n minimum of the function.\n\n With this caveat in mind, the following example illustrates the standalone\n usage of the line search.\n\n ```python\n # Define value and gradient namedtuple\n ValueAndGradient = namedtuple('ValueAndGradient', ['x', 'f', 'df'])\n # Define a quadratic target with minimum at 1.3.\n def value_and_gradients_function(x):\n return ValueAndGradient(x=x, f=(x - 1.3) ** 2, df=2 * (x-1.3))\n # Set initial step size.\n step_size = tf.constant(0.1)\n ls_result = tfp.optimizer.linesearch.hager_zhang(\n value_and_gradients_function, initial_step_size=step_size)\n # Evaluate the results.\n with tf.Session() as session:\n results = session.run(ls_result)\n # Ensure convergence.\n assert results.converged\n # If the line search converged, the left and the right ends of the\n # bracketing interval are identical.\n assert results.left.x == result.right.x\n # Print the number of evaluations and the final step size.\n print (\"Final Step Size: %f, Evaluations: %d\" % (results.left.x,\n results.func_evals))\n ```\n\n ### References:\n [1]: William Hager, Hongchao Zhang. A new conjugate gradient method with\n guaranteed descent and an efficient line search. SIAM J. Optim., Vol 16. 1,\n pp. 170-172. 2005.\n https://www.math.lsu.edu/~hozhang/papers/cg_descent.pdf\n\n [2]: William Hager, Hongchao Zhang. Algorithm 851: CG_DESCENT, a conjugate\n gradient method with guaranteed descent. ACM Transactions on Mathematical\n Software, Vol 32., 1, pp. 113-137. 2006.\n http://users.clas.ufl.edu/hager/papers/CG/cg_compare.pdf\n\n [3]: Jorge Nocedal, Stephen Wright. Numerical Optimization. Springer Series in\n Operations Research. pp 33-36. 2006\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that\n correspond to scalar tensors of real dtype containing the point at which\n the function was evaluated, the value of the function, and its\n derivative at that point. The other namedtuple fields, if present,\n should be tensors or sequences (possibly nested) of tensors.\n In usual optimization application, this function would be generated by\n projecting the multivariate objective function along some specific\n direction. The direction is determined by some other procedure but should\n be a descent direction (i.e. the derivative of the projected univariate\n function must be negative at 0.).\n Alternatively, the function may represent the batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned\n namedtuple should each be a tensor of shape [n], with the corresponding\n input points, function values, and derivatives at those input points.\n initial_step_size: (Optional) Scalar positive `Tensor` of real dtype, or\n a tensor of shape [n] in batching mode. The initial value (or values) to\n try to bracket the minimum. Default is `1.` as a float32.\n Note that this point need not necessarily bracket the minimum for the line\n search to work correctly but the supplied value must be greater than 0.\n A good initial value will make the search converge faster.\n value_at_initial_step: (Optional) The full return value of evaluating\n value_and_gradients_function at initial_step_size, i.e. a namedtuple with\n 'x', 'f', 'df', if already known by the caller. If supplied the value of\n `initial_step_size` will be ignored, otherwise the tuple will be computed\n by evaluating value_and_gradients_function.\n value_at_zero: (Optional) The full return value of\n value_and_gradients_function at `0.`, i.e. a namedtuple with\n 'x', 'f', 'df', if already known by the caller. If not supplied the tuple\n will be computed by evaluating value_and_gradients_function.\n converged: (Optional) In batching mode a tensor of shape [n], indicating\n batch members which have already converged and no further search should\n be performed. These batch members are also reported as converged in the\n output, and both their `left` and `right` are set to the\n `value_at_initial_step`.\n threshold_use_approximate_wolfe_condition: Scalar positive `Tensor`\n of real dtype. Corresponds to the parameter 'epsilon' in\n [Hager and Zhang (2006)][2]. Used to estimate the\n threshold at which the line search switches to approximate Wolfe\n conditions.\n shrinkage_param: Scalar positive Tensor of real dtype. Must be less than\n `1.`. Corresponds to the parameter `gamma` in\n [Hager and Zhang (2006)][2].\n If the secant**2 step does not shrink the bracketing interval by this\n proportion, a bisection step is performed to reduce the interval width.\n expansion_param: Scalar positive `Tensor` of real dtype. Must be greater\n than `1.`. Used to expand the initial interval in case it does not bracket\n a minimum. Corresponds to `rho` in [Hager and Zhang (2006)][2].\n sufficient_decrease_param: Positive scalar `Tensor` of real dtype.\n Bounded above by the curvature param. Corresponds to `delta` in the\n terminology of [Hager and Zhang (2006)][2].\n curvature_param: Positive scalar `Tensor` of real dtype. Bounded above\n by `1.`. Corresponds to 'sigma' in the terminology of\n [Hager and Zhang (2006)][2].\n step_size_shrink_param: Positive scalar `Tensor` of real dtype. Bounded\n above by `1`. If the supplied step size is too big (i.e. either the\n objective value or the gradient at that point is infinite), this factor\n is used to shrink the step size until it is finite.\n max_iterations: Positive scalar `Tensor` of integral dtype or None. The\n maximum number of iterations to perform in the line search. The number of\n iterations used to bracket the minimum are also counted against this\n parameter.\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'hager_zhang' is used.\n\n Returns:\n results: A namedtuple containing the following attributes.\n converged: Boolean `Tensor` of shape [n]. Whether a point satisfying\n Wolfe/Approx wolfe was found.\n failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g.\n if either the objective function or the gradient are not finite at\n an evaluation point.\n iterations: Scalar int32 `Tensor`. Number of line search iterations made.\n func_evals: Scalar int32 `Tensor`. Number of function evaluations made.\n left: A namedtuple, as returned by value_and_gradients_function,\n of the left end point of the final bracketing interval. Values are\n equal to those of `right` on batch members where converged is True.\n Otherwise, it corresponds to the last interval computed.\n right: A namedtuple, as returned by value_and_gradients_function,\n of the right end point of the final bracketing interval. Values are\n equal to those of `left` on batch members where converged is True.\n Otherwise, it corresponds to the last interval computed."}
{"_id": "q_587", "text": "Shrinks the input step size until the value and grad become finite."}
{"_id": "q_588", "text": "The main loop of line search after the minimum has been bracketed.\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that\n correspond to scalar tensors of real dtype containing the point at which\n the function was evaluated, the value of the function, and its\n derivative at that point. The other namedtuple fields, if present,\n should be tensors or sequences (possibly nested) of tensors.\n In usual optimization application, this function would be generated by\n projecting the multivariate objective function along some specific\n direction. The direction is determined by some other procedure but should\n be a descent direction (i.e. the derivative of the projected univariate\n function must be negative at 0.).\n Alternatively, the function may represent the batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned\n namedtuple should each be a tensor of shape [n], with the corresponding\n input points, function values, and derivatives at those input points.\n search_interval: Instance of `HagerZhangLineSearchResults` containing\n the current line search interval.\n val_0: A namedtuple as returned by value_and_gradients_function evaluated\n at `0.`. The gradient must be negative (i.e. must be a descent direction).\n f_lim: Scalar `Tensor` of float dtype.\n max_iterations: Positive scalar `Tensor` of integral dtype. The maximum\n number of iterations to perform in the line search. The number of\n iterations used to bracket the minimum are also counted against this\n parameter.\n sufficient_decrease_param: Positive scalar `Tensor` of real dtype.\n Bounded above by the curvature param. Corresponds to `delta` in the\n terminology of [Hager and Zhang (2006)][2].\n curvature_param: Positive scalar `Tensor` of real dtype. Bounded above\n by `1.`. Corresponds to 'sigma' in the terminology of\n [Hager and Zhang (2006)][2].\n shrinkage_param: Scalar positive Tensor of real dtype. Must be less than\n `1.`. Corresponds to the parameter `gamma` in [Hager and Zhang (2006)][2].\n\n Returns:\n A namedtuple containing the following fields.\n converged: Boolean `Tensor` of shape [n]. Whether a point satisfying\n Wolfe/Approx wolfe was found.\n failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g.\n if either the objective function or the gradient are not finite at\n an evaluation point.\n iterations: Scalar int32 `Tensor`. Number of line search iterations made.\n func_evals: Scalar int32 `Tensor`. Number of function evaluations made.\n left: A namedtuple, as returned by value_and_gradients_function,\n of the left end point of the updated bracketing interval.\n right: A namedtuple, as returned by value_and_gradients_function,\n of the right end point of the updated bracketing interval."}
{"_id": "q_589", "text": "Performs bisection and updates the interval."}
{"_id": "q_590", "text": "Wrapper for tf.Print which supports lists and namedtuples for printing."}
{"_id": "q_591", "text": "Use Gauss-Hermite quadrature to form quadrature on `K - 1` simplex.\n\n A `SoftmaxNormal` random variable `Y` may be generated via\n\n ```\n Y = SoftmaxCentered(X),\n X = Normal(normal_loc, normal_scale)\n ```\n\n Note: for a given `quadrature_size`, this method is generally less accurate\n than `quadrature_scheme_softmaxnormal_quantiles`.\n\n Args:\n normal_loc: `float`-like `Tensor` with shape `[b1, ..., bB, K-1]`, B>=0.\n The location parameter of the Normal used to construct the SoftmaxNormal.\n normal_scale: `float`-like `Tensor`. Broadcastable with `normal_loc`.\n The scale parameter of the Normal used to construct the SoftmaxNormal.\n quadrature_size: Python `int` scalar representing the number of quadrature\n points.\n validate_args: Python `bool`, default `False`. When `True` distribution\n parameters are checked for validity despite possibly degrading runtime\n performance. When `False` invalid inputs may silently render incorrect\n outputs.\n name: Python `str` name prefixed to Ops created by this class.\n\n Returns:\n grid: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the\n convex combination of affine parameters for `K` components.\n `grid[..., :, n]` is the `n`-th grid point, living in the `K - 1` simplex.\n probs: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the\n associated with each grid point."}
{"_id": "q_592", "text": "Helper which checks validity of `loc` and `scale` init args."}
{"_id": "q_593", "text": "Helper which interpolates between two locs."}
{"_id": "q_594", "text": "Helper which interpolates between two scales."}
{"_id": "q_595", "text": "Multiply tensor of vectors by matrices assuming values stored are logs."}
{"_id": "q_596", "text": "Tabulate log probabilities from a batch of distributions."}
{"_id": "q_597", "text": "Compute marginal posterior distribution for each state.\n\n This function computes, for each time step, the marginal\n conditional probability that the hidden Markov model was in\n each possible state given the observations that were made\n at each time step.\n So if the hidden states are `z[0],...,z[num_steps - 1]` and\n the observations are `x[0], ..., x[num_steps - 1]`, then\n this function computes `P(z[i] | x[0], ..., x[num_steps - 1])`\n for all `i` from `0` to `num_steps - 1`.\n\n This operation is sometimes called smoothing. It uses a form\n of the forward-backward algorithm.\n\n Note: the behavior of this function is undefined if the\n `observations` argument represents impossible observations\n from the model.\n\n Args:\n observations: A tensor representing a batch of observations\n made on the hidden Markov model. The rightmost dimension of this tensor\n gives the steps in a sequence of observations from a single sample from\n the hidden Markov model. The size of this dimension should match the\n `num_steps` parameter of the hidden Markov model object. The other\n dimensions are the dimensions of the batch and these are broadcast with\n the hidden Markov model's parameters.\n name: Python `str` name prefixed to Ops created by this class.\n Default value: \"HiddenMarkovModel\".\n\n Returns:\n posterior_marginal: A `Categorical` distribution object representing the\n marginal probability of the hidden Markov model being in each state at\n each step. The rightmost dimension of the `Categorical` distributions\n batch will equal the `num_steps` parameter providing one marginal\n distribution for each step. The other dimensions are the dimensions\n corresponding to the batch of observations.\n\n Raises:\n ValueError: if rightmost dimension of `observations` does not\n have size `num_steps`."}
{"_id": "q_598", "text": "Chooses a random direction in the event space."}
{"_id": "q_599", "text": "Applies a single iteration of slice sampling update.\n\n Applies hit and run style slice sampling. Chooses a uniform random direction\n on the unit sphere in the event space. Applies the one dimensional slice\n sampling update along that direction.\n\n Args:\n target_log_prob_fn: Python callable which takes an argument like\n `*current_state_parts` and returns its (possibly unnormalized) log-density\n under the target distribution.\n current_state_parts: Python `list` of `Tensor`s representing the current\n state(s) of the Markov chain(s). The first `independent_chain_ndims` of\n the `Tensor`(s) index different chains.\n step_sizes: Python `list` of `Tensor`s. Provides a measure of the width\n of the density. Used to find the slice bounds. Must broadcast with the\n shape of `current_state_parts`.\n max_doublings: Integer number of doublings to allow while locating the slice\n boundaries.\n current_target_log_prob: `Tensor` representing the value of\n `target_log_prob_fn(*current_state_parts)`. The only reason to specify\n this argument is to reduce TF graph size.\n batch_rank: Integer. The number of axes in the state that correspond to\n independent batches.\n seed: Python integer to seed random number generators.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n proposed_state_parts: Tensor or Python list of `Tensor`s representing the\n state(s) of the Markov chain(s) at each result step. Has same shape as\n input `current_state_parts`.\n proposed_target_log_prob: `Tensor` representing the value of\n `target_log_prob_fn` at `next_state`.\n bounds_satisfied: Boolean `Tensor` of the same shape as the log density.\n True indicates whether the an interval containing the slice for that\n batch was found successfully.\n direction: `Tensor` or Python list of `Tensors`s representing the direction\n along which the slice was sampled. Has the same shape and dtype(s) as\n `current_state_parts`.\n upper_bounds: `Tensor` of batch shape and the dtype of the input state. The\n upper bounds of the slices along the sampling direction.\n lower_bounds: `Tensor` of batch shape and the dtype of the input state. The\n lower bounds of the slices along the sampling direction."}
{"_id": "q_600", "text": "Helper which computes `fn_result` if needed."}
{"_id": "q_601", "text": "Pads the shape of x to the right to be of rank final_rank.\n\n Expands the dims of `x` to the right such that its rank is equal to\n final_rank. For example, if `x` is of shape [1, 5, 7, 2] and `final_rank` is\n 7, we return padded_x, which is of shape [1, 5, 7, 2, 1, 1, 1].\n\n Args:\n x: The tensor whose shape is to be padded.\n final_rank: Scalar int32 `Tensor` or Python `int`. The desired rank of x.\n\n Returns:\n padded_x: A tensor of rank final_rank."}
{"_id": "q_602", "text": "Runs one iteration of Slice Sampler.\n\n Args:\n current_state: `Tensor` or Python `list` of `Tensor`s representing the\n current state(s) of the Markov chain(s). The first `r` dimensions\n index independent chains,\n `r = tf.rank(target_log_prob_fn(*current_state))`.\n previous_kernel_results: `collections.namedtuple` containing `Tensor`s\n representing values from previous calls to this function (or from the\n `bootstrap_results` function.)\n\n Returns:\n next_state: Tensor or Python list of `Tensor`s representing the state(s)\n of the Markov chain(s) after taking exactly one step. Has same type and\n shape as `current_state`.\n kernel_results: `collections.namedtuple` of internal calculations used to\n advance the chain.\n\n Raises:\n ValueError: if there isn't one `step_size` or a list with same length as\n `current_state`.\n TypeError: if `not target_log_prob.dtype.is_floating`."}
{"_id": "q_603", "text": "Build a loss function for variational inference in STS models.\n\n Variational inference searches for the distribution within some family of\n approximate posteriors that minimizes a divergence between the approximate\n posterior `q(z)` and true posterior `p(z|observed_time_series)`. By converting\n inference to optimization, it's generally much faster than sampling-based\n inference algorithms such as HMC. The tradeoff is that the approximating\n family rarely contains the true posterior, so it may miss important aspects of\n posterior structure (in particular, dependence between variables) and should\n not be blindly trusted. Results may vary; it's generally wise to compare to\n HMC to evaluate whether inference quality is sufficient for your task at hand.\n\n This method constructs a loss function for variational inference using the\n Kullback-Liebler divergence `KL[q(z) || p(z|observed_time_series)]`, with an\n approximating family given by independent Normal distributions transformed to\n the appropriate parameter space for each parameter. Minimizing this loss (the\n negative ELBO) maximizes a lower bound on the log model evidence `-log\n p(observed_time_series)`. This is equivalent to the 'mean-field' method\n implemented in [1]. and is a standard approach. The resulting posterior\n approximations are unimodal; they will tend to underestimate posterior\n uncertainty when the true posterior contains multiple modes (the `KL[q||p]`\n divergence encourages choosing a single mode) or dependence between variables.\n\n Args:\n model: An instance of `StructuralTimeSeries` representing a\n time-series model. This represents a joint distribution over\n time-series and their parameters with batch shape `[b1, ..., bN]`.\n observed_time_series: `float` `Tensor` of shape\n `concat([sample_shape, model.batch_shape, [num_timesteps, 1]]) where\n `sample_shape` corresponds to i.i.d. observations, and the trailing `[1]`\n dimension may (optionally) be omitted if `num_timesteps > 1`. May\n optionally be an instance of `tfp.sts.MaskedTimeSeries`, which includes\n a mask `Tensor` to specify timesteps with missing observations.\n init_batch_shape: Batch shape (Python `tuple`, `list`, or `int`) of initial\n states to optimize in parallel.\n Default value: `()`. (i.e., just run a single optimization).\n seed: Python integer to seed the random number generator.\n name: Python `str` name prefixed to ops created by this function.\n Default value: `None` (i.e., 'build_factored_variational_loss').\n\n Returns:\n variational_loss: `float` `Tensor` of shape\n `concat([init_batch_shape, model.batch_shape])`, encoding a stochastic\n estimate of an upper bound on the negative model evidence `-log p(y)`.\n Minimizing this loss performs variational inference; the gap between the\n variational bound and the true (generally unknown) model evidence\n corresponds to the divergence `KL[q||p]` between the approximate and true\n posterior.\n variational_distributions: `collections.OrderedDict` giving\n the approximate posterior for each model parameter. The keys are\n Python `str` parameter names in order, corresponding to\n `[param.name for param in model.parameters]`. The values are\n `tfd.Distribution` instances with batch shape\n `concat([init_batch_shape, model.batch_shape])`; these will typically be\n of the form `tfd.TransformedDistribution(tfd.Normal(...),\n bijector=param.bijector)`.\n\n #### Examples\n\n Assume we've built a structural time-series model:\n\n ```python\n day_of_week = tfp.sts.Seasonal(\n num_seasons=7,\n observed_time_series=observed_time_series,\n name='day_of_week')\n local_linear_trend = tfp.sts.LocalLinearTrend(\n observed_time_series=observed_time_series,\n name='local_linear_trend')\n model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],\n observed_time_series=observed_time_series)\n ```\n\n To run variational inference, we simply construct the loss and optimize\n it:\n\n ```python\n (variational_loss,\n variational_distributions) = tfp.sts.build_factored_variational_loss(\n model=model, observed_time_series=observed_time_series)\n\n train_op = tf.train.AdamOptimizer(0.1).minimize(variational_loss)\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n\n for step in range(200):\n _, loss_ = sess.run((train_op, variational_loss))\n\n if step % 20 == 0:\n print(\"step {} loss {}\".format(step, loss_))\n\n posterior_samples_ = sess.run({\n param_name: q.sample(50)\n for param_name, q in variational_distributions.items()})\n ```\n\n As a more complex example, we might try to avoid local optima by optimizing\n from multiple initializations in parallel, and selecting the result with the\n lowest loss:\n\n ```python\n (variational_loss,\n variational_distributions) = tfp.sts.build_factored_variational_loss(\n model=model, observed_time_series=observed_time_series,\n init_batch_shape=[10])\n\n train_op = tf.train.AdamOptimizer(0.1).minimize(variational_loss)\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n\n for step in range(200):\n _, loss_ = sess.run((train_op, variational_loss))\n\n if step % 20 == 0:\n print(\"step {} losses {}\".format(step, loss_))\n\n # Draw multiple samples to reduce Monte Carlo error in the optimized\n # variational bounds.\n avg_loss = np.mean(\n [sess.run(variational_loss) for _ in range(25)], axis=0)\n best_posterior_idx = np.argmin(avg_loss, axis=0).astype(np.int32)\n ```\n\n #### References\n\n [1]: Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and\n David M. Blei. Automatic Differentiation Variational Inference. In\n _Journal of Machine Learning Research_, 2017.\n https://arxiv.org/abs/1603.00788"}
{"_id": "q_604", "text": "Run an optimizer within the graph to minimize a loss function."}
{"_id": "q_605", "text": "Compute mean and variance, accounting for a mask.\n\n Args:\n time_series_tensor: float `Tensor` time series of shape\n `concat([batch_shape, [num_timesteps]])`.\n broadcast_mask: bool `Tensor` of the same shape as `time_series`.\n Returns:\n mean: float `Tensor` of shape `batch_shape`.\n variance: float `Tensor` of shape `batch_shape`."}
{"_id": "q_606", "text": "Get broadcast batch shape from distributions, statically if possible."}
{"_id": "q_607", "text": "Combine MultivariateNormals into a factored joint distribution.\n\n Given a list of multivariate normal distributions\n `dist[i] = Normal(loc[i], scale[i])`, construct the joint\n distribution given by concatenating independent samples from these\n distributions. This is multivariate normal with mean vector given by the\n concatenation of the component mean vectors, and block-diagonal covariance\n matrix in which the blocks are the component covariances.\n\n Note that for computational efficiency, multivariate normals are represented\n by a 'scale' (factored covariance) linear operator rather than the full\n covariance matrix.\n\n Args:\n distributions: Python `iterable` of MultivariateNormal distribution\n instances (e.g., `tfd.MultivariateNormalDiag`,\n `tfd.MultivariateNormalTriL`, etc.). These must be broadcastable to a\n consistent batch shape, but may have different event shapes\n (i.e., defined over spaces of different dimension).\n\n Returns:\n joint_distribution: An instance of `tfd.MultivariateNormalLinearOperator`\n representing the joint distribution constructed by concatenating\n an independent sample from each input distribution."}
{"_id": "q_608", "text": "Compute statistics of a provided time series, as heuristic initialization.\n\n Args:\n observed_time_series: `Tensor` representing a time series, or batch of time\n series, of shape either `batch_shape + [num_timesteps, 1]` or\n `batch_shape + [num_timesteps]` (allowed if `num_timesteps > 1`).\n\n Returns:\n observed_mean: `Tensor` of shape `batch_shape`, giving the empirical\n mean of each time series in the batch.\n observed_stddev: `Tensor` of shape `batch_shape`, giving the empirical\n standard deviation of each time series in the batch.\n observed_initial_centered: `Tensor` of shape `batch_shape`, giving the\n initial value of each time series in the batch after centering\n (subtracting the mean)."}
{"_id": "q_609", "text": "Ensures `observed_time_series_tensor` has a trailing dimension of size 1.\n\n The `tfd.LinearGaussianStateSpaceModel` Distribution has event shape of\n `[num_timesteps, observation_size]`, but canonical BSTS models\n are univariate, so their observation_size is always `1`. The extra trailing\n dimension gets annoying, so this method allows arguments with or without the\n extra dimension. There is no ambiguity except in the trivial special case\n where `num_timesteps = 1`; this can be avoided by specifying any unit-length\n series in the explicit `[num_timesteps, 1]` style.\n\n Most users should not call this method directly, and instead call\n `canonicalize_observed_time_series_with_mask`, which handles converting\n to `Tensor` and specifying an optional missingness mask.\n\n Args:\n observed_time_series_tensor: `Tensor` of shape\n `batch_shape + [num_timesteps, 1]` or `batch_shape + [num_timesteps]`,\n where `num_timesteps > 1`.\n\n Returns:\n expanded_time_series: `Tensor` of shape `batch_shape + [num_timesteps, 1]`."}
{"_id": "q_610", "text": "Construct a predictive normal distribution that mixes over posterior draws.\n\n Args:\n means: float `Tensor` of shape\n `[num_posterior_draws, ..., num_timesteps]`.\n variances: float `Tensor` of shape\n `[num_posterior_draws, ..., num_timesteps]`.\n\n Returns:\n mixture_dist: `tfd.MixtureSameFamily(tfd.Independent(tfd.Normal))` instance\n representing a uniform mixture over the posterior samples, with\n `batch_shape = ...` and `event_shape = [num_timesteps]`."}
{"_id": "q_611", "text": "Uses arg names to resolve distribution names."}
{"_id": "q_612", "text": "Calculate the KL divergence between two `JointDistributionSequential`s.\n\n Args:\n d0: instance of a `JointDistributionSequential` object.\n d1: instance of a `JointDistributionSequential` object.\n name: (optional) Name to use for created operations.\n Default value: `\"kl_joint_joint\"`.\n\n Returns:\n kl_joint_joint: `Tensor` The sum of KL divergences between elemental\n distributions of two joint distributions.\n\n Raises:\n ValueError: when joint distributions have a different number of elemental\n distributions.\n ValueError: when either joint distribution has a distribution with dynamic\n dependency, i.e., when either joint distribution is not a collection of\n independent distributions."}
{"_id": "q_613", "text": "Creates `dist_fn`, `dist_fn_wrapped`, `dist_fn_args`."}
{"_id": "q_614", "text": "Creates a `tuple` of `tuple`s of dependencies.\n\n This function is **experimental**. That said, we encourage its use\n and ask that you report problems to `tfprobability@tensorflow.org`.\n\n Args:\n distribution_names: `list` of `str` or `None` names corresponding to each\n of `model` elements. (`None`s are expanded into the\n appropriate `str`.)\n leaf_name: `str` used when no maker depends on a particular\n `model` element.\n\n Returns:\n graph: `tuple` of `(str tuple)` pairs representing the name of each\n distribution (maker) and the names of its dependencies.\n\n #### Example\n\n ```python\n d = tfd.JointDistributionSequential([\n tfd.Independent(tfd.Exponential(rate=[100, 120]), 1),\n lambda e: tfd.Gamma(concentration=e[..., 0], rate=e[..., 1]),\n tfd.Normal(loc=0, scale=2.),\n lambda n, g: tfd.Normal(loc=n, scale=g),\n ])\n d._resolve_graph()\n # ==> (\n # ('e', ()),\n # ('g', ('e',)),\n # ('n', ()),\n # ('x', ('n', 'g')),\n # )\n ```"}
{"_id": "q_615", "text": "Decorator function for argument bounds checking.\n\n This decorator is meant to be used with methods that require the first\n argument to be in the support of the distribution. If `validate_args` is\n `True`, the method is wrapped with an assertion that the first argument is\n greater than or equal to `loc`, since the support of the half-Cauchy\n distribution is given by `[loc, infinity)`.\n\n\n Args:\n f: method to be decorated.\n\n Returns:\n Returns a decorated method that, when `validate_args` attribute of the class\n is `True`, will assert that all elements in the first argument are within\n the support of the distribution before executing the original method."}
{"_id": "q_616", "text": "Visualizes sequences as TensorBoard summaries.\n\n Args:\n seqs: A tensor of shape [n, t, h, w, c].\n name: String name of this summary.\n num: Integer for the number of examples to visualize. Defaults to\n all examples."}
{"_id": "q_617", "text": "Visualizes the reconstruction of inputs in TensorBoard.\n\n Args:\n inputs: A tensor of the original inputs, of shape [batch, timesteps,\n h, w, c].\n reconstruct: A tensor of a reconstruction of inputs, of shape\n [batch, timesteps, h, w, c].\n num: Integer for the number of examples to visualize.\n name: String name of this summary."}
{"_id": "q_618", "text": "Summarize the parameters of a distribution.\n\n Args:\n dist: A Distribution object with mean and standard deviation\n parameters.\n name: The name of the distribution.\n name_scope: The name scope of this summary."}
{"_id": "q_619", "text": "Runs the model to generate a distribution for a single timestep.\n\n This generates a batched MultivariateNormalDiag distribution using\n the output of the recurrent model at the current timestep to\n parameterize the distribution.\n\n Args:\n inputs: The sampled value of `z` at the previous timestep, i.e.,\n `z_{t-1}`, of shape [..., dimensions].\n `z_0` should be set to the empty matrix.\n state: A tuple containing the (hidden, cell) state.\n\n Returns:\n A tuple of a MultivariateNormalDiag distribution, and the state of\n the recurrent function at the end of the current timestep. The\n distribution will have event shape [dimensions], batch shape\n [...], and sample shape [sample_shape, ..., dimensions]."}
{"_id": "q_620", "text": "Static batch shape of models represented by this component.\n\n Returns:\n batch_shape: A `tf.TensorShape` giving the broadcast batch shape of\n all model parameters. This should match the batch shape of\n derived state space models, i.e.,\n `self.make_state_space_model(...).batch_shape`. It may be partially\n defined or unknown."}
{"_id": "q_621", "text": "Instantiate this model as a Distribution over specified `num_timesteps`.\n\n Args:\n num_timesteps: Python `int` number of timesteps to model.\n param_vals: a list of `Tensor` parameter values in order corresponding to\n `self.parameters`, or a dict mapping from parameter names to values.\n initial_state_prior: an optional `Distribution` instance overriding the\n default prior on the model's initial state. This is used in forecasting\n (\"today's prior is yesterday's posterior\").\n initial_step: optional `int` specifying the initial timestep to model.\n This is relevant when the model contains time-varying components,\n e.g., holidays or seasonality.\n\n Returns:\n dist: a `LinearGaussianStateSpaceModel` Distribution object."}
{"_id": "q_622", "text": "Sample from the joint prior over model parameters and trajectories.\n\n Args:\n num_timesteps: Scalar `int` `Tensor` number of timesteps to model.\n initial_step: Optional scalar `int` `Tensor` specifying the starting\n timestep.\n Default value: 0.\n params_sample_shape: Number of possible worlds to sample iid from the\n parameter prior, or more generally, `Tensor` `int` shape to fill with\n iid samples.\n Default value: [] (i.e., draw a single sample and don't expand the\n shape).\n trajectories_sample_shape: For each sampled set of parameters, number\n of trajectories to sample, or more generally, `Tensor` `int` shape to\n fill with iid samples.\n Default value: [] (i.e., draw a single sample and don't expand the\n shape).\n seed: Python `int` random seed.\n\n Returns:\n trajectories: `float` `Tensor` of shape\n `trajectories_sample_shape + params_sample_shape + [num_timesteps, 1]`\n containing all sampled trajectories.\n param_samples: list of sampled parameter value `Tensor`s, in order\n corresponding to `self.parameters`, each of shape\n `params_sample_shape + prior.batch_shape + prior.event_shape`."}
{"_id": "q_623", "text": "Numpy implementation of `tf.argsort`."}
{"_id": "q_624", "text": "Numpy implementation of `tf.sort`."}
{"_id": "q_625", "text": "Normal distribution function.\n\n Returns the area under the Gaussian probability density function, integrated\n from minus infinity to x:\n\n ```\n 1 / x\n ndtr(x) = ---------- | exp(-0.5 t**2) dt\n sqrt(2 pi) /-inf\n\n = 0.5 (1 + erf(x / sqrt(2)))\n = 0.5 erfc(x / sqrt(2))\n ```\n\n Args:\n x: `Tensor` of type `float32`, `float64`.\n name: Python string. A name for the operation (default=\"ndtr\").\n\n Returns:\n ndtr: `Tensor` with `dtype=x.dtype`.\n\n Raises:\n TypeError: if `x` is not floating-type."}
{"_id": "q_626", "text": "Implements ndtr core logic."}
{"_id": "q_627", "text": "The inverse of the CDF of the Normal distribution function.\n\n Returns x such that the area under the pdf from minus infinity to x is equal\n to p.\n\n A piece-wise rational approximation is done for the function.\n This is a port of the implementation in netlib.\n\n Args:\n p: `Tensor` of type `float32`, `float64`.\n name: Python string. A name for the operation (default=\"ndtri\").\n\n Returns:\n x: `Tensor` with `dtype=p.dtype`.\n\n Raises:\n TypeError: if `p` is not floating-type."}
{"_id": "q_628", "text": "Log Normal distribution function.\n\n For details of the Normal distribution function see `ndtr`.\n\n This function calculates `(log o ndtr)(x)` by either calling `log(ndtr(x))` or\n using an asymptotic series. Specifically:\n - For `x > upper_segment`, use the approximation `-ndtr(-x)` based on\n `log(1-x) ~= -x, x << 1`.\n - For `lower_segment < x <= upper_segment`, use the existing `ndtr` technique\n and take a log.\n - For `x <= lower_segment`, we use the series approximation of erf to compute\n the log CDF directly.\n\n The `lower_segment` is set based on the precision of the input:\n\n ```\n lower_segment = { -20, x.dtype=float64\n { -10, x.dtype=float32\n upper_segment = { 8, x.dtype=float64\n { 5, x.dtype=float32\n ```\n\n When `x < lower_segment`, the `ndtr` asymptotic series approximation is:\n\n ```\n ndtr(x) = scale * (1 + sum) + R_N\n scale = exp(-0.5 x**2) / (-x sqrt(2 pi))\n sum = Sum{(-1)^n (2n-1)!! / (x**2)^n, n=1:N}\n R_N = O(exp(-0.5 x**2) (2N+1)!! / |x|^{2N+3})\n ```\n\n where `(2n-1)!! = (2n-1) (2n-3) (2n-5) ... (3) (1)` is a\n [double-factorial](https://en.wikipedia.org/wiki/Double_factorial).\n\n\n Args:\n x: `Tensor` of type `float32`, `float64`.\n series_order: Positive Python `integer`. Maximum depth to\n evaluate the asymptotic expansion. This is the `N` above.\n name: Python string. A name for the operation (default=\"log_ndtr\").\n\n Returns:\n log_ndtr: `Tensor` with `dtype=x.dtype`.\n\n Raises:\n TypeError: if `x.dtype` is not handled.\n TypeError: if `series_order` is not a Python `integer`.\n ValueError: if `series_order` is not in `[0, 30]`."}
{"_id": "q_629", "text": "Calculates the asymptotic series used in log_ndtr."}
{"_id": "q_630", "text": "Log Laplace distribution function.\n\n This function calculates `Log[L(x)]`, where `L(x)` is the cumulative\n distribution function of the Laplace distribution, i.e.\n\n ```L(x) := 0.5 * int_{-infty}^x e^{-|t|} dt```\n\n For numerical accuracy, `L(x)` is computed in different ways depending on `x`,\n\n ```\n x <= 0:\n Log[L(x)] = Log[0.5] + x, which is exact\n\n 0 < x:\n Log[L(x)] = Log[1 - 0.5 * e^{-x}], which is exact\n ```\n\n Args:\n x: `Tensor` of type `float32`, `float64`.\n name: Python string. A name for the operation (default=\"log_cdf_laplace\").\n\n Returns:\n `Tensor` with `dtype=x.dtype`.\n\n Raises:\n TypeError: if `x.dtype` is not handled."}
{"_id": "q_631", "text": "Joint log probability function."}
{"_id": "q_632", "text": "Runs HMC on the text-messages unnormalized posterior."}
{"_id": "q_633", "text": "Compute the marginal of this GP over function values at `index_points`.\n\n Args:\n index_points: `float` `Tensor` representing finite (batch of) vector(s) of\n points in the index set over which the GP is defined. Shape has the form\n `[b1, ..., bB, e, f1, ..., fF]` where `F` is the number of feature\n dimensions and must equal `kernel.feature_ndims` and `e` is the number\n (size) of index points in each batch. Ultimately this distribution\n corresponds to an `e`-dimensional multivariate normal. The batch shape\n must be broadcastable with `kernel.batch_shape` and any batch dims\n yielded by `mean_fn`.\n\n Returns:\n marginal: a `Normal` or `MultivariateNormalLinearOperator` distribution,\n according to whether `index_points` consists of one or many index\n points, respectively."}
{"_id": "q_634", "text": "Return `index_points` if not None, else `self._index_points`.\n\n Args:\n index_points: if given, this is what is returned; else,\n `self._index_points`\n\n Returns:\n index_points: the given arg, if not None, else the class member\n `self._index_points`.\n\n Raises:\n ValueError: if `index_points` and `self._index_points` are both `None`."}
{"_id": "q_635", "text": "Creates a stacked IAF bijector.\n\n This bijector operates on vector-valued events.\n\n Args:\n total_event_size: Number of dimensions to operate over.\n num_hidden_layers: How many hidden layers to use in each IAF.\n seed: Random seed for the initializers.\n dtype: DType for the variables.\n\n Returns:\n bijector: The created bijector."}
{"_id": "q_636", "text": "Runs one iteration of NeuTra.\n\n Args:\n current_state: `Tensor` or Python `list` of `Tensor`s representing the\n current state(s) of the Markov chain(s). The first `r` dimensions index\n independent chains, `r = tf.rank(target_log_prob_fn(*current_state))`.\n previous_kernel_results: `collections.namedtuple` containing `Tensor`s\n representing values from previous calls to this function (or from the\n `bootstrap_results` function.)\n\n Returns:\n next_state: Tensor or Python list of `Tensor`s representing the state(s)\n of the Markov chain(s) after taking exactly one step. Has same type and\n shape as `current_state`.\n kernel_results: `collections.namedtuple` of internal calculations used to\n advance the chain."}
{"_id": "q_637", "text": "Trains the bijector and creates initial `previous_kernel_results`.\n\n The supplied `state` is only used to determine the number of chains to run\n in parallel.\n\n Args:\n state: `Tensor` or Python `list` of `Tensor`s representing the initial\n state(s) of the Markov chain(s). The first `r` dimensions index\n independent chains, `r = tf.rank(target_log_prob_fn(*state))`.\n\n Returns:\n kernel_results: Instance of\n `UncalibratedHamiltonianMonteCarloKernelResults` inside\n `MetropolisHastingsResults` inside `TransformedTransitionKernelResults`\n inside `SimpleStepSizeAdaptationResults`."}
{"_id": "q_638", "text": "Convenience function analogous to tf.squared_difference."}
{"_id": "q_639", "text": "Performs distributional transform of the mixture samples.\n\n Distributional transform removes the parameters from samples of a\n multivariate distribution by applying conditional CDFs:\n (F(x_1), F(x_2 | x1_), ..., F(x_d | x_1, ..., x_d-1))\n (the indexing is over the \"flattened\" event dimensions).\n The result is a sample of product of Uniform[0, 1] distributions.\n\n We assume that the components are factorized, so the conditional CDFs become\n F(x_i | x_1, ..., x_i-1) = sum_k w_i^k F_k (x_i),\n where w_i^k is the posterior mixture weight: for i > 0\n w_i^k = w_k prob_k(x_1, ..., x_i-1) / sum_k' w_k' prob_k'(x_1, ..., x_i-1)\n and w_0^k = w_k is the mixture probability of the k-th component.\n\n Arguments:\n x: Sample of mixture distribution\n\n Returns:\n Result of the distributional transform"}
{"_id": "q_640", "text": "Utility method to decompose a joint posterior into components.\n\n Args:\n model: `tfp.sts.Sum` instance defining an additive STS model.\n posterior_means: float `Tensor` of shape `concat(\n [[num_posterior_draws], batch_shape, num_timesteps, latent_size])`\n representing the posterior mean over latents in an\n `AdditiveStateSpaceModel`.\n posterior_covs: float `Tensor` of shape `concat(\n [[num_posterior_draws], batch_shape, num_timesteps,\n latent_size, latent_size])`\n representing the posterior marginal covariances over latents in an\n `AdditiveStateSpaceModel`.\n parameter_samples: Python `list` of `Tensors` representing posterior\n samples of model parameters, with shapes `[concat([\n [num_posterior_draws], param.prior.batch_shape,\n param.prior.event_shape]) for param in model.parameters]`. This may\n optionally also be a map (Python `dict`) of parameter names to\n `Tensor` values.\n\n Returns:\n component_dists: A `collections.OrderedDict` instance mapping\n component StructuralTimeSeries instances (elements of `model.components`)\n to `tfd.Distribution` instances representing the posterior marginal\n distributions on the process modeled by each component. Each distribution\n has batch shape matching that of `posterior_means`/`posterior_covs`, and\n event shape of `[num_timesteps]`."}
{"_id": "q_641", "text": "Decompose a forecast distribution into contributions from each component.\n\n Args:\n model: An instance of `tfp.sts.Sum` representing a structural time series\n model.\n forecast_dist: A `Distribution` instance returned by `tfp.sts.forecast()`\n (specifically, must be a `tfd.MixtureSameFamily` over a\n `tfd.LinearGaussianStateSpaceModel` parameterized by posterior samples).\n parameter_samples: Python `list` of `Tensors` representing posterior samples\n of model parameters, with shapes `[concat([[num_posterior_draws],\n param.prior.batch_shape, param.prior.event_shape]) for param in\n model.parameters]`. This may optionally also be a map (Python `dict`) of\n parameter names to `Tensor` values.\n\n Returns:\n component_forecasts: A `collections.OrderedDict` instance mapping\n component StructuralTimeSeries instances (elements of `model.components`)\n to `tfd.Distribution` instances representing the marginal forecast for\n each component. Each distribution has batch and event shape matching\n `forecast_dist` (specifically, the event shape is\n `[num_steps_forecast]`).\n\n #### Examples\n\n Suppose we've built a model, fit it to data, and constructed a forecast\n distribution:\n\n ```python\n day_of_week = tfp.sts.Seasonal(\n num_seasons=7,\n observed_time_series=observed_time_series,\n name='day_of_week')\n local_linear_trend = tfp.sts.LocalLinearTrend(\n observed_time_series=observed_time_series,\n name='local_linear_trend')\n model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],\n observed_time_series=observed_time_series)\n\n num_steps_forecast = 50\n samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series)\n forecast_dist = tfp.sts.forecast(model, observed_time_series,\n parameter_samples=samples,\n num_steps_forecast=num_steps_forecast)\n ```\n\n To extract the forecast for individual components, pass the forecast\n distribution into `decompose_forecast_by_component`:\n\n ```python\n component_forecasts = decompose_forecast_by_component(\n model, forecast_dist, samples)\n\n # Component mean and stddev have shape `[num_steps_forecast]`.\n day_of_week_effect_mean = component_forecasts[day_of_week].mean()\n day_of_week_effect_stddev = component_forecasts[day_of_week].stddev()\n ```\n\n Using the component forecasts, we can visualize the uncertainty for each\n component:\n\n ```\n from matplotlib import pylab as plt\n num_components = len(component_forecasts)\n xs = np.arange(num_steps_forecast)\n fig = plt.figure(figsize=(12, 3 * num_components))\n for i, (component, component_dist) in enumerate(component_forecasts.items()):\n\n # If in graph mode, replace `.numpy()` with `.eval()` or `sess.run()`.\n component_mean = component_dist.mean().numpy()\n component_stddev = component_dist.stddev().numpy()\n\n ax = fig.add_subplot(num_components, 1, 1 + i)\n ax.plot(xs, component_mean, lw=2)\n ax.fill_between(xs,\n component_mean - 2 * component_stddev,\n component_mean + 2 * component_stddev,\n alpha=0.5)\n ax.set_title(component.name)\n ```"}
{"_id": "q_642", "text": "Get tensor that the random variable corresponds to."}
{"_id": "q_643", "text": "In a session, computes and returns the value of this random variable.\n\n This is not a graph construction method; it does not add ops to the graph.\n\n This convenience method requires a session where the graph\n containing this variable has been launched. If no session is\n passed, the default session is used.\n\n Args:\n session: tf.BaseSession.\n The `tf.Session` to use to evaluate this random variable. If\n none, the default session is used.\n feed_dict: dict.\n A dictionary that maps `tf.Tensor` objects to feed values. See\n `tf.Session.run()` for a description of the valid feed values.\n\n Returns:\n Value of the random variable.\n\n #### Examples\n\n ```python\n x = Normal(0.0, 1.0)\n with tf.Session() as sess:\n # Usage passing the session explicitly.\n print(x.eval(sess))\n # Usage with the default session. The 'with' block\n # above makes 'sess' the default session.\n print(x.eval())\n ```"}
{"_id": "q_644", "text": "Value as NumPy array, only available for TF Eager."}
{"_id": "q_645", "text": "Posterior Normal distribution with conjugate prior on the mean.\n\n This model assumes that `n` observations (with sum `s`) come from a\n Normal with unknown mean `loc` (described by the Normal `prior`)\n and known variance `scale**2`. The \"known scale posterior\" is\n the distribution of the unknown `loc`.\n\n Accepts a prior Normal distribution object, having parameters\n `loc0` and `scale0`, as well as known `scale` values of the predictive\n distribution(s) (also assumed Normal),\n and statistical estimates `s` (the sum(s) of the observations) and\n `n` (the number(s) of observations).\n\n Returns a posterior (also Normal) distribution object, with parameters\n `(loc', scale'**2)`, where:\n\n ```\n mu ~ N(mu', sigma'**2)\n sigma'**2 = 1/(1/sigma0**2 + n/sigma**2),\n mu' = (mu0/sigma0**2 + s/sigma**2) * sigma'**2.\n ```\n\n Distribution parameters from `prior`, as well as `scale`, `s`, and `n`\n will broadcast in the case of multidimensional sets of parameters.\n\n Args:\n prior: `Normal` object of type `dtype`:\n the prior distribution having parameters `(loc0, scale0)`.\n scale: tensor of type `dtype`, taking values `scale > 0`.\n The known stddev parameter(s).\n s: Tensor of type `dtype`. The sum(s) of observations.\n n: Tensor of type `int`. The number(s) of observations.\n\n Returns:\n A new Normal posterior distribution object for the unknown observation\n mean `loc`.\n\n Raises:\n TypeError: if dtype of `s` does not match `dtype`, or `prior` is not a\n Normal object."}
{"_id": "q_646", "text": "Build a scale-and-shift function using a multi-layer neural network.\n\n This will be wrapped in a make_template to ensure the variables are only\n created once. It takes the `d`-dimensional input x[0:d] and returns the `D-d`\n dimensional outputs `loc` (\"mu\") and `log_scale` (\"alpha\").\n\n The default template does not support conditioning and will raise an\n exception if `condition_kwargs` are passed to it. To use conditioning in\n real nvp bijector, implement a conditioned shift/scale template that\n handles the `condition_kwargs`.\n\n Arguments:\n hidden_layers: Python `list`-like of non-negative integer, scalars\n indicating the number of units in each hidden layer. Default: `[512, 512]`.\n shift_only: Python `bool` indicating if only the `shift` term shall be\n computed (i.e. NICE bijector). Default: `False`.\n activation: Activation function (callable). Explicitly setting to `None`\n implies a linear activation.\n name: A name for ops managed by this function. Default:\n \"real_nvp_default_template\".\n *args: `tf.layers.dense` arguments.\n **kwargs: `tf.layers.dense` keyword arguments.\n\n Returns:\n shift: `Float`-like `Tensor` of shift terms (\"mu\" in\n [Papamakarios et al. (2016)][1]).\n log_scale: `Float`-like `Tensor` of log(scale) terms (\"alpha\" in\n [Papamakarios et al. (2016)][1]).\n\n Raises:\n NotImplementedError: if rightmost dimension of `inputs` is unknown prior to\n graph execution, or if `condition_kwargs` is not empty.\n\n #### References\n\n [1]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked\n Autoregressive Flow for Density Estimation. In _Neural Information\n Processing Systems_, 2017. https://arxiv.org/abs/1705.07057"}
{"_id": "q_647", "text": "Returns a batch of points chosen uniformly from the unit hypersphere."}
{"_id": "q_648", "text": "Returns the log normalization of an LKJ distribution.\n\n Args:\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n log_z: A Tensor of the same shape and dtype as `concentration`, containing\n the corresponding log normalizers."}
{"_id": "q_649", "text": "Returns explicit dtype from `args_list` if one exists, else preferred_dtype."}
{"_id": "q_650", "text": "Factory for implementing summary statistics, eg, mean, stddev, mode."}
{"_id": "q_651", "text": "Estimate a lower bound on effective sample size for each independent chain.\n\n Roughly speaking, \"effective sample size\" (ESS) is the size of an iid sample\n with the same variance as `state`.\n\n More precisely, given a stationary sequence of possibly correlated random\n variables `X_1, X_2,...,X_N`, each identically distributed, ESS is the number\n such that\n\n ```Variance{ N**-1 * Sum{X_i} } = ESS**-1 * Variance{ X_1 }.```\n\n If the sequence is uncorrelated, `ESS = N`. In general, one should expect\n `ESS <= N`, with more highly correlated sequences having smaller `ESS`.\n\n Args:\n states: `Tensor` or list of `Tensor` objects. Dimension zero should index\n identically distributed states.\n filter_threshold: `Tensor` or list of `Tensor` objects.\n Must broadcast with `state`. The auto-correlation sequence is truncated\n after the first appearance of a term less than `filter_threshold`.\n Setting to `None` means we use no threshold filter. Since `|R_k| <= 1`,\n setting to any number less than `-1` has the same effect.\n filter_beyond_lag: `Tensor` or list of `Tensor` objects. Must be\n `int`-like and scalar valued. The auto-correlation sequence is truncated\n to this length. Setting to `None` means we do not filter based on number\n of lags.\n name: `String` name to prepend to created ops.\n\n Returns:\n ess: `Tensor` or list of `Tensor` objects. The effective sample size of\n each component of `states`. Shape will be `states.shape[1:]`.\n\n Raises:\n ValueError: If `states` and `filter_threshold` or `states` and\n `filter_beyond_lag` are both lists with different lengths.\n\n #### Examples\n\n We use ESS to estimate standard error.\n\n ```\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n target = tfd.MultivariateNormalDiag(scale_diag=[1., 2.])\n\n # Get 1000 states from one chain.\n states = tfp.mcmc.sample_chain(\n num_burnin_steps=200,\n num_results=1000,\n current_state=tf.constant([0., 0.]),\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=target.log_prob,\n step_size=0.05,\n num_leapfrog_steps=20))\n states.shape\n ==> (1000, 2)\n\n ess = effective_sample_size(states)\n ==> Shape (2,) Tensor\n\n mean, variance = tf.nn.moments(states, axes=[0])\n standard_error = tf.sqrt(variance / ess)\n ```\n\n Some math shows that, with `R_k` the auto-correlation sequence,\n `R_k := Covariance{X_1, X_{1+k}} / Variance{X_1}`, we have\n\n ```ESS(N) = N / [ 1 + 2 * ( (N - 1) / N * R_1 + ... + 1 / N * R_{N-1} ) ]```\n\n This function estimates the above by first estimating the auto-correlation.\n Since `R_k` must be estimated using only `N - k` samples, it becomes\n progressively noisier for larger `k`. For this reason, the summation over\n `R_k` should be truncated at some number `filter_beyond_lag < N`. Since many\n MCMC methods generate chains where `R_k > 0`, a reasonable criterion is to\n truncate at the first index where the estimated auto-correlation becomes\n negative.\n\n The arguments `filter_beyond_lag`, `filter_threshold` are filters intended to\n remove noisy tail terms from `R_k`. They combine in an \"OR\" manner meaning\n terms are removed if they were to be filtered under the `filter_beyond_lag` OR\n `filter_threshold` criteria."}
{"_id": "q_652", "text": "ESS computation for one single Tensor argument."}
{"_id": "q_653", "text": "potential_scale_reduction for one single state `Tensor`."}
{"_id": "q_654", "text": "Broadcast a listable secondary_arg to that of states."}
{"_id": "q_655", "text": "Use LogNormal quantiles to form quadrature on positive-reals.\n\n Args:\n loc: `float`-like (batch of) scalar `Tensor`; the location parameter of\n the LogNormal prior.\n scale: `float`-like (batch of) scalar `Tensor`; the scale parameter of\n the LogNormal prior.\n quadrature_size: Python `int` scalar representing the number of quadrature\n points.\n validate_args: Python `bool`, default `False`. When `True` distribution\n parameters are checked for validity despite possibly degrading runtime\n performance. When `False` invalid inputs may silently render incorrect\n outputs.\n name: Python `str` name prefixed to Ops created by this class.\n\n Returns:\n grid: (Batch of) length-`quadrature_size` vectors representing the\n `log_rate` parameters of a `Poisson`.\n probs: (Batch of) length-`quadrature_size` vectors representing the\n weight associated with each `grid` value."}
{"_id": "q_656", "text": "Helper to merge which handles merging one value."}
{"_id": "q_657", "text": "Converts nested `tuple`, `list`, or `dict` to nested `tuple`."}
{"_id": "q_658", "text": "Computes the doubling increments for the left end point.\n\n The doubling procedure expands an initial interval to find a superset of the\n true slice. At each doubling iteration, the interval width is doubled to\n either the left or the right hand side with equal probability.\n If, initially, the left end point is at `L(0)` and the width of the\n interval is `w(0)`, then the left end point and the width at the\n k-th iteration (denoted L(k) and w(k) respectively) are given by the following\n recursions:\n\n ```none\n w(k) = 2 * w(k-1)\n L(k) = L(k-1) - w(k-1) * X_k, X_k ~ Bernoulli(0.5)\n or, L(0) - L(k) = w(0) Sum(2^i * X(i+1), 0 <= i < k)\n ```\n\n This function computes the sequence of `L(0)-L(k)` and `w(k)` for k between 0\n and `max_doublings` independently for each chain.\n\n Args:\n batch_shape: Positive int32 `tf.Tensor`. The batch shape.\n max_doublings: Scalar positive int32 `tf.Tensor`. The maximum number of\n doublings to consider.\n step_size: A real `tf.Tensor` with shape compatible with [num_chains].\n The size of the initial interval.\n seed: (Optional) positive int. The random seed. If None, no seed is set.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n left_increments: A tensor of shape (max_doublings+1, batch_shape). The\n relative position of the left end point after the doublings.\n widths: A tensor of shape (max_doublings+1, ones_like(batch_shape)). The\n widths of the intervals at each stage of the doubling."}
{"_id": "q_659", "text": "Finds the index of the optimal set of bounds for each chain.\n\n For each chain, finds the smallest set of bounds for which both edges lie\n outside the slice. This is equivalent to the point at which a for loop\n implementation (P715 of Neal (2003)) of the algorithm would terminate.\n\n Performs the following calculation, where i is the number of doublings that\n have been performed and k is the max number of doublings:\n\n (2 * k - i) * flag + i\n\n The argmax of the above returns the earliest index where the bounds were\n outside the slice and if there is no such point, the widest bounds.\n\n Args:\n x: A tensor of shape (max_doublings+1, batch_shape). Type int32, with value\n 0 or 1. Indicates if this set of bounds is outside the slice.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n indices: A tensor of shape batch_shape. Type int32, with the index of the\n first set of bounds outside the slice and if there are none, the index of\n the widest set."}
{"_id": "q_660", "text": "Returns the bounds of the slice at each stage of doubling procedure.\n\n Precomputes the x coordinates of the left (L) and right (R) endpoints of the\n interval `I` produced in the \"doubling\" algorithm [Neal 2003][1] P713. Note\n that we simultaneously compute all possible doubling values for each chain,\n for the reason that at small-medium densities, the gains from parallel\n evaluation might cause a speed-up, but this will be benchmarked against the\n while loop implementation.\n\n Args:\n x_initial: `tf.Tensor` of any shape and any real dtype consumable by\n `target_log_prob`. The initial points.\n target_log_prob: A callable taking a `tf.Tensor` of shape and dtype as\n `x_initial` and returning a tensor of the same shape. The log density of\n the target distribution.\n log_slice_heights: `tf.Tensor` with the same shape as `x_initial` and the\n same dtype as returned by `target_log_prob`. The log of the height of the\n slice for each chain. The values must be bounded above by\n `target_log_prob(x_initial)`.\n max_doublings: Scalar positive int32 `tf.Tensor`. The maximum number of\n doublings to consider.\n step_size: `tf.Tensor` with same dtype as and shape compatible with\n `x_initial`. The size of the initial interval.\n seed: (Optional) positive int. The random seed. If None, no seed is set.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n upper_bounds: A tensor of same shape and dtype as `x_initial`. Slice upper\n bounds for each chain.\n lower_bounds: A tensor of same shape and dtype as `x_initial`. Slice lower\n bounds for each chain.\n both_ok: A tensor of shape `x_initial` and boolean dtype. Indicates if both\n the chosen upper and lower bound lie outside of the slice.\n\n #### References\n\n [1]: Radford M. Neal. Slice Sampling. The Annals of Statistics. 2003, Vol 31,\n No. 3, 705-767.\n https://projecteuclid.org/download/pdf_1/euclid.aos/1056562461"}
{"_id": "q_661", "text": "Samples from the slice by applying shrinkage for rejected points.\n\n Implements the one dimensional slice sampling algorithm of Neal (2003), with a\n doubling algorithm (Neal 2003 P715 Fig. 4), which doubles the size of the\n interval at each iteration and shrinkage (Neal 2003 P716 Fig. 5), which\n reduces the width of the slice when a selected point is rejected, by setting\n the relevant bound to that value. Randomly sampled points are checked for\n two criteria: that they lie within the slice and that they pass the\n acceptability check (Neal 2003 P717 Fig. 6), which tests that the new state\n could have generated the previous one.\n\n Args:\n x_initial: A tensor of any shape. The initial positions of the chains. This\n function assumes that all the dimensions of `x_initial` are batch\n dimensions (i.e. the event shape is `[]`).\n target_log_prob: Callable accepting a tensor like `x_initial` and returning\n a tensor containing the log density at that point of the same shape.\n log_slice_heights: Tensor of the same shape and dtype as the return value\n of `target_log_prob` when applied to `x_initial`. The log of the height of\n the chosen slice.\n step_size: A tensor of shape and dtype compatible with `x_initial`. The min\n interval size in the doubling algorithm.\n lower_bounds: Tensor of same shape and dtype as `x_initial`. Slice lower\n bounds for each chain.\n upper_bounds: Tensor of same shape and dtype as `x_initial`. Slice upper\n bounds for each chain.\n seed: (Optional) positive int. The random seed. If None, no seed is set.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n x_proposed: A tensor of the same shape and dtype as `x_initial`. The next\n proposed state of the chain."}
{"_id": "q_662", "text": "Creates a value-setting interceptor.\n\n This function creates an interceptor that sets values of Edward2 random\n variable objects. This is useful for a range of tasks, including conditioning\n on observed data, sampling from posterior predictive distributions, and as a\n building block of inference primitives such as computing log joint\n probabilities (see examples below).\n\n Args:\n **model_kwargs: dict of str to Tensor. Keys are the names of random\n variables in the model to which this interceptor is being applied. Values\n are Tensors to set their value to. Variables not included in this dict\n will not be set and will maintain their existing value semantics (by\n default, a sample from the parent-conditional distribution).\n\n Returns:\n set_values: function that sets the value of intercepted ops.\n\n #### Examples\n\n Consider for illustration a model with latent `z` and\n observed `x`, and a corresponding trainable posterior model:\n\n ```python\n num_observations = 10\n def model():\n z = ed.Normal(loc=0, scale=1., name='z') # log rate\n x = ed.Poisson(rate=tf.exp(z) * tf.ones(num_observations), name='x')\n return x\n\n def variational_model():\n return ed.Normal(loc=tf.Variable(0.),\n scale=tf.nn.softplus(tf.Variable(-4.)),\n name='z') # for simplicity, match name of the model RV.\n ```\n\n We can use a value-setting interceptor to condition the model on observed\n data. This approach is slightly more cumbersome than that of partially\n evaluating the complete log-joint function, but has the potential advantage\n that it returns a new model callable, which may be used to sample downstream\n variables, passed into additional transformations, etc.\n\n ```python\n x_observed = np.array([6, 3, 1, 8, 7, 0, 6, 4, 7, 5])\n def observed_model():\n with ed.interception(make_value_setter(x=x_observed)):\n model()\n observed_log_joint_fn = ed.make_log_joint_fn(observed_model)\n\n # After fixing 'x', the observed log joint is now only a function of 'z'.\n # This enables us to define a variational lower bound,\n # `E_q[ log p(x, z) - log q(z)]`, simply by evaluating the observed and\n # variational log joints at variational samples.\n variational_log_joint_fn = ed.make_log_joint_fn(variational_model)\n with ed.tape() as variational_sample: # Sample trace from variational model.\n variational_model()\n elbo_loss = -(observed_log_joint_fn(**variational_sample) -\n variational_log_joint_fn(**variational_sample))\n ```\n\n After performing inference by minimizing the variational loss, a value-setting\n interceptor enables simulation from the posterior predictive distribution:\n\n ```python\n with ed.tape() as posterior_samples: # tape is a map {rv.name : rv}\n variational_model()\n with ed.interception(ed.make_value_setter(**posterior_samples)):\n x = model()\n # x is a sample from p(X | Z = z') where z' ~ q(z) (the variational model)\n ```\n\n As another example, using a value setter inside of `ed.tape` enables\n computing the log joint probability, by setting all variables to\n posterior values and then accumulating the log probs of those values under\n the induced parent-conditional distributions. This is one way that we could\n have implemented `ed.make_log_joint_fn`:\n\n ```python\n def make_log_joint_fn_demo(model):\n def log_joint_fn(**model_kwargs):\n with ed.tape() as model_tape:\n with ed.interception(ed.make_value_setter(**model_kwargs)):\n model()\n\n # accumulate sum_i log p(X_i = x_i | X_{:i-1} = x_{:i-1})\n log_prob = 0.\n for rv in model_tape.values():\n log_prob += tf.reduce_sum(rv.log_prob(rv.value))\n\n return log_prob\n return log_joint_fn\n ```"}
{"_id": "q_663", "text": "Filters inputs to be compatible with function `f`'s signature.\n\n Args:\n f: Function according to whose input signature we filter arguments.\n src_kwargs: Keyword arguments to filter according to `f`.\n\n Returns:\n kwargs: Dict of key-value pairs in `src_kwargs` which exist in `f`'s\n signature."}
{"_id": "q_664", "text": "Network block for VGG."}
{"_id": "q_665", "text": "Builds a tree at a given tree depth and at a given state.\n\n The `current` state is immediately adjacent to, but outside of,\n the subtrajectory spanned by the returned `forward` and `reverse` states.\n\n Args:\n value_and_gradients_fn: Python callable which takes an argument like\n `*current_state` and returns a tuple of its (possibly unnormalized)\n log-density under the target distribution and its gradient with respect to\n each state.\n current_state: List of `Tensor`s representing the current states of the\n NUTS trajectory.\n current_target_log_prob: Scalar `Tensor` representing the value of\n `target_log_prob_fn` at the `current_state`.\n current_grads_target_log_prob: List of `Tensor`s representing gradient of\n `current_target_log_prob` with respect to `current_state`. Must have same\n shape as `current_state`.\n current_momentum: List of `Tensor`s representing the momentums of\n `current_state`. Must have same shape as `current_state`.\n direction: int that is either -1 or 1. It determines whether to perform\n leapfrog integration backwards (reverse) or forward in time respectively.\n depth: non-negative int that indicates how deep of a tree to build.\n Each call to `_build_tree` takes `2**depth` leapfrog steps.\n step_size: List of `Tensor`s representing the step sizes for the leapfrog\n integrator. Must have same shape as `current_state`.\n log_slice_sample: The log of an auxiliary slice variable. It is used\n together with `max_simulation_error` to avoid simulating trajectories with\n too much numerical error.\n max_simulation_error: Maximum simulation error to tolerate before\n terminating the trajectory. Simulation error is the\n `log_slice_sample` minus the log-joint probability at the simulated state.\n seed: Integer to seed the random number generator.\n\n Returns:\n reverse_state: List of `Tensor`s representing the \"reverse\" states of the\n NUTS trajectory. Has same shape as `current_state`.\n reverse_target_log_prob: Scalar `Tensor` representing the value of\n `target_log_prob_fn` at the `reverse_state`.\n reverse_grads_target_log_prob: List of `Tensor`s representing gradient of\n `reverse_target_log_prob` with respect to `reverse_state`. Has same shape\n as `reverse_state`.\n reverse_momentum: List of `Tensor`s representing the momentums of\n `reverse_state`. Has same shape as `reverse_state`.\n forward_state: List of `Tensor`s representing the \"forward\" states of the\n NUTS trajectory. Has same shape as `current_state`.\n forward_target_log_prob: Scalar `Tensor` representing the value of\n `target_log_prob_fn` at the `forward_state`.\n forward_grads_target_log_prob: List of `Tensor`s representing gradient of\n `forward_target_log_prob` with respect to `forward_state`. Has same shape\n as `forward_state`.\n forward_momentum: List of `Tensor`s representing the momentums of\n `forward_state`. Has same shape as `forward_state`.\n next_state: List of `Tensor`s representing the next states of the NUTS\n trajectory. Has same shape as `current_state`.\n next_target_log_prob: Scalar `Tensor` representing the value of\n `target_log_prob_fn` at `next_state`.\n next_grads_target_log_prob: List of `Tensor`s representing the gradient of\n `next_target_log_prob` with respect to `next_state`.\n num_states: Number of acceptable candidate states in the subtree. A state is\n acceptable if it is \"in the slice\", that is, if its log-joint probability\n with its momentum is greater than `log_slice_sample`.\n continue_trajectory: bool determining whether to continue the simulation\n trajectory. The trajectory is continued if no U-turns are encountered\n within the built subtree, and if the log-probability accumulation due to\n integration error does not exceed `max_simulation_error`."}
{"_id": "q_666", "text": "Wraps value and gradients function to assist with None gradients."}
{"_id": "q_667", "text": "If two given states and momentum do not exhibit a U-turn pattern."}
{"_id": "q_668", "text": "Runs one step of leapfrog integration."}
{"_id": "q_669", "text": "Log-joint probability given a state's log-probability and momentum."}
{"_id": "q_670", "text": "Returns samples from a Bernoulli distribution."}
{"_id": "q_671", "text": "Creates multivariate standard `Normal` distribution.\n\n Args:\n dtype: Type of parameter's event.\n shape: Python `list`-like representing the parameter's event shape.\n name: Python `str` name prepended to any created (or existing)\n `tf.Variable`s.\n trainable: Python `bool` indicating all created `tf.Variable`s should be\n added to the graph collection `GraphKeys.TRAINABLE_VARIABLES`.\n add_variable_fn: `tf.get_variable`-like `callable` used to create (or\n access existing) `tf.Variable`s.\n\n Returns:\n Multivariate standard `Normal` distribution."}
{"_id": "q_672", "text": "Deserializes the Keras-serialized function.\n\n (De)serializing Python functions from/to bytecode is unsafe. Therefore we\n also use the function's type as an anonymous function ('lambda') or named\n function in the Python environment ('function'). In the latter case, this lets\n us use the Python scope to obtain the function rather than reload it from\n bytecode. (Note that both cases are brittle!)\n\n Keras-deserialized functions do not perform lexical scoping. Any modules that\n the function requires must be imported within the function itself.\n\n This serialization mimics the implementation in `tf.keras.layers.Lambda`.\n\n Args:\n serial: Serialized Keras object: typically a dict, string, or bytecode.\n function_type: Python string denoting 'function' or 'lambda'.\n\n Returns:\n function: Function the serialized Keras object represents.\n\n #### Examples\n\n ```python\n serial, function_type = serialize_function(lambda x: x)\n function = deserialize_function(serial, function_type)\n assert function(2.3) == 2.3 # function is identity\n ```"}
{"_id": "q_673", "text": "Serializes function for Keras.\n\n (De)serializing Python functions from/to bytecode is unsafe. Therefore we\n return the function's type as an anonymous function ('lambda') or named\n function in the Python environment ('function'). In the latter case, this lets\n us use the Python scope to obtain the function rather than reload it from\n bytecode. (Note that both cases are brittle!)\n\n This serialization mimics the implementation in `tf.keras.layers.Lambda`.\n\n Args:\n func: Python function to serialize.\n\n Returns:\n (serial, function_type): Serialized object, which is a tuple of its\n bytecode (if function is anonymous) or name (if function is named), and its\n function type."}
{"_id": "q_674", "text": "Broadcasts `from_structure` to `to_structure`.\n\n This is useful for downstream usage of `zip` or `tf.nest.map_structure`.\n\n If `from_structure` is a singleton, it is tiled to match the structure of\n `to_structure`. Note that the elements in `from_structure` are not copied if\n this tiling occurs.\n\n Args:\n to_structure: A structure.\n from_structure: A structure.\n\n Returns:\n new_from_structure: Same structure as `to_structure`.\n\n #### Example:\n\n ```python\n a_structure = ['a', 'b', 'c']\n b_structure = broadcast_structure(a_structure, 'd')\n # -> ['d', 'd', 'd']\n c_structure = tf.nest.map_structure(\n lambda a, b: a + b, a_structure, b_structure)\n # -> ['ad', 'bd', 'cd']\n ```"}
{"_id": "q_675", "text": "Eagerly converts struct to Tensor, recursing upon failure."}
{"_id": "q_676", "text": "Returns `Tensor` attributes related to shape and Python builtins."}
{"_id": "q_677", "text": "Creates the mixture of Gaussians prior distribution.\n\n Args:\n latent_size: The dimensionality of the latent representation.\n mixture_components: Number of elements of the mixture.\n\n Returns:\n random_prior: A `tfd.Distribution` instance representing the distribution\n over encodings in the absence of any evidence."}
{"_id": "q_678", "text": "Helper utility to make a field of images."}
{"_id": "q_679", "text": "Downloads a file."}
{"_id": "q_680", "text": "Helper to validate block sizes."}
{"_id": "q_681", "text": "Constructs a trainable `tfd.Bernoulli` distribution.\n\n This function creates a Bernoulli distribution parameterized by logits.\n Using default args, this function is mathematically equivalent to:\n\n ```none\n Y = Bernoulli(logits=matmul(W, x) + b)\n\n where,\n W in R^[d, n]\n b in R^d\n ```\n\n #### Examples\n\n This function can be used as a [logistic regression](\n https://en.wikipedia.org/wiki/Logistic_regression) loss.\n\n ```python\n # This example fits a logistic regression loss.\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n # Create fictitious training data.\n dtype = np.float32\n n = 3000 # number of samples\n x_size = 4 # size of single x\n def make_training_data():\n np.random.seed(142)\n x = np.random.randn(n, x_size).astype(dtype)\n w = np.random.randn(x_size).astype(dtype)\n b = np.random.randn(1).astype(dtype)\n true_logits = np.tensordot(x, w, axes=[[-1], [-1]]) + b\n noise = np.random.logistic(size=n).astype(dtype)\n y = dtype(true_logits + noise > 0.)\n return y, x\n y, x = make_training_data()\n\n # Build TF graph for fitting Bernoulli maximum likelihood estimator.\n bernoulli = tfp.trainable_distributions.bernoulli(x)\n loss = -tf.reduce_mean(bernoulli.log_prob(y))\n train_op = tf.train.AdamOptimizer(learning_rate=2.**-5).minimize(loss)\n mse = tf.reduce_mean(tf.squared_difference(y, bernoulli.mean()))\n init_op = tf.global_variables_initializer()\n\n # Run graph 1000 times.\n num_steps = 1000\n loss_ = np.zeros(num_steps) # Style: `_` to indicate sess.run result.\n mse_ = np.zeros(num_steps)\n with tf.Session() as sess:\n sess.run(init_op)\n for it in xrange(loss_.size):\n _, loss_[it], mse_[it] = sess.run([train_op, loss, mse])\n if it % 200 == 0 or it == loss_.size - 1:\n print(\"iteration:{} loss:{} mse:{}\".format(it, loss_[it], mse_[it]))\n\n # ==> iteration:0 loss:0.635675370693 mse:0.222526371479\n # iteration:200 loss:0.440077394247 mse:0.143687799573\n # iteration:400 loss:0.440077394247 mse:0.143687844276\n # iteration:600 loss:0.440077394247 mse:0.143687844276\n # iteration:800 loss:0.440077424049 mse:0.143687844276\n # iteration:999 loss:0.440077424049 mse:0.143687844276\n ```\n\n Args:\n x: `Tensor` with floating type. Must have statically defined rank and\n statically known right-most dimension.\n layer_fn: Python `callable` which takes input `x` and `int` scalar `d` and\n returns a transformation of `x` with shape\n `tf.concat([tf.shape(x)[:-1], [1]], axis=0)`.\n Default value: `tf.layers.dense`.\n name: A `name_scope` name for operations created by this function.\n Default value: `None` (i.e., \"bernoulli\").\n\n Returns:\n bernoulli: An instance of `tfd.Bernoulli`."}
{"_id": "q_682", "text": "Constructs a trainable `tfd.Normal` distribution.\n\n This function creates a Normal distribution parameterized by loc and scale.\n Using default args, this function is mathematically equivalent to:\n\n ```none\n Y = Normal(loc=matmul(W, x) + b, scale=1)\n\n where,\n W in R^[d, n]\n b in R^d\n ```\n\n #### Examples\n\n This function can be used as a [linear regression](\n https://en.wikipedia.org/wiki/Linear_regression) loss.\n\n ```python\n # This example fits a linear regression loss.\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n # Create fictitious training data.\n dtype = np.float32\n n = 3000 # number of samples\n x_size = 4 # size of single x\n def make_training_data():\n np.random.seed(142)\n x = np.random.randn(n, x_size).astype(dtype)\n w = np.random.randn(x_size).astype(dtype)\n b = np.random.randn(1).astype(dtype)\n true_mean = np.tensordot(x, w, axes=[[-1], [-1]]) + b\n noise = np.random.randn(n).astype(dtype)\n y = true_mean + noise\n return y, x\n y, x = make_training_data()\n\n # Build TF graph for fitting Normal maximum likelihood estimator.\n normal = tfp.trainable_distributions.normal(x)\n loss = -tf.reduce_mean(normal.log_prob(y))\n train_op = tf.train.AdamOptimizer(learning_rate=2.**-5).minimize(loss)\n mse = tf.reduce_mean(tf.squared_difference(y, normal.mean()))\n init_op = tf.global_variables_initializer()\n\n # Run graph 1000 times.\n num_steps = 1000\n loss_ = np.zeros(num_steps) # Style: `_` to indicate sess.run result.\n mse_ = np.zeros(num_steps)\n with tf.Session() as sess:\n sess.run(init_op)\n for it in xrange(loss_.size):\n _, loss_[it], mse_[it] = sess.run([train_op, loss, mse])\n if it % 200 == 0 or it == loss_.size - 1:\n print(\"iteration:{} loss:{} mse:{}\".format(it, loss_[it], mse_[it]))\n\n # ==> iteration:0 loss:6.34114170074 mse:10.8444051743\n # iteration:200 loss:1.40146839619 mse:0.965059816837\n # iteration:400 loss:1.40052902699 mse:0.963181257248\n # iteration:600 loss:1.40052902699 mse:0.963181257248\n # iteration:800 loss:1.40052902699 mse:0.963181257248\n # iteration:999 loss:1.40052902699 mse:0.963181257248\n ```\n\n Args:\n x: `Tensor` with floating type. Must have statically defined rank and\n statically known right-most dimension.\n layer_fn: Python `callable` which takes input `x` and `int` scalar `d` and\n returns a transformation of `x` with shape\n `tf.concat([tf.shape(x)[:-1], [1]], axis=0)`.\n Default value: `tf.layers.dense`.\n loc_fn: Python `callable` which transforms the `loc` parameter. Takes a\n (batch of) length-`dims` vectors and returns a `Tensor` of same shape and\n `dtype`.\n Default value: `lambda x: x`.\n scale_fn: Python `callable` or `Tensor`. If a `callable` transforms the\n `scale` parameters; if `Tensor` is the `tfd.Normal` `scale` argument.\n Takes a (batch of) length-`dims` vectors and returns a `Tensor` of same\n size. (Taking a `callable` or `Tensor` is how `tf.Variable` initializers\n behave.)\n Default value: `1`.\n name: A `name_scope` name for operations created by this function.\n Default value: `None` (i.e., \"normal\").\n\n Returns:\n normal: An instance of `tfd.Normal`."}
{"_id": "q_683", "text": "Constructs a trainable `tfd.Poisson` distribution.\n\n This function creates a Poisson distribution parameterized by log rate.\n Using default args, this function is mathematically equivalent to:\n\n ```none\n Y = Poisson(log_rate=matmul(W, x) + b)\n\n where,\n W in R^[d, n]\n b in R^d\n ```\n\n #### Examples\n\n This can be used as a [Poisson regression](\n https://en.wikipedia.org/wiki/Poisson_regression) loss.\n\n ```python\n # This example fits a poisson regression loss.\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n # Create fictitious training data.\n dtype = np.float32\n n = 3000 # number of samples\n x_size = 4 # size of single x\n def make_training_data():\n np.random.seed(142)\n x = np.random.randn(n, x_size).astype(dtype)\n w = np.random.randn(x_size).astype(dtype)\n b = np.random.randn(1).astype(dtype)\n true_log_rate = np.tensordot(x, w, axes=[[-1], [-1]]) + b\n y = np.random.poisson(lam=np.exp(true_log_rate)).astype(dtype)\n return y, x\n y, x = make_training_data()\n\n # Build TF graph for fitting Poisson maximum likelihood estimator.\n poisson = tfp.trainable_distributions.poisson(x)\n loss = -tf.reduce_mean(poisson.log_prob(y))\n train_op = tf.train.AdamOptimizer(learning_rate=2.**-5).minimize(loss)\n mse = tf.reduce_mean(tf.squared_difference(y, poisson.mean()))\n init_op = tf.global_variables_initializer()\n\n # Run graph 1000 times.\n num_steps = 1000\n loss_ = np.zeros(num_steps) # Style: `_` to indicate sess.run result.\n mse_ = np.zeros(num_steps)\n with tf.Session() as sess:\n sess.run(init_op)\n for it in xrange(loss_.size):\n _, loss_[it], mse_[it] = sess.run([train_op, loss, mse])\n if it % 200 == 0 or it == loss_.size - 1:\n print(\"iteration:{} loss:{} mse:{}\".format(it, loss_[it], mse_[it]))\n\n # ==> iteration:0 loss:37.0814208984 mse:6359.41259766\n # iteration:200 loss:1.42010736465 mse:40.7654914856\n # iteration:400 loss:1.39027583599 mse:8.77660560608\n # iteration:600 loss:1.3902695179 mse:8.78443241119\n # iteration:800 loss:1.39026939869 mse:8.78443622589\n # iteration:999 loss:1.39026939869 mse:8.78444766998\n ```\n\n Args:\n x: `Tensor` with floating type. Must have statically defined rank and\n statically known right-most dimension.\n layer_fn: Python `callable` which takes input `x` and `int` scalar `d` and\n returns a transformation of `x` with shape\n `tf.concat([tf.shape(x)[:-1], [1]], axis=0)`.\n Default value: `tf.layers.dense`.\n log_rate_fn: Python `callable` which transforms the `log_rate` parameter.\n Takes a (batch of) length-`dims` vectors and returns a `Tensor` of same\n shape and `dtype`.\n Default value: `lambda x: x`.\n name: A `name_scope` name for operations created by this function.\n Default value: `None` (i.e., \"poisson\").\n\n Returns:\n poisson: An instance of `tfd.Poisson`."}
{"_id": "q_684", "text": "Compute diffusion drift at the current location `current_state`.\n\n The drift of the diffusion is computed as\n\n ```none\n 0.5 * `step_size` * volatility_parts * `grads_target_log_prob`\n + `step_size` * `grads_volatility`\n ```\n\n where `volatility_parts` = `volatility_fn(current_state)**2` and\n `grads_volatility` is a gradient of `volatility_parts` at the `current_state`.\n\n Args:\n step_size_parts: Python `list` of `Tensor`s representing the step size for\n Euler-Maruyama method. Must broadcast with the shape of\n `volatility_parts`. Larger step sizes lead to faster progress, but\n too-large step sizes make rejection exponentially more likely. When\n possible, it's often helpful to match per-variable step sizes to the\n standard deviations of the target distribution in each variable.\n volatility_parts: Python `list` of `Tensor`s representing the value of\n `volatility_fn(*state_parts)`.\n grads_volatility: Python list of `Tensor`s representing the value of the\n gradient of `volatility_parts**2` wrt the state of the chain.\n grads_target_log_prob: Python list of `Tensor`s representing\n gradient of `target_log_prob_fn(*state_parts)` wrt `state_parts`. Must\n have same shape as `volatility_parts`.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'mala_get_drift').\n\n Returns:\n drift_parts: Tensor or Python list of `Tensor`s representing the drift of\n the diffusion at the current state. Has same shape as\n input `current_state_parts`."}
{"_id": "q_685", "text": "Helper to `kernel` which computes the log acceptance-correction.\n\n Computes `log_acceptance_correction` as described in `MetropolisHastings`\n class. The proposal density is normal. More specifically,\n\n ```none\n q(proposed_state | current_state) \\sim N(current_state + current_drift,\n step_size * current_volatility**2)\n\n q(current_state | proposed_state) \\sim N(proposed_state + proposed_drift,\n step_size * proposed_volatility**2)\n ```\n\n The `log_acceptance_correction` is then\n\n ```none\n log_acceptance_correction = q(current_state | proposed_state)\n - q(proposed_state | current_state)\n ```\n\n Args:\n current_state_parts: Python `list` of `Tensor`s representing the value(s) of\n the current state of the chain.\n proposed_state_parts: Python `list` of `Tensor`s representing the value(s)\n of the proposed state of the chain. Must broadcast with the shape of\n `current_state_parts`.\n current_volatility_parts: Python `list` of `Tensor`s representing the value\n of `volatility_fn(*current_state_parts)`. Must broadcast with the\n shape of `current_state_parts`.\n proposed_volatility_parts: Python `list` of `Tensor`s representing the value\n of `volatility_fn(*proposed_state_parts)`. Must broadcast with the\n shape of `current_state_parts`.\n current_drift_parts: Python `list` of `Tensor`s representing value of the\n drift `_get_drift(*current_state_parts, ..)`. Must broadcast with the\n shape of `current_state_parts`.\n proposed_drift_parts: Python `list` of `Tensor`s representing value of the\n drift `_get_drift(*proposed_state_parts, ..)`. Must broadcast with the\n shape of `current_state_parts`.\n step_size_parts: Python `list` of `Tensor`s representing the step size for\n Euler-Maruyama method. Must broadcast with the shape of\n `current_state_parts`.\n independent_chain_ndims: Scalar `int` `Tensor` representing the number of\n leftmost `Tensor` dimensions which index independent chains.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'compute_log_acceptance_correction').\n\n Returns:\n log_acceptance_correction: `Tensor` representing the `log`\n acceptance-correction. (See docstring for mathematical definition.)"}
{"_id": "q_686", "text": "Helper which computes `volatility_fn` results and grads, if needed."}
{"_id": "q_687", "text": "Helper to broadcast `volatility_parts` to the shape of `state_parts`."}
{"_id": "q_688", "text": "Calls `fn`, appropriately reshaping its input `x` and output."}
{"_id": "q_689", "text": "Calls `fn` and appropriately reshapes its output."}
{"_id": "q_690", "text": "The binomial cumulative distribution function.\n\n Args:\n k: floating point `Tensor`.\n n: floating point `Tensor`.\n p: floating point `Tensor`.\n\n Returns:\n `sum_{j=0}^k (n choose j) p^j (1 - p)^(n - j)`."}
{"_id": "q_691", "text": "Executes `model`, creating both samples and distributions."}
{"_id": "q_692", "text": "Latent Dirichlet Allocation in terms of its generative process.\n\n The model posits a distribution over bags of words and is parameterized by\n a concentration and the topic-word probabilities. It collapses per-word\n topic assignments.\n\n Args:\n concentration: A Tensor of shape [1, num_topics], which parameterizes the\n Dirichlet prior over topics.\n topics_words: A Tensor of shape [num_topics, num_words], where each row\n (topic) denotes the probability of each word being in that topic.\n\n Returns:\n bag_of_words: A random variable capturing a sample from the model, of shape\n [1, num_words]. It represents one generated document as a bag of words."}
{"_id": "q_693", "text": "20 newsgroups as a tf.data.Dataset."}
{"_id": "q_694", "text": "Builds fake data for unit testing."}
{"_id": "q_695", "text": "Builds iterators for train and evaluation data.\n\n Each object is represented as a bag-of-words vector.\n\n Arguments:\n data_dir: Folder in which to store the data.\n batch_size: Batch size for both train and evaluation.\n\n Returns:\n train_input_fn: A function that returns an iterator over the training data.\n eval_input_fn: A function that returns an iterator over the evaluation data.\n vocabulary: A mapping of word's integer index to the corresponding string."}
{"_id": "q_696", "text": "Add control dependencies to the commitment loss to update the codebook.\n\n Args:\n vector_quantizer: An instance of the VectorQuantizer class.\n one_hot_assignments: The one-hot vectors corresponding to the matched\n codebook entry for each code in the batch.\n codes: A `float`-like `Tensor` containing the latent vectors to be compared\n to the codebook.\n commitment_loss: The commitment loss from comparing the encoder outputs to\n their neighboring codebook entries.\n decay: Decay factor for exponential moving average.\n\n Returns:\n commitment_loss: Commitment loss with control dependencies."}
{"_id": "q_697", "text": "Helper method to save a grid of images to a PNG file.\n\n Args:\n x: A numpy array of shape [n_images, height, width].\n fname: The filename to write to (including extension)."}
{"_id": "q_698", "text": "Returns a `np.dtype` based on this `dtype`."}
{"_id": "q_699", "text": "Returns whether this is a boolean data type."}
{"_id": "q_700", "text": "Returns whether this is a complex floating point type."}
{"_id": "q_701", "text": "Returns the string name for this `dtype`."}
{"_id": "q_702", "text": "Validate and return float type based on `tensors` and `dtype`.\n\n For ops such as matrix multiplication, inputs and weights must be of the\n same float type. This function validates that all `tensors` are the same type,\n validates that type is `dtype` (if supplied), and returns the type. Type must\n be a floating point type. If neither `tensors` nor `dtype` is supplied,\n the function will return `dtypes.float32`.\n\n Args:\n tensors: Tensors of input values. Can include `None` elements, which will\n be ignored.\n dtype: Expected type.\n\n Returns:\n Validated type.\n\n Raises:\n ValueError: if neither `tensors` nor `dtype` is supplied, or result is not\n float, or the common type of the inputs is not a floating point type."}
{"_id": "q_703", "text": "Creates the condition function pair for a reflection to be accepted."}
{"_id": "q_704", "text": "Creates the condition function pair for an expansion."}
{"_id": "q_705", "text": "Creates the condition function pair for an outside contraction."}
{"_id": "q_706", "text": "Returns True if the simplex has converged.\n\n If the simplex size is smaller than the `position_tolerance` or the variation\n of the function value over the vertices of the simplex is smaller than the\n `func_tolerance` return True else False.\n\n Args:\n simplex: `Tensor` of real dtype. The simplex to test for convergence. For\n more details, see the docstring for `initial_simplex` argument\n of `minimize`.\n best_vertex: `Tensor` of real dtype and rank one less than `simplex`. The\n vertex with the best (i.e. smallest) objective value.\n best_objective: Scalar `Tensor` of real dtype. The best (i.e. smallest)\n value of the objective function at a vertex.\n worst_objective: Scalar `Tensor` of same dtype as `best_objective`. The\n worst (i.e. largest) value of the objective function at a vertex.\n func_tolerance: Scalar positive `Tensor`. The tolerance for the variation\n of the objective function value over the simplex. If the variation over\n the simplex vertices is below this threshold, convergence is True.\n position_tolerance: Scalar positive `Tensor`. The algorithm stops if the\n lengths (under the supremum norm) of edges connecting to the best vertex\n are below this threshold.\n\n Returns:\n has_converged: A scalar boolean `Tensor` indicating whether the algorithm\n is deemed to have converged."}
{"_id": "q_707", "text": "Computes the initial simplex and the objective values at the simplex.\n\n Args:\n objective_function: A Python callable that accepts a point as a\n real `Tensor` and returns a `Tensor` of real dtype containing\n the value of the function at that point. The function\n to be evaluated at the simplex. If `batch_evaluate_objective` is `True`,\n the callable may be evaluated on a `Tensor` of shape `[n+1] + s `\n where `n` is the dimension of the problem and `s` is the shape of a\n single point in the domain (so `n` is the size of a `Tensor`\n representing a single point).\n In this case, the expected return value is a `Tensor` of shape `[n+1]`.\n initial_simplex: None or `Tensor` of real dtype. The initial simplex to\n start the search. If supplied, should be a `Tensor` of shape `[n+1] + s`\n where `n` is the dimension of the problem and `s` is the shape of a\n single point in the domain. Each row (i.e. the `Tensor` with a given\n value of the first index) is interpreted as a vertex of a simplex and\n hence the rows must be affinely independent. If not supplied, an axes\n aligned simplex is constructed using the `initial_vertex` and\n `step_sizes`. Exactly one of `initial_simplex` and\n `initial_vertex` must be supplied.\n initial_vertex: None or `Tensor` of real dtype and any shape that can\n be consumed by the `objective_function`. A single point in the domain that\n will be used to construct an axes aligned initial simplex.\n step_sizes: None or `Tensor` of real dtype and shape broadcasting\n compatible with `initial_vertex`. Supplies the simplex scale along each\n axes. Only used if `initial_simplex` is not supplied. See the docstring\n of `minimize` for more details.\n objective_at_initial_simplex: None or rank `1` `Tensor` of real dtype.\n The value of the objective function at the initial simplex.\n May be supplied only if `initial_simplex` is\n supplied. If not supplied, it will be computed.\n objective_at_initial_vertex: None or scalar `Tensor` of real dtype. The\n value of the objective function at the initial vertex. May be supplied\n only if the `initial_vertex` is also supplied.\n batch_evaluate_objective: Python `bool`. If True, the objective function\n will be evaluated on all the vertices of the simplex packed into a\n single tensor. If False, the objective will be mapped across each\n vertex separately.\n\n Returns:\n prepared_args: A tuple containing the following elements:\n dimension: Scalar `Tensor` of `int32` dtype. The dimension of the problem\n as inferred from the supplied arguments.\n num_vertices: Scalar `Tensor` of `int32` dtype. The number of vertices\n in the simplex.\n simplex: A `Tensor` of same dtype as `initial_simplex`\n (or `initial_vertex`). The first component of the shape of the\n `Tensor` is `num_vertices` and each element represents a vertex of\n the simplex.\n objective_at_simplex: A `Tensor` of same dtype as the dtype of the\n return value of objective_function. The shape is a vector of size\n `num_vertices`. The objective function evaluated at the simplex.\n num_evaluations: An `int32` scalar `Tensor`. The number of points on\n which the objective function was evaluated.\n\n Raises:\n ValueError: If any of the following conditions hold\n 1. If none or more than one of `initial_simplex` and `initial_vertex` are\n supplied.\n 2. If `initial_simplex` and `step_sizes` are both specified."}
{"_id": "q_708", "text": "Evaluates the objective function at the specified initial simplex."}
{"_id": "q_709", "text": "Constructs a standard axes aligned simplex."}
{"_id": "q_710", "text": "Evaluates the objective function on a batch of points.\n\n If `batch_evaluate_objective` is True, returns\n `objective function(arg_batch)` else it maps the `objective_function`\n across the `arg_batch`.\n\n Args:\n objective_function: A Python callable that accepts a single `Tensor` of\n rank 'R > 1' and any shape 's' and returns a scalar `Tensor` of real dtype\n containing the value of the function at that point. If\n `batch_evaluate_objective` is `True`, the callable may be evaluated on a\n `Tensor` of shape `[batch_size] + s ` where `batch_size` is the\n size of the batch of args. In this case, the expected return value is a\n `Tensor` of shape `[batch_size]`.\n arg_batch: A `Tensor` of real dtype. The batch of arguments at which to\n evaluate the `objective_function`. If `batch_evaluate_objective` is False,\n `arg_batch` will be unpacked along the zeroth axis and the\n `objective_function` will be applied to each element.\n batch_evaluate_objective: `bool`. Whether the `objective_function` can\n evaluate a batch of arguments at once.\n\n Returns:\n A tuple containing:\n objective_values: A `Tensor` of real dtype and shape `[batch_size]`.\n The value of the objective function evaluated at the supplied\n `arg_batch`.\n num_evaluations: An `int32` scalar `Tensor` containing the number of\n points on which the objective function was evaluated (i.e `batch_size`)."}
{"_id": "q_711", "text": "Save a PNG plot visualizing posterior uncertainty on heldout data.\n\n Args:\n input_vals: A `float`-like Numpy `array` of shape\n `[num_heldout] + IMAGE_SHAPE`, containing heldout input images.\n probs: A `float`-like Numpy array of shape `[num_monte_carlo,\n num_heldout, num_classes]` containing Monte Carlo samples of\n class probabilities for each heldout sample.\n fname: Python `str` filename to save the plot to.\n n: Python `int` number of datapoints to visualize.\n title: Python `str` title for the plot."}
{"_id": "q_712", "text": "Instantiates an initializer from a configuration dictionary."}
{"_id": "q_713", "text": "Compute the log of the exponentially weighted moving mean of the exp.\n\n If `log_value` is a draw from a stationary random variable, this function\n approximates `log(E[exp(log_value)])`, i.e., a weighted log-sum-exp. More\n precisely, a `tf.Variable`, `log_mean_exp_var`, is updated by `log_value`\n using the following identity:\n\n ```none\n log_mean_exp_var =\n = log(decay exp(log_mean_exp_var) + (1 - decay) exp(log_value))\n = log(exp(log_mean_exp_var + log(decay)) + exp(log_value + log1p(-decay)))\n = log_mean_exp_var\n + log( exp(log_mean_exp_var - log_mean_exp_var + log(decay))\n + exp(log_value - log_mean_exp_var + log1p(-decay)))\n = log_mean_exp_var\n + log_sum_exp([log(decay), log_value - log_mean_exp_var + log1p(-decay)]).\n ```\n\n In addition to numerical stability, this formulation is advantageous because\n `log_mean_exp_var` can be updated in a lock-free manner, i.e., using\n `assign_add`. (Note: the updates are not thread-safe; it's just that the\n update to the tf.Variable is presumed efficient due to being lock-free.)\n\n Args:\n log_mean_exp_var: `float`-like `Variable` representing the log of the\n exponentially weighted moving mean of the exp. Same shape as `log_value`.\n log_value: `float`-like `Tensor` representing a new (streaming) observation.\n Same shape as `log_mean_exp_var`.\n decay: A `float`-like `Tensor`. The moving mean decay. Typically close to\n `1.`, e.g., `0.999`.\n name: Optional name of the returned operation.\n\n Returns:\n log_mean_exp_var: A reference to the input 'Variable' tensor with the\n `log_value`-updated log of the exponentially weighted moving mean of exp.\n\n Raises:\n TypeError: if `log_mean_exp_var` does not have float type `dtype`.\n TypeError: if `log_mean_exp_var`, `log_value`, `decay` have different\n `base_dtype`."}
{"_id": "q_714", "text": "Ensures non-scalar input has at least one column.\n\n Example:\n If `x = [1, 2, 3]` then the output is `[[1], [2], [3]]`.\n\n If `x = [[1, 2, 3], [4, 5, 6]]` then the output is unchanged.\n\n If `x = 1` then the output is unchanged.\n\n Args:\n x: `Tensor`.\n\n Returns:\n columnar_x: `Tensor` with at least two dimensions."}
{"_id": "q_715", "text": "Generates `Tensor` consisting of `-1` or `+1`, chosen uniformly at random.\n\n For more details, see [Rademacher distribution](\n https://en.wikipedia.org/wiki/Rademacher_distribution).\n\n Args:\n shape: Vector-shaped, `int` `Tensor` representing shape of output.\n dtype: (Optional) TF `dtype` representing `dtype` of output.\n seed: (Optional) Python integer to seed the random number generator.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'random_rademacher').\n\n Returns:\n rademacher: `Tensor` with specified `shape` and `dtype` consisting of `-1`\n or `+1` chosen uniformly-at-random."}
{"_id": "q_716", "text": "Generates `Tensor` of positive reals drawn from a Rayleigh distribution.\n\n The probability density function of a Rayleigh distribution with `scale`\n parameter is given by:\n\n ```none\n f(x) = x scale**-2 exp(-x**2 0.5 scale**-2)\n ```\n\n For more details, see [Rayleigh distribution](\n https://en.wikipedia.org/wiki/Rayleigh_distribution)\n\n Args:\n shape: Vector-shaped, `int` `Tensor` representing shape of output.\n scale: (Optional) Positive `float` `Tensor` representing `Rayleigh` scale.\n Default value: `None` (i.e., `scale = 1.`).\n dtype: (Optional) TF `dtype` representing `dtype` of output.\n Default value: `tf.float32`.\n seed: (Optional) Python integer to seed the random number generator.\n Default value: `None` (i.e., no seed).\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'random_rayleigh').\n\n Returns:\n rayleigh: `Tensor` with specified `shape` and `dtype` consisting of positive\n real values drawn from a Rayleigh distribution with specified `scale`."}
{"_id": "q_717", "text": "Convenience function which chooses the condition based on the predicate."}
{"_id": "q_718", "text": "Finish computation of log_prob on one element of the inverse image."}
{"_id": "q_719", "text": "Helper which rolls left event_dims left or right event_dims right."}
{"_id": "q_720", "text": "r\"\"\"Inverse of tf.nn.batch_normalization.\n\n Args:\n x: Input `Tensor` of arbitrary dimensionality.\n mean: A mean `Tensor`.\n variance: A variance `Tensor`.\n offset: An offset `Tensor`, often denoted `beta` in equations, or\n None. If present, will be added to the normalized tensor.\n scale: A scale `Tensor`, often denoted `gamma` in equations, or\n `None`. If present, the scale is applied to the normalized tensor.\n variance_epsilon: A small `float` added to the minibatch `variance` to\n prevent dividing by zero.\n name: A name for this operation (optional).\n\n Returns:\n batch_unnormalized: The de-normalized, de-scaled, de-offset `Tensor`."}
{"_id": "q_721", "text": "Check for valid BatchNormalization layer.\n\n Args:\n layer: Instance of `tf.layers.BatchNormalization`.\n Raises:\n ValueError: If batchnorm_layer argument is not an instance of\n `tf.layers.BatchNormalization`, or if `batchnorm_layer.renorm=True` or\n if `batchnorm_layer.virtual_batch_size` is specified."}
{"_id": "q_722", "text": "Applies a single slicing step to `dist`, returning a new instance."}
{"_id": "q_723", "text": "Runs multiple Fisher scoring steps.\n\n Args:\n model_matrix: (Batch of) `float`-like, matrix-shaped `Tensor` where each row\n represents a sample's features.\n response: (Batch of) vector-shaped `Tensor` where each element represents a\n sample's observed response (to the corresponding row of features). Must\n have same `dtype` as `model_matrix`.\n model: `tfp.glm.ExponentialFamily`-like instance which implicitly\n characterizes a negative log-likelihood loss by specifying the\n distribution's `mean`, `gradient_mean`, and `variance`.\n model_coefficients_start: Optional (batch of) vector-shaped `Tensor`\n representing the initial model coefficients, one for each column in\n `model_matrix`. Must have same `dtype` as `model_matrix`.\n Default value: Zeros.\n predicted_linear_response_start: Optional `Tensor` with `shape`, `dtype`\n matching `response`; represents `offset` shifted initial linear\n predictions based on `model_coefficients_start`.\n Default value: `offset` if `model_coefficients is None`, and\n `tf.linalg.matvec(model_matrix, model_coefficients_start) + offset`\n otherwise.\n l2_regularizer: Optional scalar `Tensor` representing L2 regularization\n penalty, i.e.,\n `loss(w) = sum{-log p(y[i]|x[i],w) : i=1..n} + l2_regularizer ||w||_2^2`.\n Default value: `None` (i.e., no L2 regularization).\n dispersion: Optional (batch of) `Tensor` representing `response` dispersion,\n i.e., as in, `p(y|theta) := exp((y theta - A(theta)) / dispersion)`.\n Must broadcast with rows of `model_matrix`.\n Default value: `None` (i.e., \"no dispersion\").\n offset: Optional `Tensor` representing constant shift applied to\n `predicted_linear_response`. Must broadcast to `response`.\n Default value: `None` (i.e., `tf.zeros_like(response)`).\n convergence_criteria_fn: Python `callable` taking:\n `is_converged_previous`, `iter_`, `model_coefficients_previous`,\n `predicted_linear_response_previous`, `model_coefficients_next`,\n `predicted_linear_response_next`, `response`, `model`, `dispersion` and\n returning a `bool` `Tensor` indicating that Fisher scoring has converged.\n See `convergence_criteria_small_relative_norm_weights_change` as an\n example function.\n Default value: `None` (i.e.,\n `convergence_criteria_small_relative_norm_weights_change`).\n learning_rate: Optional (batch of) scalar `Tensor` used to dampen iterative\n progress. Typically only needed if optimization diverges, should be no\n larger than `1` and typically very close to `1`.\n Default value: `None` (i.e., `1`).\n fast_unsafe_numerics: Optional Python `bool` indicating if faster, less\n numerically accurate methods can be employed for computing the weighted\n least-squares solution.\n Default value: `True` (i.e., \"fast but possibly diminished accuracy\").\n maximum_iterations: Optional maximum number of iterations of Fisher scoring\n to run; \"and-ed\" with result of `convergence_criteria_fn`.\n Default value: `None` (i.e., `infinity`).\n name: Python `str` used as name prefix to ops created by this function.\n Default value: `\"fit\"`.\n\n Returns:\n model_coefficients: (Batch of) vector-shaped `Tensor`; represents the\n fitted model coefficients, one for each column in `model_matrix`.\n predicted_linear_response: `response`-shaped `Tensor` representing linear\n predictions based on new `model_coefficients`, i.e.,\n `tf.linalg.matvec(model_matrix, model_coefficients) + offset`.\n is_converged: `bool` `Tensor` indicating that the returned\n `model_coefficients` met the `convergence_criteria_fn` criteria within the\n `maximum_iterations` limit.\n iter_: `int32` `Tensor` indicating the number of iterations taken.\n\n #### Example\n\n ```python\n from __future__ import print_function\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n def make_dataset(n, d, link, scale=1., dtype=np.float32):\n model_coefficients = tfd.Uniform(\n low=np.array(-1, dtype),\n high=np.array(1, dtype)).sample(d, seed=42)\n radius = np.sqrt(2.)\n model_coefficients *= radius / tf.linalg.norm(model_coefficients)\n model_matrix = tfd.Normal(\n loc=np.array(0, dtype),\n scale=np.array(1, dtype)).sample([n, d], seed=43)\n scale = tf.convert_to_tensor(scale, dtype)\n linear_response = tf.tensordot(\n model_matrix, model_coefficients, axes=[[1], [0]])\n if link == 'linear':\n response = tfd.Normal(loc=linear_response, scale=scale).sample(seed=44)\n elif link == 'probit':\n response = tf.cast(\n tfd.Normal(loc=linear_response, scale=scale).sample(seed=44) > 0,\n dtype)\n elif link == 'logit':\n response = tfd.Bernoulli(logits=linear_response).sample(seed=44)\n else:\n raise ValueError('unrecognized true link: {}'.format(link))\n return model_matrix, response, model_coefficients\n\n X, Y, w_true = make_dataset(n=int(1e6), d=100, link='probit')\n\n w, linear_response, is_converged, num_iter = tfp.glm.fit(\n model_matrix=X,\n response=Y,\n model=tfp.glm.BernoulliNormalCDF())\n log_likelihood = tfp.glm.BernoulliNormalCDF().log_prob(Y, linear_response)\n\n with tf.Session() as sess:\n [w_, linear_response_, is_converged_, num_iter_, Y_, w_true_,\n log_likelihood_] = sess.run([\n w, linear_response, is_converged, num_iter, Y, w_true,\n log_likelihood])\n\n print('is_converged: ', is_converged_)\n print(' num_iter: ', num_iter_)\n print(' accuracy: ', np.mean((linear_response_ > 0.) == Y_))\n print(' deviance: ', 2. * np.mean(log_likelihood_))\n print('||w0-w1||_2 / (1+||w0||_2): ', (np.linalg.norm(w_true_ - w_, ord=2) /\n (1. + np.linalg.norm(w_true_, ord=2))))\n\n # ==>\n # is_converged: True\n # num_iter: 6\n # accuracy: 0.804382\n # deviance: -0.820746600628\n # ||w0-w1||_2 / (1+||w0||_2): 0.00619245105309\n ```"}
{"_id": "q_724", "text": "Returns Python `callable` which indicates fitting procedure has converged.\n\n Writing old, new `model_coefficients` as `w0`, `w1`, this function\n defines convergence as,\n\n ```python\n relative_euclidean_norm = (tf.norm(w0 - w1, ord=2, axis=-1) /\n (1. + tf.norm(w0, ord=2, axis=-1)))\n reduce_all(relative_euclidean_norm < tolerance)\n ```\n\n where `tf.norm(x, ord=2)` denotes the [Euclidean norm](\n https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm) of `x`.\n\n Args:\n tolerance: `float`-like `Tensor` indicating convergence, i.e., when the\n max relative Euclidean norm of the weights difference is less than\n `tolerance`.\n Default value: `1e-5`.\n norm_order: Order of the norm. Default value: `2` (i.e., \"Euclidean norm\".)\n\n Returns:\n convergence_criteria_fn: Python `callable` which returns `bool` `Tensor`\n indicating the fitting procedure has converged. (See inner function\n specification for argument signature.)"}
{"_id": "q_725", "text": "Helper to `fit` which sanitizes input args.\n\n Args:\n model_matrix: (Batch of) `float`-like, matrix-shaped `Tensor` where each row\n represents a sample's features.\n response: (Batch of) vector-shaped `Tensor` where each element represents a\n sample's observed response (to the corresponding row of features). Must\n have same `dtype` as `model_matrix`.\n model_coefficients: Optional (batch of) vector-shaped `Tensor` representing\n the model coefficients, one for each column in `model_matrix`. Must have\n same `dtype` as `model_matrix`.\n Default value: `tf.zeros(tf.shape(model_matrix)[-1], model_matrix.dtype)`.\n predicted_linear_response: Optional `Tensor` with `shape`, `dtype` matching\n `response`; represents `offset` shifted initial linear predictions based\n on current `model_coefficients`.\n Default value: `offset` if `model_coefficients is None`, and\n `tf.linalg.matvec(model_matrix, model_coefficients_start) + offset`\n otherwise.\n offset: Optional `Tensor` with `shape`, `dtype` matching `response`;\n represents constant shift applied to `predicted_linear_response`.\n Default value: `None` (i.e., `tf.zeros_like(response)`).\n name: Python `str` used as name prefix to ops created by this function.\n Default value: `\"prepare_args\"`.\n\n Returns:\n model_matrix: A `Tensor` with `shape`, `dtype` and values of the\n `model_matrix` argument.\n response: A `Tensor` with `shape`, `dtype` and values of the\n `response` argument.\n model_coefficients_start: A `Tensor` with `shape`, `dtype` and\n values of the `model_coefficients_start` argument if specified.\n A (batch of) vector-shaped `Tensors` with `dtype` matching `model_matrix`\n containing the default starting point otherwise.\n predicted_linear_response: A `Tensor` with `shape`, `dtype` and\n values of the `predicted_linear_response` argument if specified.\n A `Tensor` with `shape`, `dtype` matching `response` containing the\n default value otherwise.\n offset: A `Tensor` with `shape`, `dtype` and values of the `offset` argument\n if specified or `None` otherwise."}
{"_id": "q_726", "text": "Helper function for statically evaluating predicates in `cond`."}
{"_id": "q_727", "text": "Computes `rank` given a `Tensor`'s `shape`."}
{"_id": "q_728", "text": "Like tf.case, except attempts to statically evaluate predicates.\n\n If any predicate in `pred_fn_pairs` is a bool or has a constant value, the\n associated callable will be called or omitted depending on its value.\n Otherwise this functions like tf.case.\n\n Args:\n pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor and a\n callable which returns a list of tensors.\n default: Optional callable that returns a list of tensors.\n exclusive: True iff at most one predicate is allowed to evaluate to `True`.\n name: A name for this operation (optional).\n\n Returns:\n The tensors returned by the first pair whose predicate evaluated to True, or\n those returned by `default` if none does.\n\n Raises:\n TypeError: If `pred_fn_pairs` is not a list/dictionary.\n TypeError: If `pred_fn_pairs` is a list but does not contain 2-tuples.\n TypeError: If `fns[i]` is not callable for any i, or `default` is not\n callable."}
{"_id": "q_729", "text": "Helper function to standardize op scope."}
{"_id": "q_730", "text": "Creates a LinearOperator representing a diagonal matrix.\n\n Args:\n loc: Floating-point `Tensor`. This is used for inferring shape in the case\n where only `scale_identity_multiplier` is set.\n scale_diag: Floating-point `Tensor` representing the diagonal matrix.\n `scale_diag` has shape [N1, N2, ... k], which represents a k x k diagonal\n matrix. When `None` no diagonal term is added to the LinearOperator.\n scale_identity_multiplier: floating point rank 0 `Tensor` representing a\n scaling done to the identity matrix. When `scale_identity_multiplier =\n scale_diag = scale_tril = None` then `scale += IdentityMatrix`. Otherwise\n no scaled-identity-matrix is added to `scale`.\n shape_hint: scalar integer `Tensor` representing a hint at the dimension of\n the identity matrix when only `scale_identity_multiplier` is set.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness.\n assert_positive: Python `bool` indicating whether LinearOperator should be\n checked for being positive definite.\n name: Python `str` name given to ops managed by this object.\n dtype: TF `DType` to prefer when converting args to `Tensor`s. Else, we fall\n back to a compatible dtype across all of `loc`, `scale_diag`, and\n `scale_identity_multiplier`.\n\n Returns:\n `LinearOperator` representing a diagonal matrix.\n\n Raises:\n ValueError: If only `scale_identity_multiplier` is set and `loc` and\n `shape_hint` are both None."}
{"_id": "q_731", "text": "Infer distribution batch and event shapes from a location and scale.\n\n Location and scale family distributions determine their batch/event shape by\n broadcasting the `loc` and `scale` args. This helper does that broadcast,\n statically if possible.\n\n Batch shape broadcasts as per the normal rules.\n We allow the `loc` event shape to broadcast up to that of `scale`. We do not\n allow `scale`'s event shape to change. Therefore, the last dimension of `loc`\n must either be size `1`, or the same as `scale.range_dimension`.\n\n See `MultivariateNormalLinearOperator` for a usage example.\n\n Args:\n loc: `Tensor` (already converted to tensor) or `None`. If `None`, or\n `rank(loc)==0`, both batch and event shape are determined by `scale`.\n scale: A `LinearOperator` instance.\n name: A string name to prepend to created ops.\n\n Returns:\n batch_shape: `TensorShape` (if broadcast is done statically), or `Tensor`.\n event_shape: `TensorShape` (if broadcast is done statically), or `Tensor`.\n\n Raises:\n ValueError: If the last dimension of `loc` is determined statically to be\n different than the range of `scale`."}
{"_id": "q_732", "text": "Returns `True` if `scale` is a `LinearOperator` that is known to be diag.\n\n Args:\n scale: `LinearOperator` instance.\n\n Returns:\n Python `bool`.\n\n Raises:\n TypeError: If `scale` is not a `LinearOperator`."}
{"_id": "q_733", "text": "Convenience function that chooses one of two values based on the predicate.\n\n This utility is equivalent to a version of `tf.where` that accepts only a\n scalar predicate and computes its result statically when possible. It may also\n be used in place of `tf.cond` when both branches yield a `Tensor` of the same\n shape; the operational difference is that `tf.cond` uses control flow to\n evaluate only the branch that's needed, while `tf.where` (and thus\n this method) may evaluate both branches before the predicate's truth is known.\n This means that `tf.cond` is preferred when one of the branches is expensive\n to evaluate (like performing a large matmul), while this method is preferred\n when both branches are cheap, e.g., constants. In the latter case, we expect\n this method to be substantially faster than `tf.cond` on GPU and to give\n similar performance on CPU.\n\n Args:\n pred: Scalar `bool` `Tensor` predicate.\n true_value: `Tensor` to return if `pred` is `True`.\n false_value: `Tensor` to return if `pred` is `False`. Must have the same\n shape as `true_value`.\n name: Python `str` name given to ops managed by this object.\n\n Returns:\n result: a `Tensor` (or `Tensor`-convertible Python value) equal to\n `true_value` if `pred` evaluates to `True` and `false_value` otherwise.\n If the condition can be evaluated statically, the result returned is one\n of the input Python values, with no graph side effects."}
{"_id": "q_734", "text": "Move a single tensor dimension within its shape.\n\n This is a special case of `tf.transpose()`, which applies\n arbitrary permutations to tensor dimensions.\n\n Args:\n x: Tensor of rank `ndims`.\n source_idx: Integer index into `x.shape` (negative indexing is supported).\n dest_idx: Integer index into `x.shape` (negative indexing is supported).\n\n Returns:\n x_perm: Tensor of rank `ndims`, in which the dimension at original\n index `source_idx` has been moved to new index `dest_idx`, with\n all other dimensions retained in their original order.\n\n Example:\n\n ```python\n x = tf.placeholder(shape=[200, 30, 4, 1, 6])\n x_perm = _move_dimension(x, 1, 1) # no-op\n x_perm = _move_dimension(x, 0, 3) # result shape [30, 4, 1, 200, 6]\n x_perm = _move_dimension(x, 0, -2) # equivalent to previous\n x_perm = _move_dimension(x, 4, 2) # result shape [200, 30, 6, 4, 1]\n ```"}
{"_id": "q_735", "text": "Assert x is a non-negative tensor, and optionally of integers."}
{"_id": "q_736", "text": "Helper returning True if dtype is known to be unsigned."}
{"_id": "q_737", "text": "Helper returning True if dtype is known to be signed."}
{"_id": "q_738", "text": "Helper returning the largest integer exactly representable by dtype."}
{"_id": "q_739", "text": "Helper returning the smallest integer exactly representable by dtype."}
{"_id": "q_740", "text": "Circularly moves dims left or right.\n\n Effectively identical to:\n\n ```python\n numpy.transpose(x, numpy.roll(numpy.arange(len(x.shape)), shift))\n ```\n\n When `validate_args=False` additional graph-runtime checks are\n performed. These checks entail moving data from to GPU to CPU.\n\n Example:\n\n ```python\n x = tf.random_normal([1, 2, 3, 4]) # Tensor of shape [1, 2, 3, 4].\n rotate_transpose(x, -1).shape == [2, 3, 4, 1]\n rotate_transpose(x, -2).shape == [3, 4, 1, 2]\n rotate_transpose(x, 1).shape == [4, 1, 2, 3]\n rotate_transpose(x, 2).shape == [3, 4, 1, 2]\n rotate_transpose(x, 7).shape == rotate_transpose(x, 3).shape # [2, 3, 4, 1]\n rotate_transpose(x, -7).shape == rotate_transpose(x, -3).shape # [4, 1, 2, 3]\n ```\n\n Args:\n x: `Tensor`.\n shift: `Tensor`. Number of dimensions to transpose left (shift<0) or\n transpose right (shift>0).\n name: Python `str`. The name to give this op.\n\n Returns:\n rotated_x: Input `Tensor` with dimensions circularly rotated by shift.\n\n Raises:\n TypeError: if shift is not integer type."}
{"_id": "q_741", "text": "Picks possibly different length row `Tensor`s based on condition.\n\n Value `Tensor`s should have exactly one dimension.\n\n If `cond` is a python Boolean or `tf.constant` then either `true_vector` or\n `false_vector` is immediately returned. I.e., no graph nodes are created and\n no validation happens.\n\n Args:\n cond: `Tensor`. Must have `dtype=tf.bool` and be scalar.\n true_vector: `Tensor` of one dimension. Returned when cond is `True`.\n false_vector: `Tensor` of one dimension. Returned when cond is `False`.\n name: Python `str`. The name to give this op.\n Example: ```python pick_vector(tf.less(0, 5), tf.range(10, 12), tf.range(15,\n 18)) # [10, 11] pick_vector(tf.less(5, 0), tf.range(10, 12), tf.range(15,\n 18)) # [15, 16, 17] ```\n\n Returns:\n true_or_false_vector: `Tensor`.\n\n Raises:\n TypeError: if `cond.dtype != tf.bool`\n TypeError: if `cond` is not a constant and\n `true_vector.dtype != false_vector.dtype`"}
{"_id": "q_742", "text": "Generate a new seed, from the given seed and salt."}
{"_id": "q_743", "text": "Creates a matrix with values set above, below, and on the diagonal.\n\n Example:\n\n ```python\n tridiag(below=[1., 2., 3.],\n diag=[4., 5., 6., 7.],\n above=[8., 9., 10.])\n # ==> array([[ 4., 8., 0., 0.],\n # [ 1., 5., 9., 0.],\n # [ 0., 2., 6., 10.],\n # [ 0., 0., 3., 7.]], dtype=float32)\n ```\n\n Warning: This Op is intended for convenience, not efficiency.\n\n Args:\n below: `Tensor` of shape `[B1, ..., Bb, d-1]` corresponding to the below\n diagonal part. `None` is logically equivalent to `below = 0`.\n diag: `Tensor` of shape `[B1, ..., Bb, d]` corresponding to the diagonal\n part. `None` is logically equivalent to `diag = 0`.\n above: `Tensor` of shape `[B1, ..., Bb, d-1]` corresponding to the above\n diagonal part. `None` is logically equivalent to `above = 0`.\n name: Python `str`. The name to give this op.\n\n Returns:\n tridiag: `Tensor` with values set above, below and on the diagonal.\n\n Raises:\n ValueError: if all inputs are `None`."}
{"_id": "q_744", "text": "Validates quadrature grid, probs or computes them as necessary.\n\n Args:\n quadrature_grid_and_probs: Python pair of `float`-like `Tensor`s\n representing the sample points and the corresponding (possibly\n normalized) weight. When `None`, defaults to:\n `np.polynomial.hermite.hermgauss(deg=8)`.\n dtype: The expected `dtype` of `grid` and `probs`.\n validate_args: Python `bool`, default `False`. When `True` distribution\n parameters are checked for validity despite possibly degrading runtime\n performance. When `False` invalid inputs may silently render incorrect\n outputs.\n name: Python `str` name prefixed to Ops created by this class.\n\n Returns:\n quadrature_grid_and_probs: Python pair of `float`-like `Tensor`s\n representing the sample points and the corresponding (possibly\n normalized) weight.\n\n Raises:\n ValueError: if `quadrature_grid_and_probs is not None` and\n `len(quadrature_grid_and_probs[0]) != len(quadrature_grid_and_probs[1])`"}
{"_id": "q_745", "text": "Returns parent frame arguments.\n\n When called inside a function, returns a dictionary with the caller's function\n arguments. These are positional arguments and keyword arguments (**kwargs),\n while variable arguments (*varargs) are excluded.\n\n When called at global scope, this will return an empty dictionary, since there\n are no arguments.\n\n WARNING: If caller function argument names are overloaded before invoking\n this method, then values will reflect the overloaded value. For this reason,\n we recommend calling `parent_frame_arguments` at the beginning of the\n function."}
{"_id": "q_746", "text": "Transform a 0-D or 1-D `Tensor` to be 1-D.\n\n For user convenience, many parts of the TensorFlow Probability API accept\n inputs of rank 0 or 1 -- i.e., allowing an `event_shape` of `[5]` to be passed\n to the API as either `5` or `[5]`. This function can be used to transform\n such an argument to always be 1-D.\n\n NOTE: Python or NumPy values will be converted to `Tensor`s with standard type\n inference/conversion. In particular, an empty list or tuple will become an\n empty `Tensor` with dtype `float32`. Callers should convert values to\n `Tensor`s before calling this function if different behavior is desired\n (e.g. converting empty lists / other values to `Tensor`s with dtype `int32`).\n\n Args:\n x: A 0-D or 1-D `Tensor`.\n tensor_name: Python `str` name for `Tensor`s created by this function.\n op_name: Python `str` name for `Op`s created by this function.\n validate_args: Python `bool, default `False`. When `True`, arguments may be\n checked for validity at execution time, possibly degrading runtime\n performance. When `False`, invalid inputs may silently render incorrect\n outputs.\n Returns:\n vector: a 1-D `Tensor`."}
{"_id": "q_747", "text": "Checks that `rightmost_transposed_ndims` is valid."}
{"_id": "q_748", "text": "Checks that `perm` is valid."}
{"_id": "q_749", "text": "Returns the concatenation of the dimension in `x` and `other`.\n\n *Note:* If either `x` or `other` is completely unknown, concatenation will\n discard information about the other shape. In future, we might support\n concatenation that preserves this information for use with slicing.\n\n For more details, see `help(tf.TensorShape.concatenate)`.\n\n Args:\n x: object representing a shape; convertible to `tf.TensorShape`.\n other: object representing a shape; convertible to `tf.TensorShape`.\n\n Returns:\n new_shape: an object like `x` whose elements are the concatenation of the\n dimensions in `x` and `other`."}
{"_id": "q_750", "text": "Returns a list of dimension sizes, or `None` if `rank` is unknown.\n\n For more details, see `help(tf.TensorShape.dims)`.\n\n Args:\n x: object representing a shape; convertible to `tf.TensorShape`.\n\n Returns:\n shape_as_list: list of sizes or `None` values representing each\n dimensions size if known. A size is `tf.Dimension` if input is a\n `tf.TensorShape` and an `int` otherwise."}
{"_id": "q_751", "text": "Returns a shape combining the information in `x` and `other`.\n\n The dimensions in `x` and `other` are merged elementwise, according to the\n rules defined for `tf.Dimension.merge_with()`.\n\n For more details, see `help(tf.TensorShape.merge_with)`.\n\n Args:\n x: object representing a shape; convertible to `tf.TensorShape`.\n other: object representing a shape; convertible to `tf.TensorShape`.\n\n Returns:\n merged_shape: shape having `type(x)` containing the combined information of\n `x` and `other`.\n\n Raises:\n ValueError: If `x` and `other` are not compatible."}
{"_id": "q_752", "text": "Returns a shape based on `x` with at least the given `rank`.\n\n For more details, see `help(tf.TensorShape.with_rank_at_least)`.\n\n Args:\n x: object representing a shape; convertible to `tf.TensorShape`.\n rank: An `int` representing the minimum rank of `x` or else an assertion is\n raised.\n\n Returns:\n shape: a shape having `type(x)` but guaranteed to have at least the given\n rank (or else an assertion was raised).\n\n Raises:\n ValueError: If `x` does not represent a shape with at least the given\n `rank`."}
{"_id": "q_753", "text": "Check that source and target shape match, statically if possible."}
{"_id": "q_754", "text": "Build a callable that perform one step for backward smoothing.\n\n Args:\n get_transition_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[latent_size, latent_size]`.\n\n Returns:\n backward_pass_step: a callable that updates a BackwardPassState\n from timestep `t` to `t-1`."}
{"_id": "q_755", "text": "Build a callable that performs one step of Kalman filtering.\n\n Args:\n get_transition_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[latent_size, latent_size]`.\n get_transition_noise_for_timestep: callable taking a timestep as\n an integer `Tensor` argument, and returning a\n `MultivariateNormalLinearOperator` of event shape\n `[latent_size]`.\n get_observation_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[observation_size, observation_size]`.\n get_observation_noise_for_timestep: callable taking a timestep as\n an integer `Tensor` argument, and returning a\n `MultivariateNormalLinearOperator` of event shape\n `[observation_size]`.\n\n Returns:\n kalman_filter_step: a callable that updates a KalmanFilterState\n from timestep `t-1` to `t`."}
{"_id": "q_756", "text": "Conjugate update for a linear Gaussian model.\n\n Given a normal prior on a latent variable `z`,\n `p(z) = N(prior_mean, prior_cov) = N(u, P)`,\n for which we observe a linear Gaussian transformation `x`,\n `p(x|z) = N(H * z + c, R)`,\n the posterior is also normal:\n `p(z|x) = N(u*, P*)`.\n\n We can write this update as\n x_expected = H * u + c # pushforward prior mean\n S = R + H * P * H' # pushforward prior cov\n K = P * H' * S^{-1} # optimal Kalman gain\n u* = u + K * (x_observed - x_expected) # posterior mean\n P* = (I - K * H) * P (I - K * H)' + K * R * K' # posterior cov\n (see, e.g., https://en.wikipedia.org/wiki/Kalman_filter#Update)\n\n Args:\n prior_mean: `Tensor` with event shape `[latent_size, 1]` and\n potential batch shape `B = [b1, ..., b_n]`.\n prior_cov: `Tensor` with event shape `[latent_size, latent_size]`\n and batch shape `B` (matching `prior_mean`).\n observation_matrix: `LinearOperator` with shape\n `[observation_size, latent_size]` and batch shape broadcastable\n to `B`.\n observation_noise: potentially-batched\n `MultivariateNormalLinearOperator` instance with event shape\n `[observation_size]` and batch shape broadcastable to `B`.\n x_observed: potentially batched `Tensor` with event shape\n `[observation_size, 1]` and batch shape `B`.\n\n Returns:\n posterior_mean: `Tensor` with event shape `[latent_size, 1]` and\n batch shape `B`.\n posterior_cov: `Tensor` with event shape `[latent_size,\n latent_size]` and batch shape `B`.\n predictive_dist: the prior predictive distribution `p(x|z)`,\n as a `Distribution` instance with event\n shape `[observation_size]` and batch shape `B`. This will\n typically be `tfd.MultivariateNormalTriL`, but when\n `observation_size=1` we return a `tfd.Independent(tfd.Normal)`\n instance as an optimization."}
{"_id": "q_757", "text": "Propagate a filtered distribution through a transition model."}
{"_id": "q_758", "text": "Build a callable that performs one step of Kalman mean recursion.\n\n Args:\n get_transition_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[latent_size, latent_size]`.\n get_transition_noise_for_timestep: callable taking a timestep as\n an integer `Tensor` argument, and returning a\n `MultivariateNormalLinearOperator` of event shape\n `[latent_size]`.\n get_observation_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[observation_size, observation_size]`.\n get_observation_noise_for_timestep: callable taking a timestep as\n an integer `Tensor` argument, and returning a\n `MultivariateNormalLinearOperator` of event shape\n `[observation_size]`.\n\n Returns:\n kalman_mean_step: a callable that computes latent state and\n observation means at time `t`, given latent mean at time `t-1`."}
{"_id": "q_759", "text": "Propagate a mean through linear Gaussian transformation."}
{"_id": "q_760", "text": "Propagate covariance through linear Gaussian transformation."}
{"_id": "q_761", "text": "Run the backward pass in Kalman smoother.\n\n The backward smoothing is using Rauch, Tung and Striebel smoother as\n as discussed in section 18.3.2 of Kevin P. Murphy, 2012, Machine Learning:\n A Probabilistic Perspective, The MIT Press. The inputs are returned by\n `forward_filter` function.\n\n Args:\n filtered_means: Means of the per-timestep filtered marginal\n distributions p(z_t | x_{:t}), as a Tensor of shape\n `sample_shape(x) + batch_shape + [num_timesteps, latent_size]`.\n filtered_covs: Covariances of the per-timestep filtered marginal\n distributions p(z_t | x_{:t}), as a Tensor of shape\n `batch_shape + [num_timesteps, latent_size, latent_size]`.\n predicted_means: Means of the per-timestep predictive\n distributions over latent states, p(z_{t+1} | x_{:t}), as a\n Tensor of shape `sample_shape(x) + batch_shape +\n [num_timesteps, latent_size]`.\n predicted_covs: Covariances of the per-timestep predictive\n distributions over latent states, p(z_{t+1} | x_{:t}), as a\n Tensor of shape `batch_shape + [num_timesteps, latent_size,\n latent_size]`.\n\n Returns:\n posterior_means: Means of the smoothed marginal distributions\n p(z_t | x_{1:T}), as a Tensor of shape\n `sample_shape(x) + batch_shape + [num_timesteps, latent_size]`,\n which is of the same shape as filtered_means.\n posterior_covs: Covariances of the smoothed marginal distributions\n p(z_t | x_{1:T}), as a Tensor of shape\n `batch_shape + [num_timesteps, latent_size, latent_size]`.\n which is of the same shape as filtered_covs."}
{"_id": "q_762", "text": "Draw a joint sample from the prior over latents and observations."}
{"_id": "q_763", "text": "Run a Kalman smoother to return posterior mean and cov.\n\n Note that the returned values `smoothed_means` depend on the observed\n time series `x`, while the `smoothed_covs` are independent\n of the observed series; i.e., they depend only on the model itself.\n This means that the mean values have shape `concat([sample_shape(x),\n batch_shape, [num_timesteps, {latent/observation}_size]])`,\n while the covariances have shape `concat[(batch_shape, [num_timesteps,\n {latent/observation}_size, {latent/observation}_size]])`, which\n does not depend on the sample shape.\n\n This function only performs smoothing. If the user wants the\n intermediate values, which are returned by filtering pass `forward_filter`,\n one could get it by:\n ```\n (log_likelihoods,\n filtered_means, filtered_covs,\n predicted_means, predicted_covs,\n observation_means, observation_covs) = model.forward_filter(x)\n smoothed_means, smoothed_covs = model.backward_smoothing_pass(x)\n ```\n where `x` is an observation sequence.\n\n Args:\n x: a float-type `Tensor` with rightmost dimensions\n `[num_timesteps, observation_size]` matching\n `self.event_shape`. Additional dimensions must match or be\n broadcastable to `self.batch_shape`; any further dimensions\n are interpreted as a sample shape.\n mask: optional bool-type `Tensor` with rightmost dimension\n `[num_timesteps]`; `True` values specify that the value of `x`\n at that timestep is masked, i.e., not conditioned on. 
Additional\n dimensions must match or be broadcastable to `self.batch_shape`; any\n further dimensions must match or be broadcastable to the sample\n shape of `x`.\n Default value: `None`.\n\n Returns:\n smoothed_means: Means of the per-timestep smoothed\n distributions over latent states, p(x_{t} | x_{:T}), as a\n Tensor of shape `sample_shape(x) + batch_shape +\n [num_timesteps, observation_size]`.\n smoothed_covs: Covariances of the per-timestep smoothed\n distributions over latent states, p(x_{t} | x_{:T}), as a\n Tensor of shape `sample_shape(mask) + batch_shape + [num_timesteps,\n observation_size, observation_size]`. Note that the covariances depend\n only on the model and the mask, not on the data, so this may have fewer\n dimensions than `filtered_means`."}
{"_id": "q_764", "text": "Compute prior means for all variables via dynamic programming.\n\n Returns:\n latent_means: Prior means of latent states `z_t`, as a `Tensor`\n of shape `batch_shape + [num_timesteps, latent_size]`\n observation_means: Prior covariance matrices of observations\n `x_t`, as a `Tensor` of shape `batch_shape + [num_timesteps,\n observation_size]`"}
{"_id": "q_765", "text": "Compute prior covariances for all variables via dynamic programming.\n\n Returns:\n latent_covs: Prior covariance matrices of latent states `z_t`, as\n a `Tensor` of shape `batch_shape + [num_timesteps,\n latent_size, latent_size]`\n observation_covs: Prior covariance matrices of observations\n `x_t`, as a `Tensor` of shape `batch_shape + [num_timesteps,\n observation_size, observation_size]`"}
{"_id": "q_766", "text": "Create a deep copy of fn.\n\n Args:\n fn: a callable\n\n Returns:\n A `FunctionType`: a deep copy of fn.\n\n Raises:\n TypeError: if `fn` is not a callable."}
{"_id": "q_767", "text": "Removes `dict` keys which have have `self` as value."}
{"_id": "q_768", "text": "Recursively replace `dict`s with `_PrettyDict`."}
{"_id": "q_769", "text": "Helper to `maybe_call_fn_and_grads`."}
{"_id": "q_770", "text": "Calls `fn` and computes the gradient of the result wrt `args_list`."}
{"_id": "q_771", "text": "Construct a for loop, preferring a python loop if `n` is staticaly known.\n\n Given `loop_num_iter` and `body_fn`, return an op corresponding to executing\n `body_fn` `loop_num_iter` times, feeding previous outputs of `body_fn` into\n the next iteration.\n\n If `loop_num_iter` is statically known, the op is constructed via python for\n loop, and otherwise a `tf.while_loop` is used.\n\n Args:\n loop_num_iter: `Integer` `Tensor` representing the number of loop\n iterations.\n body_fn: Callable to be executed `loop_num_iter` times.\n initial_loop_vars: Listlike object of `Tensors` to be passed in to\n `body_fn`'s first execution.\n parallel_iterations: The number of iterations allowed to run in parallel.\n It must be a positive integer. See `tf.while_loop` for more details.\n Default value: `10`.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., \"smart_for_loop\").\n Returns:\n result: `Tensor` representing applying `body_fn` iteratively `n` times."}
{"_id": "q_772", "text": "A simplified version of `tf.scan` that has configurable tracing.\n\n This function repeatedly calls `loop_fn(state, elem)`, where `state` is the\n `initial_state` during the first iteration, and the return value of `loop_fn`\n for every iteration thereafter. `elem` is a slice of `elements` along the\n first dimension, accessed in order. Additionally, it calls `trace_fn` on the\n return value of `loop_fn`. The `Tensor`s in return values of `trace_fn` are\n stacked and returned from this function, such that the first dimension of\n those `Tensor`s matches the size of `elems`.\n\n Args:\n loop_fn: A callable that takes in a `Tensor` or a nested collection of\n `Tensor`s with the same structure as `initial_state`, a slice of `elems`\n and returns the same structure as `initial_state`.\n initial_state: A `Tensor` or a nested collection of `Tensor`s passed to\n `loop_fn` in the first iteration.\n elems: A `Tensor` that is split along the first dimension and each element\n of which is passed to `loop_fn`.\n trace_fn: A callable that takes in the return value of `loop_fn` and returns\n a `Tensor` or a nested collection of `Tensor`s.\n parallel_iterations: Passed to the internal `tf.while_loop`.\n name: Name scope used in this function. Default: 'trace_scan'.\n\n Returns:\n final_state: The final return value of `loop_fn`.\n trace: The same structure as the return value of `trace_fn`, but with each\n `Tensor` being a stack of the corresponding `Tensors` in the return value\n of `trace_fn` for each slice of `elems`."}
{"_id": "q_773", "text": "Wraps a setter so it applies to the inner-most results in `kernel_results`.\n\n The wrapped setter unwraps `kernel_results` and applies `setter` to the first\n results without an `inner_results` attribute.\n\n Args:\n setter: A callable that takes the kernel results as well as some `*args` and\n `**kwargs` and returns a modified copy of those kernel results.\n\n Returns:\n new_setter: A wrapped `setter`."}
{"_id": "q_774", "text": "Enables the `store_parameters_in_results` parameter in a chain of kernels.\n\n This is a temporary utility for use during the transition period of the\n parameter storage methods.\n\n Args:\n kernel: A TransitionKernel.\n\n Returns:\n kernel: The same kernel, but recreated with `store_parameters_in_results`\n recursively set to `True` in its parameters and its inner kernels (as\n appropriate)."}
{"_id": "q_775", "text": "Check that a shape Tensor is int-type and otherwise sane."}
{"_id": "q_776", "text": "Performs the line search step of the BFGS search procedure.\n\n Uses hager_zhang line search procedure to compute a suitable step size\n to advance the current `state.position` along the given `search_direction`.\n Also, if the line search is successful, updates the `state.position` by\n taking the corresponding step.\n\n Args:\n state: A namedtuple instance holding values for the current state of the\n search procedure. The state must include the fields: `position`,\n `objective_value`, `objective_gradient`, `num_iterations`,\n `num_objective_evaluations`, `converged` and `failed`.\n value_and_gradients_function: A Python callable that accepts a point as a\n real `Tensor` of shape `[..., n]` and returns a tuple of two tensors of\n the same dtype: the objective function value, a real `Tensor` of shape\n `[...]`, and its derivative, another real `Tensor` of shape `[..., n]`.\n search_direction: A real `Tensor` of shape `[..., n]`. The direction along\n which to perform line search.\n grad_tolerance: Scalar `Tensor` of real dtype. Specifies the gradient\n tolerance for the procedure.\n f_relative_tolerance: Scalar `Tensor` of real dtype. Specifies the\n tolerance for the relative change in the objective value.\n x_tolerance: Scalar `Tensor` of real dtype. Specifies the tolerance for the\n change in the position.\n stopping_condition: A Python function that takes as input two Boolean\n tensors of shape `[...]`, and returns a Boolean scalar tensor. 
The input\n tensors are `converged` and `failed`, indicating the current status of\n each respective batch member; the return value states whether the\n algorithm should stop.\n Returns:\n A copy of the input state with the following fields updated:\n converged: a Boolean `Tensor` of shape `[...]` indicating whether the\n convergence criteria has been met.\n failed: a Boolean `Tensor` of shape `[...]` indicating whether the line\n search procedure failed to converge, or if either the updated gradient\n or objective function are no longer finite.\n num_iterations: Increased by 1.\n num_objective_evaluations: Increased by the number of times that the\n objective function got evaluated.\n position, objective_value, objective_gradient: If line search succeeded,\n updated by computing the new position and evaluating the objective\n function at that position."}
{"_id": "q_777", "text": "Updates the state advancing its position by a given position_delta."}
{"_id": "q_778", "text": "Checks if the algorithm satisfies the convergence criteria."}
{"_id": "q_779", "text": "Broadcast a value to match the batching dimensions of a target.\n\n If necessary the value is converted into a tensor. Both value and target\n should be of the same dtype.\n\n Args:\n value: A value to broadcast.\n target: A `Tensor` of shape [b1, ..., bn, d].\n\n Returns:\n A `Tensor` of shape [b1, ..., bn] and same dtype as the target."}
{"_id": "q_780", "text": "field_name from kernel_results or kernel_results.accepted_results."}
{"_id": "q_781", "text": "Makes a function which applies a list of Bijectors' `inverse`s."}
{"_id": "q_782", "text": "Like tf.where but works on namedtuples."}
{"_id": "q_783", "text": "Performs the secant square procedure of Hager Zhang.\n\n Given an interval that brackets a root, this procedure performs an update of\n both end points using two intermediate points generated using the secant\n interpolation. For details see the steps S1-S4 in [Hager and Zhang (2006)][2].\n\n The interval [a, b] must satisfy the opposite slope conditions described in\n the documentation for `update`.\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns an object that can be converted to a namedtuple.\n The namedtuple should have fields 'f' and 'df' that correspond to scalar\n tensors of real dtype containing the value of the function and its\n derivative at that point. The other namedtuple fields, if present,\n should be tensors or sequences (possibly nested) of tensors.\n In usual optimization application, this function would be generated by\n projecting the multivariate objective function along some specific\n direction. The direction is determined by some other procedure but should\n be a descent direction (i.e. the derivative of the projected univariate\n function must be negative at 0.).\n Alternatively, the function may represent the batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and the fields 'f' and 'df' in the returned\n namedtuple should each be a tensor of shape [n], with the corresponding\n function values and derivatives at the input points.\n val_0: A namedtuple, as returned by value_and_gradients_function evaluated\n at `0.`.\n search_interval: A namedtuple describing the current search interval,\n must include the fields:\n - converged: Boolean `Tensor` of shape [n], indicating batch members\n where search has already converged. 
Interval for these batch members\n won't be modified.\n - failed: Boolean `Tensor` of shape [n], indicating batch members\n where search has already failed. Interval for these batch members\n wont be modified.\n - iterations: Scalar int32 `Tensor`. Number of line search iterations\n so far.\n - func_evals: Scalar int32 `Tensor`. Number of function evaluations\n so far.\n - left: A namedtuple, as returned by value_and_gradients_function,\n of the left end point of the current search interval.\n - right: A namedtuple, as returned by value_and_gradients_function,\n of the right end point of the current search interval.\n f_lim: Scalar `Tensor` of real dtype. The function value threshold for\n the approximate Wolfe conditions to be checked.\n sufficient_decrease_param: Positive scalar `Tensor` of real dtype.\n Bounded above by the curvature param. Corresponds to 'delta' in the\n terminology of [Hager and Zhang (2006)][2].\n curvature_param: Positive scalar `Tensor` of real dtype. Bounded above\n by `1.`. Corresponds to 'sigma' in the terminology of\n [Hager and Zhang (2006)][2].\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'secant2' is used.\n\n Returns:\n A namedtuple containing the following fields.\n active: A boolean `Tensor` of shape [n]. Used internally by the procedure\n to indicate batch members on which there is work left to do.\n converged: A boolean `Tensor` of shape [n]. Indicates whether a point\n satisfying the Wolfe conditions has been found. If this is True, the\n interval will be degenerate (i.e. `left` and `right` below will be\n identical).\n failed: A boolean `Tensor` of shape [n]. Indicates if invalid function or\n gradient values were encountered (i.e. infinity or NaNs).\n num_evals: A scalar int32 `Tensor`. 
The total number of function\n evaluations made.\n left: Return value of value_and_gradients_function at the updated left\n end point of the interval.\n right: Return value of value_and_gradients_function at the updated right\n end point of the interval."}
{"_id": "q_784", "text": "Helper function for secant-square step."}
{"_id": "q_785", "text": "Brackets the minimum given an initial starting point.\n\n Applies the Hager Zhang bracketing algorithm to find an interval containing\n a region with points satisfying Wolfe conditions. Uses the supplied initial\n step size 'c', the right end point of the provided search interval, to find\n such an interval. The only condition on 'c' is that it should be positive.\n For more details see steps B0-B3 in [Hager and Zhang (2006)][2].\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns a namedtuple containing the value filed `f` of the\n function and its derivative value field `df` at that point.\n Alternatively, the function may representthe batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and return a tuple of two tensors of shape [n], the\n function values and the corresponding derivatives at the input points.\n search_interval: A namedtuple describing the current search interval,\n must include the fields:\n - converged: Boolean `Tensor` of shape [n], indicating batch members\n where search has already converged. Interval for these batch members\n wont be modified.\n - failed: Boolean `Tensor` of shape [n], indicating batch members\n where search has already failed. Interval for these batch members\n wont be modified.\n - iterations: Scalar int32 `Tensor`. Number of line search iterations\n so far.\n - func_evals: Scalar int32 `Tensor`. Number of function evaluations\n so far.\n - left: A namedtuple, as returned by value_and_gradients_function\n evaluated at 0, the left end point of the current interval.\n - right: A namedtuple, as returned by value_and_gradients_function,\n of the right end point of the current interval (labelled 'c' above).\n f_lim: real `Tensor` of shape [n]. 
The function value threshold for\n the approximate Wolfe conditions to be checked for each batch member.\n max_iterations: Int32 scalar `Tensor`. The maximum number of iterations\n permitted. The limit applies equally to all batch members.\n expansion_param: Scalar positive `Tensor` of real dtype. Must be greater\n than `1.`. Used to expand the initial interval in case it does not bracket\n a minimum.\n\n Returns:\n A namedtuple with the following fields.\n iteration: An int32 scalar `Tensor`. The number of iterations performed.\n Bounded above by `max_iterations` parameter.\n stopped: A boolean `Tensor` of shape [n]. True for those batch members\n where the algorithm terminated before reaching `max_iterations`.\n failed: A boolean `Tensor` of shape [n]. True for those batch members\n where an error was encountered during bracketing.\n num_evals: An int32 scalar `Tensor`. The number of times the objective\n function was evaluated.\n left: Return value of value_and_gradients_function at the updated left\n end point of the interval found.\n right: Return value of value_and_gradients_function at the updated right\n end point of the interval found."}
{"_id": "q_786", "text": "Bisects an interval and updates to satisfy opposite slope conditions.\n\n Corresponds to the step U3 in [Hager and Zhang (2006)][2].\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns a namedtuple containing the value filed `f` of the\n function and its derivative value field `df` at that point.\n Alternatively, the function may representthe batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and return a tuple of two tensors of shape [n], the\n function values and the corresponding derivatives at the input points.\n initial_left: Return value of value_and_gradients_function at the left end\n point of the current bracketing interval.\n initial_right: Return value of value_and_gradients_function at the right end\n point of the current bracketing interval.\n f_lim: real `Tensor` of shape [n]. The function value threshold for\n the approximate Wolfe conditions to be checked for each batch member.\n\n Returns:\n A namedtuple containing the following fields:\n iteration: An int32 scalar `Tensor`. The number of iterations performed.\n Bounded above by `max_iterations` parameter.\n stopped: A boolean scalar `Tensor`. True if the bisect algorithm\n terminated.\n failed: A scalar boolean tensor. Indicates whether the objective function\n failed to produce a finite value.\n num_evals: A scalar int32 tensor. The number of value and gradients\n function evaluations.\n left: Return value of value_and_gradients_function at the left end\n point of the bracketing interval found.\n right: Return value of value_and_gradients_function at the right end\n point of the bracketing interval found."}
{"_id": "q_787", "text": "Actual implementation of bisect given initial_args in a _BracketResult."}
{"_id": "q_788", "text": "Checks if the supplied values are finite.\n\n Args:\n val_1: A namedtuple instance with the function value and derivative,\n as returned e.g. by value_and_gradients_function evaluations.\n val_2: (Optional) A namedtuple instance with the function value and\n derivative, as returned e.g. by value_and_gradients_function evaluations.\n\n Returns:\n is_finite: Scalar boolean `Tensor` indicating whether the function value\n and the derivative in `val_1` (and optionally in `val_2`) are all finite."}
{"_id": "q_789", "text": "Checks whether the Wolfe or approx Wolfe conditions are satisfied.\n\n The Wolfe conditions are a set of stopping criteria for an inexact line search\n algorithm. Let f(a) be the function value along the search direction and\n df(a) the derivative along the search direction evaluated at a distance 'a'.\n Here 'a' is the distance along the search direction. The Wolfe conditions are:\n\n ```None\n f(a) <= f(0) + delta * a * df(0) (Armijo/Sufficient decrease condition)\n df(a) >= sigma * df(0) (Weak curvature condition)\n ```\n `delta` and `sigma` are two user supplied parameters satisfying:\n `0 < delta < sigma <= 1.`. In the following, delta is called\n `sufficient_decrease_param` and sigma is called `curvature_param`.\n\n On a finite precision machine, the Wolfe conditions are difficult to satisfy\n when one is close to the minimum. Hence, Hager-Zhang propose replacing\n the sufficient decrease condition with the following condition on the\n derivative in the vicinity of a minimum.\n\n ```None\n df(a) <= (2 * delta - 1) * df(0) (Approx Wolfe sufficient decrease)\n ```\n This condition is only used if one is near the minimum. This is tested using\n\n ```None\n f(a) <= f(0) + epsilon * |f(0)|\n ```\n The following function checks both the Wolfe and approx Wolfe conditions.\n Here, `epsilon` is a small positive constant. In the following, the argument\n `f_lim` corresponds to the product: epsilon * |f(0)|.\n\n Args:\n val_0: A namedtuple, as returned by value_and_gradients_function\n evaluated at 0.\n val_c: A namedtuple, as returned by value_and_gradients_function\n evaluated at the point to be tested.\n f_lim: Scalar `Tensor` of real dtype. The function value threshold for\n the approximate Wolfe conditions to be checked.\n sufficient_decrease_param: Positive scalar `Tensor` of real dtype.\n Bounded above by the curvature param. Corresponds to 'delta' in the\n terminology of [Hager and Zhang (2006)][2].\n curvature_param: Positive scalar `Tensor` of real dtype. Bounded above\n by `1.`. Corresponds to 'sigma' in the terminology of\n [Hager and Zhang (2005)][1].\n\n Returns:\n is_satisfied: A scalar boolean `Tensor` which is True if either the\n Wolfe or approximate Wolfe conditions are satisfied."}
{"_id": "q_790", "text": "Returns the secant interpolation for the minimum.\n\n The secant method is a technique for finding roots of nonlinear functions.\n When finding the minimum, one applies the secant method to the derivative\n of the function.\n For an arbitrary function and a bounding interval, the secant approximation\n can produce the next point which is outside the bounding interval. However,\n with the assumption of the opposite slope condition on the interval [a,b] the\n new point c is always bracketed by [a,b]. Note that by assumption,\n f'(a) < 0 and f'(b) > 0.\n Hence c is a weighted average of a and b and thus always in [a, b].\n\n Args:\n val_a: A namedtuple with the left end point, function value and derivative,\n of the current interval (i.e. a).\n val_b: A namedtuple with the right end point, function value and derivative,\n of the current interval (i.e. b).\n\n Returns:\n approx_minimum: A scalar real `Tensor`. An approximation to the point\n at which the derivative vanishes."}
{"_id": "q_791", "text": "Create a function implementing a step-size update policy.\n\n The simple policy increases or decreases the `step_size_var` based on the\n average of `exp(minimum(0., log_accept_ratio))`. It is based on\n [Section 4.2 of Andrieu and Thoms (2008)](\n https://people.eecs.berkeley.edu/~jordan/sail/readings/andrieu-thoms.pdf).\n\n The `num_adaptation_steps` argument is set independently of any burnin\n for the overall chain. In general, adaptation prevents the chain from\n reaching a stationary distribution, so obtaining consistent samples requires\n `num_adaptation_steps` be set to a value [somewhat smaller](\n http://andrewgelman.com/2017/12/15/burn-vs-warm-iterative-simulation-algorithms/#comment-627745)\n than the number of burnin steps. However, it may sometimes be helpful to set\n `num_adaptation_steps` to a larger value during development in order to\n inspect the behavior of the chain during adaptation.\n\n Args:\n num_adaptation_steps: Scalar `int` `Tensor` number of initial steps\n during which to adjust the step size. This may be greater than, less\n than, or equal to the number of burnin steps. If `None`, the step size\n is adapted on every step (note this breaks stationarity of the chain!).\n target_rate: Scalar `Tensor` representing desired `accept_ratio`.\n Default value: `0.75` (i.e., [center of asymptotically optimal\n rate](https://arxiv.org/abs/1411.6669)).\n decrement_multiplier: `Tensor` representing amount to downscale current\n `step_size`.\n Default value: `0.01`.\n increment_multiplier: `Tensor` representing amount to upscale current\n `step_size`.\n Default value: `0.01`.\n step_counter: Scalar `int` `Variable` specifying the current step. The step\n size is adapted iff `step_counter < num_adaptation_steps`.\n Default value: if `None`, an internal variable\n `step_size_adaptation_step_counter` is created and initialized to `-1`.\n\n Returns:\n step_size_simple_update_fn: Callable that takes args\n `step_size_var, kernel_results` and returns updated step size(s)."}
{"_id": "q_792", "text": "Creates initial `previous_kernel_results` using a supplied `state`."}
{"_id": "q_793", "text": "Constructs a ResNet18 model.\n\n Args:\n input_shape: A `tuple` indicating the Tensor shape.\n num_classes: `int` representing the number of class labels.\n kernel_posterior_scale_mean: Python `int` number for the kernel\n posterior's scale (log variance) mean. The smaller the mean, the closer\n the initialization is to a deterministic network.\n kernel_posterior_scale_stddev: Python `float` number for the initial kernel\n posterior's scale stddev.\n ```\n q(W|x) ~ N(mu, var),\n log_var ~ N(kernel_posterior_scale_mean, kernel_posterior_scale_stddev)\n ```\n kernel_posterior_scale_constraint: Python `float` number for the log value\n to constrain the log variance throughout training.\n i.e. log_var <= log(kernel_posterior_scale_constraint).\n\n Returns:\n tf.keras.Model."}
{"_id": "q_794", "text": "Create the encoder function.\n\n Args:\n activation: Activation function to use.\n num_topics: The number of topics.\n layer_sizes: The number of hidden units per layer in the encoder.\n\n Returns:\n encoder: A `callable` mapping a bag-of-words `Tensor` to a\n `tfd.Distribution` instance over topics."}
{"_id": "q_795", "text": "Create the decoder function.\n\n Args:\n num_topics: The number of topics.\n num_words: The number of words.\n\n Returns:\n decoder: A `callable` mapping a `Tensor` of encodings to a\n `tfd.Distribution` instance over words."}
{"_id": "q_796", "text": "Implements Markov chain Monte Carlo via repeated `TransitionKernel` steps.\n\n This function samples from a Markov chain starting at `current_state`, whose\n stationary distribution is governed by the supplied `TransitionKernel`\n instance (`kernel`).\n\n This function can sample from multiple chains, in parallel. (Whether or not\n there are multiple chains is dictated by the `kernel`.)\n\n The `current_state` can be represented as a single `Tensor` or a `list` of\n `Tensors` which collectively represent the current state.\n\n Since MCMC states are correlated, it is sometimes desirable to produce\n additional intermediate states, and then discard them, ending up with a set of\n states with decreased autocorrelation. See [Owen (2017)][1]. Such \"thinning\"\n is made possible by setting `num_steps_between_results > 0`. The chain then\n takes `num_steps_between_results` extra steps between the steps that make it\n into the results. The extra steps are never materialized (in calls to\n `sess.run`), and thus do not increase memory requirements.\n\n Warning: when setting a `seed` in the `kernel`, ensure that `sample_chain`'s\n `parallel_iterations=1`, otherwise results will not be reproducible.\n\n In addition to returning the chain state, this function supports tracing of\n auxiliary variables used by the kernel. The traced values are selected by\n specifying `trace_fn`. By default, all kernel results are traced but in the\n future the default will be changed to no results being traced, so plan\n accordingly. See below for some examples of this feature.\n\n Args:\n num_results: Integer number of Markov chain draws.\n current_state: `Tensor` or Python `list` of `Tensor`s representing the\n current state(s) of the Markov chain(s).\n previous_kernel_results: A `Tensor` or a nested collection of `Tensor`s\n representing internal calculations made within the previous call to this\n function (or as returned by `bootstrap_results`).\n kernel: An instance of `tfp.mcmc.TransitionKernel` which implements one step\n of the Markov chain.\n num_burnin_steps: Integer number of chain steps to take before starting to\n collect results.\n Default value: 0 (i.e., no burn-in).\n num_steps_between_results: Integer number of chain steps between collecting\n a result. Only one out of every `num_steps_between_results + 1` steps is\n included in the returned results. The number of returned chain states is\n still equal to `num_results`. Default value: 0 (i.e., no thinning).\n trace_fn: A callable that takes in the current chain state and the previous\n kernel results and returns a `Tensor` or a nested collection of `Tensor`s\n that is then traced along with the chain state.\n return_final_kernel_results: If `True`, then the final kernel results are\n returned alongside the chain state and the trace specified by the\n `trace_fn`.\n parallel_iterations: The number of iterations allowed to run in parallel. It\n must be a positive integer. See `tf.while_loop` for more details.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., \"mcmc_sample_chain\").\n\n Returns:\n checkpointable_states_and_trace: if `return_final_kernel_results` is\n `True`. The return value is an instance of\n `CheckpointableStatesAndTrace`.\n all_states: if `return_final_kernel_results` is `False` and `trace_fn` is\n `None`. The return value is a `Tensor` or Python list of `Tensor`s\n representing the state(s) of the Markov chain(s) at each result step. Has\n same shape as input `current_state` but with a prepended\n `num_results`-size dimension.\n states_and_trace: if `return_final_kernel_results` is `False` and\n `trace_fn` is not `None`. The return value is an instance of\n `StatesAndTrace`.\n\n #### Examples\n\n ##### Sample from a diagonal-variance Gaussian.\n\n I.e.,\n\n ```none\n for i=1..n:\n x[i] ~ MultivariateNormal(loc=0, scale=diag(true_stddev)) # likelihood\n ```\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n dims = 10\n true_stddev = np.sqrt(np.linspace(1., 3., dims))\n likelihood = tfd.MultivariateNormalDiag(loc=0., scale_diag=true_stddev)\n\n states = tfp.mcmc.sample_chain(\n num_results=1000,\n num_burnin_steps=500,\n current_state=tf.zeros(dims),\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=likelihood.log_prob,\n step_size=0.5,\n num_leapfrog_steps=2),\n trace_fn=None)\n\n sample_mean = tf.reduce_mean(states, axis=0)\n # ==> approx all zeros\n\n sample_stddev = tf.sqrt(tf.reduce_mean(\n tf.squared_difference(states, sample_mean),\n axis=0))\n # ==> approx equal true_stddev\n ```\n\n ##### Sampling from factor-analysis posteriors with known factors.\n\n I.e.,\n\n ```none\n # prior\n w ~ MultivariateNormal(loc=0, scale=eye(d))\n for i=1..n:\n # likelihood\n x[i] ~ Normal(loc=w^T F[i], scale=1)\n ```\n\n where `F` denotes factors.\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n # Specify model.\n def make_prior(dims):\n return tfd.MultivariateNormalDiag(\n loc=tf.zeros(dims))\n\n def make_likelihood(weights, factors):\n return tfd.MultivariateNormalDiag(\n loc=tf.matmul(weights, factors, adjoint_b=True))\n\n def joint_log_prob(num_weights, factors, x, w):\n return (make_prior(num_weights).log_prob(w) +\n make_likelihood(w, factors).log_prob(x))\n\n def unnormalized_log_posterior(w):\n # Posterior is proportional to: `p(W, X=x | factors)`.\n return joint_log_prob(num_weights, factors, x, w)\n\n # Setup data.\n num_weights = 10 # == d\n num_factors = 40 # == n\n num_chains = 100\n\n weights = make_prior(num_weights).sample(1)\n factors = tf.random_normal([num_factors, num_weights])\n x = make_likelihood(weights, factors).sample()\n\n # Sample from Hamiltonian Monte Carlo Markov Chain.\n\n # Get `num_results` samples from `num_chains` independent chains.\n chains_states, kernels_results = tfp.mcmc.sample_chain(\n num_results=1000,\n num_burnin_steps=500,\n current_state=tf.zeros([num_chains, num_weights], name='init_weights'),\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=unnormalized_log_posterior,\n step_size=0.1,\n num_leapfrog_steps=2))\n\n # Compute sample stats.\n sample_mean = tf.reduce_mean(chains_states, axis=[0, 1])\n # ==> approx equal to weights\n\n sample_var = tf.reduce_mean(\n tf.squared_difference(chains_states, sample_mean),\n axis=[0, 1])\n # ==> less than 1\n ```\n\n ##### Custom tracing functions.\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n likelihood = tfd.Normal(loc=0., scale=1.)\n\n def sample_chain(trace_fn):\n return tfp.mcmc.sample_chain(\n num_results=1000,\n num_burnin_steps=500,\n current_state=0.,\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=likelihood.log_prob,\n step_size=0.5,\n num_leapfrog_steps=2),\n trace_fn=trace_fn)\n\n def trace_log_accept_ratio(states, previous_kernel_results):\n return previous_kernel_results.log_accept_ratio\n\n def trace_everything(states, previous_kernel_results):\n return previous_kernel_results\n\n _, log_accept_ratio = sample_chain(trace_fn=trace_log_accept_ratio)\n _, kernel_results = sample_chain(trace_fn=trace_everything)\n\n acceptance_prob = tf.exp(tf.minimum(log_accept_ratio, 0.))\n # Equivalent to, but more efficient than:\n acceptance_prob = tf.exp(tf.minimum(kernel_results.log_accept_ratio, 0.))\n ```\n\n #### References\n\n [1]: Art B. Owen. Statistically efficient thinning of a Markov chain sampler.\n _Technical Report_, 2017.\n http://statweb.stanford.edu/~owen/reports/bestthinning.pdf"}
{"_id": "q_797", "text": "A multi-layered topic model over a documents-by-terms matrix."}
{"_id": "q_798", "text": "Learnable Gamma via concentration and scale parameterization."}
{"_id": "q_799", "text": "Get the KL function registered for classes a and b."}
{"_id": "q_800", "text": "Returns an image tensor."}
{"_id": "q_801", "text": "Creates a character sprite from a set of attribute sprites."}
{"_id": "q_802", "text": "Creates a random sequence."}
{"_id": "q_803", "text": "Creates a tf.data pipeline for the sprites dataset.\n\n Args:\n characters: A list of (skin, hair, top, pants) tuples containing\n relative paths to the sprite png image for each attribute.\n actions: A list of Actions.\n directions: A list of Directions.\n channels: Number of image channels to yield.\n length: Desired length of the sequences.\n shuffle: Whether or not to shuffle the characters and the sequences'\n start frames.\n fake_data: Boolean for whether or not to yield synthetic data.\n\n Returns:\n A tf.data.Dataset yielding (seq, skin label index, hair label index,\n top label index, pants label index, action label index, skin label\n name, hair label name, top label name, pants label name, action\n label name) tuples."}
{"_id": "q_804", "text": "Checks that `distributions` satisfies all assumptions."}
{"_id": "q_805", "text": "Counts the number of occurrences of each value in an integer array `arr`.\n\n Works like `tf.math.bincount`, but provides an `axis` kwarg that specifies\n dimensions to reduce over. With\n `~axis = [i for i in range(arr.ndim) if i not in axis]`,\n this function returns a `Tensor` of shape `[K] + arr.shape[~axis]`.\n\n If `minlength` and `maxlength` are not given, `K = tf.reduce_max(arr) + 1`\n if `arr` is non-empty, and 0 otherwise.\n If `weights` are non-None, then index `i` of the output stores the sum of the\n value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n Args:\n arr: An `int32` `Tensor` of non-negative values.\n weights: If non-None, must be the same shape as arr. For each value in\n `arr`, the bin will be incremented by the corresponding weight instead of\n 1.\n minlength: If given, ensures the output has length at least `minlength`,\n padding with zeros at the end if necessary.\n maxlength: If given, skips values in `arr` that are equal or greater than\n `maxlength`, ensuring that the output has length at most `maxlength`.\n axis: A `0-D` or `1-D` `int32` `Tensor` (with static values) designating\n dimensions in `arr` to reduce over.\n `Default value:` `None`, meaning reduce over all dimensions.\n dtype: If `weights` is None, determines the type of the output bins.\n name: A name scope for the associated operations (optional).\n\n Returns:\n A vector with the same dtype as `weights` or the given `dtype`. The bin\n values."}
{"_id": "q_806", "text": "Bin values into discrete intervals.\n\n Given `edges = [c0, ..., cK]`, defining intervals\n `I0 = [c0, c1)`, `I1 = [c1, c2)`, ..., `I_{K-1} = [c_{K-1}, cK]`,\n this function returns `bins`, such that:\n `edges[bins[i]] <= x[i] < edges[bins[i] + 1]`.\n\n Args:\n x: Numeric `N-D` `Tensor` with `N > 0`.\n edges: `Tensor` of same `dtype` as `x`. The first dimension indexes edges\n of intervals. Must either be `1-D` or have\n `x.shape[1:] == edges.shape[1:]`. If `rank(edges) > 1`, `edges[k]`\n designates a shape `edges.shape[1:]` `Tensor` of bin edges for the\n corresponding dimensions of `x`.\n extend_lower_interval: Python `bool`. If `True`, extend the lowest\n interval `I0` to `(-inf, c1]`.\n extend_upper_interval: Python `bool`. If `True`, extend the upper\n interval `I_{K-1}` to `[c_{K-1}, +inf)`.\n dtype: The output type (`int32` or `int64`). `Default value:` `x.dtype`.\n This affects the output values when `x` is below/above the intervals,\n which will be `-1/K+1` for `int` types and `NaN` for `float`s.\n At indices where `x` is `NaN`, the output values will be `0` for `int`\n types and `NaN` for floats.\n name: A Python string name to prepend to created ops. Default: 'find_bins'\n\n Returns:\n bins: `Tensor` with same `shape` as `x` and `dtype`.\n Has whole number values. `bins[i] = k` means the `x[i]` falls into the\n `kth` bin, i.e., `edges[bins[i]] <= x[i] < edges[bins[i] + 1]`.\n\n Raises:\n ValueError: If `edges.shape[0]` is determined to be less than 2.\n\n #### Examples\n\n Cut a `1-D` array\n\n ```python\n x = [0., 5., 6., 10., 20.]\n edges = [0., 5., 10.]\n tfp.stats.find_bins(x, edges)\n ==> [0., 0., 1., 1., np.nan]\n ```\n\n Cut `x` into its deciles\n\n ```python\n x = tf.random_uniform(shape=(100, 200))\n decile_edges = tfp.stats.quantiles(x, num_quantiles=10)\n bins = tfp.stats.find_bins(x, edges=decile_edges)\n bins.shape\n ==> (100, 200)\n tf.reduce_mean(bins == 0.)\n ==> approximately 0.1\n tf.reduce_mean(bins == 1.)\n ==> approximately 0.1\n ```"}
{"_id": "q_807", "text": "Count how often `x` falls in intervals defined by `edges`.\n\n Given `edges = [c0, ..., cK]`, defining intervals\n `I0 = [c0, c1)`, `I1 = [c1, c2)`, ..., `I_{K-1} = [c_{K-1}, cK]`,\n this function counts how often `x` falls into each interval.\n\n Values of `x` outside of the intervals cause errors. Consider using\n `extend_lower_interval`, `extend_upper_interval` to deal with this.\n\n Args:\n x: Numeric `N-D` `Tensor` with `N > 0`. If `axis` is not\n `None`, must have a statically known number of dimensions. The\n `axis` kwarg determines which dimensions index iid samples.\n Other dimensions of `x` index \"events\" for which we will compute different\n histograms.\n edges: `Tensor` of same `dtype` as `x`. The first dimension indexes edges\n of intervals. Must either be `1-D` or have `edges.shape[1:]` the same\n as the dimensions of `x` excluding `axis`.\n If `rank(edges) > 1`, `edges[k]` designates a shape `edges.shape[1:]`\n `Tensor` of interval edges for the corresponding dimensions of `x`.\n axis: Optional `0-D` or `1-D` integer `Tensor` with constant\n values. The axes in `x` that index iid samples.\n `Default value:` `None` (treat every dimension as a sample dimension).\n extend_lower_interval: Python `bool`. If `True`, extend the lowest\n interval `I0` to `(-inf, c1]`.\n extend_upper_interval: Python `bool`. If `True`, extend the upper\n interval `I_{K-1}` to `[c_{K-1}, +inf)`.\n dtype: The output type (`int32` or `int64`). `Default value:` `x.dtype`.\n name: A Python string name to prepend to created ops.\n `Default value:` 'histogram'\n\n Returns:\n counts: `Tensor` of type `dtype` and, with\n `~axis = [i for i in range(arr.ndim) if i not in axis]`,\n `counts.shape = [edges.shape[0]] + x.shape[~axis]`.\n With `I` a multi-index into `~axis`, `counts[k][I]` is the number of times\n event(s) fell into the `kth` interval of `edges`.\n\n #### Examples\n\n ```python\n # x.shape = [1000, 2]\n # x[:, 0] ~ Uniform(0, 1), x[:, 1] ~ Uniform(1, 2).\n x = tf.stack([tf.random_uniform([1000]), 1 + tf.random_uniform([1000])],\n axis=-1)\n\n # edges ==> bins [0, 0.5), [0.5, 1.0), [1.0, 1.5), [1.5, 2.0].\n edges = [0., 0.5, 1.0, 1.5, 2.0]\n\n tfp.stats.histogram(x, edges)\n ==> approximately [500, 500, 500, 500]\n\n tfp.stats.histogram(x, edges, axis=0)\n ==> approximately [[500, 500, 0, 0], [0, 0, 500, 500]]\n ```"}
{"_id": "q_808", "text": "Get static number of dimensions and assert that some expectations are met.\n\n This function returns the number of dimensions 'ndims' of x, as a Python int.\n\n The optional expect arguments are used to check the ndims of x, but this is\n only done if the static ndims of x is not None.\n\n Args:\n x: A Tensor.\n expect_static: Expect `x` to have statically defined `ndims`.\n expect_ndims: Optional Python integer. If provided, assert that x has\n number of dimensions equal to this.\n expect_ndims_no_more_than: Optional Python integer. If provided, assert\n that x has no more than this many dimensions.\n expect_ndims_at_least: Optional Python integer. If provided, assert that x\n has at least this many dimensions.\n\n Returns:\n ndims: A Python integer.\n\n Raises:\n ValueError: If any of the expectations above are violated."}
{"_id": "q_809", "text": "Insert the dims in `axis` back as singletons after being removed.\n\n Args:\n x: `Tensor`.\n axis: Python list of integers.\n\n Returns:\n `Tensor` with same values as `x`, but additional singleton dimensions."}
{"_id": "q_810", "text": "Convert possibly negatively indexed axis to non-negative list of ints.\n\n Args:\n axis: Integer Tensor.\n ndims: Number of dimensions into which axis indexes.\n\n Returns:\n A list of non-negative Python integers.\n\n Raises:\n ValueError: If `axis` is not statically defined."}
{"_id": "q_811", "text": "Use `top_k` to sort a `Tensor` along the last dimension."}
{"_id": "q_812", "text": "Build an ordered list of Distribution instances for component models.\n\n Args:\n num_timesteps: Python `int` number of timesteps to model.\n param_vals: a list of `Tensor` parameter values in order corresponding to\n `self.parameters`, or a dict mapping from parameter names to values.\n initial_step: optional `int` specifying the initial timestep to model.\n This is relevant when the model contains time-varying components,\n e.g., holidays or seasonality.\n\n Returns:\n component_ssms: a Python list of `LinearGaussianStateSpaceModel`\n Distribution objects, in order corresponding to `self.components`."}
{"_id": "q_813", "text": "The Amari-alpha Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True`, the Amari-alpha Csiszar-function is:\n\n ```none\n f(u) = { -log(u) + (u - 1), alpha = 0\n { u log(u) - (u - 1), alpha = 1\n { [(u**alpha - 1) - alpha (u - 1)] / (alpha (alpha - 1)), otherwise\n ```\n\n When `self_normalized = False` the `(u - 1)` terms are omitted.\n\n Warning: when `alpha != 0` and/or `self_normalized = True` this function makes\n non-log-space calculations and may therefore be numerically unstable for\n `|logu| >> 0`.\n\n For more information, see:\n A. Cichocki and S. Amari. \"Families of Alpha-, Beta-, and Gamma-Divergences:\n Flexible and Robust Measures of Similarities.\" Entropy, vol. 12, no. 6, pp.\n 1532-1568, 2010.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n alpha: `float`-like Python scalar. (See Mathematical Details for meaning.)\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`. When\n `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even\n when `p, q` are unnormalized measures.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n amari_alpha_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`.\n\n Raises:\n TypeError: if `alpha` is `None` or a `Tensor`.\n TypeError: if `self_normalized` is `None` or a `Tensor`."}
{"_id": "q_814", "text": "The reverse Kullback-Leibler Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True`, the KL-reverse Csiszar-function is:\n\n ```none\n f(u) = -log(u) + (u - 1)\n ```\n\n When `self_normalized = False` the `(u - 1)` term is omitted.\n\n Observe that as an f-Divergence, this Csiszar-function implies:\n\n ```none\n D_f[p, q] = KL[q, p]\n ```\n\n The KL is \"reverse\" because in maximum likelihood we think of minimizing `q`\n as in `KL[p, q]`.\n\n Warning: when `self_normalized = True` this function makes non-log-space\n calculations and may therefore be numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`. When\n `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even\n when `p, q` are unnormalized measures.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n kl_reverse_of_u: `float`-like `Tensor` of the Csiszar-function evaluated at\n `u = exp(logu)`.\n\n Raises:\n TypeError: if `self_normalized` is `None` or a `Tensor`."}
{"_id": "q_815", "text": "The Jensen-Shannon Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True`, the Jensen-Shannon Csiszar-function is:\n\n ```none\n f(u) = u log(u) - (1 + u) log(1 + u) + (u + 1) log(2)\n ```\n\n When `self_normalized = False` the `(u + 1) log(2)` term is omitted.\n\n Observe that as an f-Divergence, this Csiszar-function implies:\n\n ```none\n D_f[p, q] = KL[p, m] + KL[q, m]\n m(x) = 0.5 p(x) + 0.5 q(x)\n ```\n\n In a sense, this divergence is the \"reverse\" of the Arithmetic-Geometric\n f-Divergence.\n\n This Csiszar-function induces a symmetric f-Divergence, i.e.,\n `D_f[p, q] = D_f[q, p]`.\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n For more information, see:\n Lin, J. \"Divergence measures based on the Shannon entropy.\" IEEE Trans.\n Inf. Th., 37, 145-151, 1991.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`. When\n `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even\n when `p, q` are unnormalized measures.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n jensen_shannon_of_u: `float`-like `Tensor` of the Csiszar-function\n evaluated at `u = exp(logu)`."}
{"_id": "q_816", "text": "The Pearson Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Pearson Csiszar-function is:\n\n ```none\n f(u) = (u - 1)**2\n ```\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n pearson_of_u: `float`-like `Tensor` of the Csiszar-function evaluated at\n `u = exp(logu)`."}
{"_id": "q_817", "text": "The Squared-Hellinger Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Squared-Hellinger Csiszar-function is:\n\n ```none\n f(u) = (sqrt(u) - 1)**2\n ```\n\n This Csiszar-function induces a symmetric f-Divergence, i.e.,\n `D_f[p, q] = D_f[q, p]`.\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n squared_hellinger_of_u: `float`-like `Tensor` of the Csiszar-function\n evaluated at `u = exp(logu)`."}
{"_id": "q_818", "text": "The T-Power Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True` the T-Power Csiszar-function is:\n\n ```none\n f(u) = s [ u**t - 1 - t(u - 1) ]\n s = { -1 0 < t < 1\n { +1 otherwise\n ```\n\n When `self_normalized = False` the `- t(u - 1)` term is omitted.\n\n This is similar to the `amari_alpha` Csiszar-function, with the associated\n divergence being the same up to factors depending only on `t`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n t: `Tensor` of same `dtype` as `logu` and broadcastable shape.\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n t_power_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`."}
{"_id": "q_819", "text": "The log1p-abs Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Log1p-Abs Csiszar-function is:\n\n ```none\n f(u) = u**(sign(u-1)) - 1\n ```\n\n This function is so-named because it was invented from the following recipe.\n Choose a convex function g such that g(0)=0 and solve for f:\n\n ```none\n log(1 + f(u)) = g(log(u)).\n <=>\n f(u) = exp(g(log(u))) - 1\n ```\n\n That is, the graph is identically `g` when y-axis is `log1p`-domain and x-axis\n is `log`-domain.\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n log1p_abs_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`."}
{"_id": "q_820", "text": "The Jeffreys Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Jeffreys Csiszar-function is:\n\n ```none\n f(u) = 0.5 ( u log(u) - log(u) )\n = 0.5 kl_forward + 0.5 kl_reverse\n = symmetrized_csiszar_function(kl_reverse)\n = symmetrized_csiszar_function(kl_forward)\n ```\n\n This Csiszar-function induces a symmetric f-Divergence, i.e.,\n `D_f[p, q] = D_f[q, p]`.\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n jeffreys_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`."}
{"_id": "q_821", "text": "The Modified-GAN Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True` the modified-GAN (Generative/Adversarial\n Network) Csiszar-function is:\n\n ```none\n f(u) = log(1 + u) - log(u) + 0.5 (u - 1)\n ```\n\n When `self_normalized = False` the `0.5 (u - 1)` is omitted.\n\n The unmodified GAN Csiszar-function is identical to Jensen-Shannon (with\n `self_normalized = False`).\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`. When\n `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even\n when `p, q` are unnormalized measures.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n modified_gan_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`."}
{"_id": "q_822", "text": "Calculates the dual Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Csiszar-dual is defined as:\n\n ```none\n f^*(u) = u f(1 / u)\n ```\n\n where `f` is some other Csiszar-function.\n\n For example, the dual of `kl_reverse` is `kl_forward`, i.e.,\n\n ```none\n f(u) = -log(u)\n f^*(u) = u f(1 / u) = -u log(1 / u) = u log(u)\n ```\n\n The dual of the dual is the original function:\n\n ```none\n f^**(u) = {u f(1/u)}^*(u) = u (1/u) f(1/(1/u)) = f(u)\n ```\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n csiszar_function: Python `callable` representing a Csiszar-function over\n log-domain.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n dual_f_of_u: `float`-like `Tensor` of the result of calculating the dual of\n `f` at `u = exp(logu)`."}
{"_id": "q_823", "text": "Monte-Carlo approximation of the Csiszar f-Divergence.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Csiszar f-Divergence for Csiszar-function f is given by:\n\n ```none\n D_f[p(X), q(X)] := E_{q(X)}[ f( p(X) / q(X) ) ]\n ~= m**-1 sum_j^m f( p(x_j) / q(x_j) ),\n where x_j ~iid q(X)\n ```\n\n Tricks: Reparameterization and Score-Gradient\n\n When q is \"reparameterized\", i.e., a diffeomorphic transformation of a\n parameterless distribution (e.g.,\n `Normal(Y; m, s) <=> Y = sX + m, X ~ Normal(0,1)`), we can swap gradient and\n expectation, i.e.,\n `grad[Avg{ s_i : i=1...n }] = Avg{ grad[s_i] : i=1...n }` where `S_n=Avg{s_i}`\n and `s_i = f(x_i), x_i ~iid q(X)`.\n\n However, if q is not reparameterized, TensorFlow's gradient will be incorrect\n since the chain-rule stops at samples of unreparameterized distributions. In\n this circumstance using the Score-Gradient trick results in an unbiased\n gradient, i.e.,\n\n ```none\n grad[ E_q[f(X)] ]\n = grad[ int dx q(x) f(x) ]\n = int dx grad[ q(x) f(x) ]\n = int dx [ q'(x) f(x) + q(x) f'(x) ]\n = int dx q(x) [q'(x) / q(x) f(x) + f'(x) ]\n = int dx q(x) grad[ f(x) q(x) / stop_grad[q(x)] ]\n = E_q[ grad[ f(x) q(x) / stop_grad[q(x)] ] ]\n ```\n\n Unless `q.reparameterization_type != tfd.FULLY_REPARAMETERIZED` it is\n usually preferable to set `use_reparametrization = True`.\n\n Example Application:\n\n The Csiszar f-Divergence is a useful framework for variational inference.\n I.e., observe that,\n\n ```none\n f(p(x)) = f( E_{q(Z | x)}[ p(x, Z) / q(Z | x) ] )\n <= E_{q(Z | x)}[ f( p(x, Z) / q(Z | x) ) ]\n := D_f[p(x, Z), q(Z | x)]\n ```\n\n The inequality follows from the fact that the \"perspective\" of `f`, i.e.,\n `(s, t) |-> t f(s / t)`, is convex in `(s, t)` when `s/t in domain(f)` and\n `t` is a real. Since the above framework includes the popular Evidence Lower\n BOund (ELBO) as a special case, i.e., `f(u) = -log(u)`, we call this framework\n \"Evidence Divergence Bound Optimization\" (EDBO).\n\n Args:\n f: Python `callable` representing a Csiszar-function in log-space, i.e.,\n takes `p_log_prob(q_samples) - q.log_prob(q_samples)`.\n p_log_prob: Python `callable` taking (a batch of) samples from `q` and\n returning the natural-log of the probability under distribution `p`.\n (In variational inference `p` is the joint distribution.)\n q: `tf.Distribution`-like instance; must implement:\n `reparameterization_type`, `sample(n, seed)`, and `log_prob(x)`.\n (In variational inference `q` is the approximate posterior distribution.)\n num_draws: Integer scalar number of draws used to approximate the\n f-Divergence expectation.\n use_reparametrization: Python `bool`. When `None` (the default),\n automatically set to:\n `q.reparameterization_type == tfd.FULLY_REPARAMETERIZED`.\n When `True` uses the standard Monte-Carlo average. When `False` uses the\n score-gradient trick. (See above for details.) When `False`, consider\n using `csiszar_vimco`.\n seed: Python `int` seed for `q.sample`.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n monte_carlo_csiszar_f_divergence: `float`-like `Tensor` Monte Carlo\n approximation of the Csiszar f-Divergence.\n\n Raises:\n ValueError: if `q` is not a reparameterized distribution and\n `use_reparametrization = True`. A distribution `q` is said to be\n \"reparameterized\" when its samples are generated by transforming the\n samples of another distribution which does not depend on the\n parameterization of `q`. This property ensures the gradient (with respect\n to parameters) is valid.\n TypeError: if `p_log_prob` is not a Python `callable`."}
{"_id": "q_824", "text": "Helper to `csiszar_vimco`; computes `log_avg_u`, `log_sooavg_u`.\n\n `axis = 0` of `logu` is presumed to correspond to iid samples from `q`, i.e.,\n\n ```none\n logu[j] = log(u[j])\n u[j] = p(x, h[j]) / q(h[j] | x)\n h[j] iid~ q(H | x)\n ```\n\n Args:\n logu: Floating-type `Tensor` representing `log(p(x, h) / q(h | x))`.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n log_avg_u: `logu.dtype` `Tensor` corresponding to the natural-log of the\n average of `u`. The sum of the gradient of `log_avg_u` is `1`.\n log_sooavg_u: `logu.dtype` `Tensor` characterized by the natural-log of the\n average of `u` except that the average swaps-out `u[i]` for the\n leave-`i`-out Geometric-average. The mean of the gradient of\n `log_sooavg_u` is `1`. Mathematically `log_sooavg_u` is,\n ```none\n log_sooavg_u[i] = log(Avg{h[j ; i] : j=0, ..., m-1})\n h[j ; i] = { u[j] j!=i\n { GeometricAverage{u[k] : k != i} j==i\n ```"}
{"_id": "q_825", "text": "Like batch_gather, but broadcasts to the left of axis."}
{"_id": "q_826", "text": "Broadcasts the event or distribution parameters."}
{"_id": "q_827", "text": "r\"\"\"Importance sampling with a positive function, in log-space.\n\n With \\\\(p(z) := exp^{log_p(z)}\\\\), and \\\\(f(z) = exp{log_f(z)}\\\\),\n this `Op` returns\n\n \\\\(Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ], z_i ~ q,\\\\)\n \\\\(\\approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]\\\\)\n \\\\(= Log[E_p[f(Z)]]\\\\)\n\n This integral is done in log-space with max-subtraction to better handle the\n often extreme values that `f(z) p(z) / q(z)` can take on.\n\n In contrast to `expectation_importance_sampler`, this `Op` returns values in\n log-space.\n\n\n User supplies either `Tensor` of samples `z`, or number of samples to draw `n`\n\n Args:\n log_f: Callable mapping samples from `sampling_dist_q` to `Tensors` with\n shape broadcastable to `q.batch_shape`.\n For example, `log_f` works \"just like\" `sampling_dist_q.log_prob`.\n log_p: Callable mapping samples from `sampling_dist_q` to `Tensors` with\n shape broadcastable to `q.batch_shape`.\n For example, `log_p` works \"just like\" `q.log_prob`.\n sampling_dist_q: The sampling distribution.\n `tfp.distributions.Distribution`.\n `float64` `dtype` recommended.\n `log_p` and `q` should be supported on the same set.\n z: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.\n n: Integer `Tensor`. Number of samples to generate if `z` is not provided.\n seed: Python integer to seed the random number generator.\n name: A name to give this `Op`.\n\n Returns:\n Logarithm of the importance sampling estimate. `Tensor` with `shape` equal\n to batch shape of `q`, and `dtype` = `q.dtype`."}
{"_id": "q_828", "text": "Broadcasts the event or samples."}
{"_id": "q_829", "text": "Applies the BFGS algorithm to minimize a differentiable function.\n\n Performs unconstrained minimization of a differentiable function using the\n BFGS scheme. For details of the algorithm, see [Nocedal and Wright(2006)][1].\n\n ### Usage:\n\n The following example demonstrates the BFGS optimizer attempting to find the\n minimum for a simple two dimensional quadratic objective function.\n\n ```python\n minimum = np.array([1.0, 1.0])  # The center of the quadratic bowl.\n scales = np.array([2.0, 3.0])  # The scales along the two axes.\n\n # The objective function and the gradient.\n def quadratic(x):\n value = tf.reduce_sum(scales * (x - minimum) ** 2)\n return value, tf.gradients(value, x)[0]\n\n start = tf.constant([0.6, 0.8])  # Starting point for the search.\n optim_results = tfp.optimizer.bfgs_minimize(\n quadratic, initial_position=start, tolerance=1e-8)\n\n with tf.Session() as session:\n results = session.run(optim_results)\n # Check that the search converged\n assert(results.converged)\n # Check that the argmin is close to the actual value.\n np.testing.assert_allclose(results.position, minimum)\n # Print out the total number of function evaluations it took. Should be 6.\n print (\"Function evaluations: %d\" % results.num_objective_evaluations)\n ```\n\n ### References:\n [1]: Jorge Nocedal, Stephen Wright. Numerical Optimization. Springer Series in\n Operations Research. pp 136-140. 2006\n http://pages.mtu.edu/~struther/Courses/OLD/Sp2013/5630/Jorge_Nocedal_Numerical_optimization_267490.pdf\n\n Args:\n value_and_gradients_function: A Python callable that accepts a point as a\n real `Tensor` and returns a tuple of `Tensor`s of real dtype containing\n the value of the function and its gradient at that point. The function\n to be minimized. The input should be of shape `[..., n]`, where `n` is\n the size of the domain of input points, and all others are batching\n dimensions. The first component of the return value should be a real\n `Tensor` of matching shape `[...]`. The second component (the gradient)\n should also be of shape `[..., n]` like the input value to the function.\n initial_position: real `Tensor` of shape `[..., n]`. The starting point, or\n points when using batching dimensions, of the search procedure. At these\n points the function value and the gradient norm should be finite.\n tolerance: Scalar `Tensor` of real dtype. Specifies the gradient tolerance\n for the procedure. If the supremum norm of the gradient vector is below\n this number, the algorithm is stopped.\n x_tolerance: Scalar `Tensor` of real dtype. If the absolute change in the\n position between one iteration and the next is smaller than this number,\n the algorithm is stopped.\n f_relative_tolerance: Scalar `Tensor` of real dtype. If the relative change\n in the objective value between one iteration and the next is smaller\n than this value, the algorithm is stopped.\n initial_inverse_hessian_estimate: Optional `Tensor` of the same dtype\n as the components of the output of the `value_and_gradients_function`.\n If specified, the shape should be broadcastable to shape `[..., n, n]`; e.g.\n if a single `[n, n]` matrix is provided, it will be automatically\n broadcasted to all batches. Alternatively, one can also specify a\n different hessian estimate for each batch member.\n For the correctness of the algorithm, it is required that this parameter\n be symmetric and positive definite. Specifies the starting estimate for\n the inverse of the Hessian at the initial point. If not specified,\n the identity matrix is used as the starting estimate for the\n inverse Hessian.\n max_iterations: Scalar positive int32 `Tensor`. The maximum number of\n iterations for BFGS updates.\n parallel_iterations: Positive integer. The number of iterations allowed to\n run in parallel.\n stopping_condition: (Optional) A Python function that takes as input two\n Boolean tensors of shape `[...]`, and returns a Boolean scalar tensor.\n The input tensors are `converged` and `failed`, indicating the current\n status of each respective batch member; the return value states whether\n the algorithm should stop. The default is tfp.optimizer.converged_all\n which only stops when all batch members have either converged or failed.\n An alternative is tfp.optimizer.converged_any which stops as soon as one\n batch member has converged, or when all have failed.\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'minimize' is used.\n\n Returns:\n optimizer_results: A namedtuple containing the following items:\n converged: boolean tensor of shape `[...]` indicating for each batch\n member whether the minimum was found within tolerance.\n failed: boolean tensor of shape `[...]` indicating for each batch\n member whether a line search step failed to find a suitable step size\n satisfying Wolfe conditions. In the absence of any constraints on the\n number of objective evaluations permitted, this value will\n be the complement of `converged`. However, if there is\n a constraint and the search stopped due to available\n evaluations being exhausted, both `failed` and `converged`\n will be simultaneously False.\n num_objective_evaluations: The total number of objective\n evaluations performed.\n position: A tensor of shape `[..., n]` containing the last argument value\n found during the search from each starting point. If the search\n converged, then this value is the argmin of the objective function.\n objective_value: A tensor of shape `[...]` with the value of the\n objective function at the `position`. If the search converged, then\n this is the (local) minimum of the objective function.\n objective_gradient: A tensor of shape `[..., n]` containing the gradient\n of the objective function at the `position`. If the search converged\n the max-norm of this tensor should be below the tolerance.\n inverse_hessian_estimate: A tensor of shape `[..., n, n]` containing the\n inverse of the estimated Hessian."}
{"_id": "q_830", "text": "Computes control inputs to validate a provided inverse Hessian.\n\n These ensure that the provided inverse Hessian is positive definite and\n symmetric.\n\n Args:\n inv_hessian: The starting estimate for the inverse of the Hessian at the\n initial point.\n\n Returns:\n A list of tf.Assert ops suitable for use with tf.control_dependencies."}
{"_id": "q_831", "text": "Update the BFGS state by computing the next inverse hessian estimate."}
{"_id": "q_832", "text": "Applies the BFGS update to the inverse Hessian estimate.\n\n The BFGS update rule is (note A^T denotes the transpose of a vector/matrix A).\n\n ```None\n rho = 1/(grad_delta^T * position_delta)\n U = (I - rho * position_delta * grad_delta^T)\n H_1 = U * H_0 * U^T + rho * position_delta * position_delta^T\n ```\n\n Here, `H_0` is the inverse Hessian estimate at the previous iteration and\n `H_1` is the next estimate. Note that `*` should be interpreted as the\n matrix multiplication (with the understanding that matrix multiplication for\n scalars is usual multiplication and for matrix with vector is the action of\n the matrix on the vector.).\n\n The implementation below utilizes an expanded version of the above formula\n to avoid the matrix multiplications that would be needed otherwise. By\n expansion it is easy to see that one only needs matrix-vector or\n vector-vector operations. The expanded version is:\n\n ```None\n f = 1 + rho * (grad_delta^T * H_0 * grad_delta)\n H_1 - H_0 = - rho * [position_delta * (H_0 * grad_delta)^T +\n (H_0 * grad_delta) * position_delta^T] +\n rho * f * [position_delta * position_delta^T]\n ```\n\n All the terms in square brackets are matrices and are constructed using\n vector outer products. All the other terms on the right hand side are scalars.\n Also worth noting that the first and second lines are both rank 1 updates\n applied to the current inverse Hessian estimate.\n\n Args:\n grad_delta: Real `Tensor` of shape `[..., n]`. The difference between the\n gradient at the new position and the old position.\n position_delta: Real `Tensor` of shape `[..., n]`. The change in position\n from the previous iteration to the current one.\n normalization_factor: Real `Tensor` of shape `[...]`. Should be equal to\n `grad_delta^T * position_delta`, i.e. `1/rho` as defined above.\n inv_hessian_estimate: Real `Tensor` of shape `[..., n, n]`. The previous\n estimate of the inverse Hessian. Should be positive definite and\n symmetric.\n\n Returns:\n A tuple containing the following fields\n is_valid: A Boolean `Tensor` of shape `[...]` indicating batch members\n where the update succeeded. The update can fail if the position change\n becomes orthogonal to the gradient change.\n next_inv_hessian_estimate: A `Tensor` of shape `[..., n, n]`. The next\n Hessian estimate updated using the BFGS update scheme. If the\n `inv_hessian_estimate` is symmetric and positive definite, the\n `next_inv_hessian_estimate` is guaranteed to satisfy the same\n conditions."}
{"_id": "q_833", "text": "Transpose a possibly batched matrix.\n\n Args:\n mat: A `tf.Tensor` of shape `[..., n, m]`.\n\n Returns:\n A tensor of shape `[..., m, n]` with matching batch dimensions."}
{"_id": "q_834", "text": "Maybe add `ndims` ones to `x.shape` on the right.\n\n If `ndims` is zero, this is a no-op; otherwise, we will create and return a\n new `Tensor` whose shape is that of `x` with `ndims` ones concatenated on the\n right side. If the shape of `x` is known statically, the shape of the return\n value will be as well.\n\n Args:\n x: The `Tensor` we'll return a reshaping of.\n ndims: Python `integer` number of ones to pad onto `x.shape`.\n Returns:\n If `ndims` is zero, `x`; otherwise, a `Tensor` whose shape is that of `x`\n with `ndims` ones concatenated on the right side. If possible, returns a\n `Tensor` whose shape is known statically.\n Raises:\n ValueError: if `ndims` is not a Python `integer` greater than or equal to\n zero."}
{"_id": "q_835", "text": "Return `Tensor` with right-most ndims summed.\n\n Args:\n x: the `Tensor` whose right-most `ndims` dimensions to sum\n ndims: number of right-most dimensions to sum.\n\n Returns:\n A `Tensor` resulting from calling `reduce_sum` on the `ndims` right-most\n dimensions. If the shape of `x` is statically known, the result will also\n have statically known shape. Otherwise, the resulting shape will only be\n known at runtime."}
{"_id": "q_836", "text": "A sqrt function whose gradient at zero is very large but finite.\n\n Args:\n x: a `Tensor` whose sqrt is to be computed.\n name: a Python `str` prefixed to all ops created by this function.\n Default `None` (i.e., \"sqrt_with_finite_grads\").\n\n Returns:\n sqrt: the square root of `x`, with an overridden gradient at zero\n grad: a gradient function, which is the same as sqrt's gradient everywhere\n except at zero, where it is given a large finite value, instead of `inf`.\n\n Raises:\n TypeError: if `tf.convert_to_tensor(x)` is not a `float` type.\n\n Often in kernel functions, we need to compute the L2 norm of the difference\n between two vectors, `x` and `y`: `sqrt(sum_i((x_i - y_i) ** 2))`. In the\n case where `x` and `y` are identical, e.g., on the diagonal of a kernel\n matrix, we get `NaN`s when we take gradients with respect to the inputs. To\n see this, consider the forward pass:\n\n ```\n [x_1 ... x_N] --> [x_1 ** 2 ... x_N ** 2] -->\n (x_1 ** 2 + ... + x_N ** 2) --> sqrt((x_1 ** 2 + ... + x_N ** 2))\n ```\n\n When we backprop through this forward pass, the `sqrt` yields an `inf` because\n `grad_z(sqrt(z)) = 1 / (2 * sqrt(z))`. Continuing the backprop to the left, at\n the `x ** 2` term, we pick up a `2 * x`, and when `x` is zero, we get\n `0 * inf`, which is `NaN`.\n\n We'd like to avoid these `NaN`s, since they infect the rest of the connected\n computation graph. Practically, when two inputs to a kernel function are\n equal, we are in one of two scenarios:\n 1. We are actually computing k(x, x), in which case norm(x - x) is\n identically zero, independent of x. In this case, we'd like the\n gradient to reflect this independence: it should be zero.\n 2. We are computing k(x, y), and x just *happens* to have the same value\n as y. The gradient at such inputs is in fact ill-defined (there is a\n cusp in the sqrt((x - y) ** 2) surface along the line x = y). There are,\n however, an infinite number of sub-gradients, all of which are valid at\n all such inputs. By symmetry, there is exactly one which is \"special\":\n zero, and we elect to use that value here. In practice, having two\n identical inputs to a kernel matrix is probably a pathological\n situation to be avoided, but that is better resolved at a higher level\n than this.\n\n To avoid the infinite gradient at zero, we use tf.custom_gradient to redefine\n the gradient at zero. We assign it to be a very large value, specifically\n the sqrt of the max value of the floating point dtype of the input. We use\n the sqrt (as opposed to just using the max floating point value) to avoid\n potential overflow when combining this value with others downstream."}
{"_id": "q_837", "text": "Applies the L-BFGS algorithm to minimize a differentiable function.\n\n Performs unconstrained minimization of a differentiable function using the\n L-BFGS scheme. See [Nocedal and Wright(2006)][1] for details of the algorithm.\n\n ### Usage:\n\n The following example demonstrates the L-BFGS optimizer attempting to find the\n minimum for a simple high-dimensional quadratic objective function.\n\n ```python\n # A high-dimensional quadratic bowl.\n ndims = 60\n minimum = np.ones([ndims], dtype='float64')\n scales = np.arange(ndims, dtype='float64') + 1.0\n\n # The objective function and the gradient.\n def quadratic(x):\n value = tf.reduce_sum(scales * (x - minimum) ** 2)\n return value, tf.gradients(value, x)[0]\n\n start = np.arange(ndims, 0, -1, dtype='float64')\n optim_results = tfp.optimizer.lbfgs_minimize(\n quadratic, initial_position=start, num_correction_pairs=10,\n tolerance=1e-8)\n\n with tf.Session() as session:\n results = session.run(optim_results)\n # Check that the search converged\n assert(results.converged)\n # Check that the argmin is close to the actual value.\n np.testing.assert_allclose(results.position, minimum)\n ```\n\n ### References:\n\n [1] Jorge Nocedal, Stephen Wright. Numerical Optimization. Springer Series\n in Operations Research. pp 176-180. 2006\n\n http://pages.mtu.edu/~struther/Courses/OLD/Sp2013/5630/Jorge_Nocedal_Numerical_optimization_267490.pdf\n\n Args:\n value_and_gradients_function: A Python callable that accepts a point as a\n real `Tensor` and returns a tuple of `Tensor`s of real dtype containing\n the value of the function and its gradient at that point. The function\n to be minimized. The input is of shape `[..., n]`, where `n` is the size\n of the domain of input points, and all others are batching dimensions.\n The first component of the return value is a real `Tensor` of matching\n shape `[...]`. The second component (the gradient) is also of shape\n `[..., n]` like the input value to the function.\n initial_position: Real `Tensor` of shape `[..., n]`. The starting point, or\n points when using batching dimensions, of the search procedure. At these\n points the function value and the gradient norm should be finite.\n num_correction_pairs: Positive integer. Specifies the maximum number of\n (position_delta, gradient_delta) correction pairs to keep as implicit\n approximation of the Hessian matrix.\n tolerance: Scalar `Tensor` of real dtype. Specifies the gradient tolerance\n for the procedure. If the supremum norm of the gradient vector is below\n this number, the algorithm is stopped.\n x_tolerance: Scalar `Tensor` of real dtype. If the absolute change in the\n position between one iteration and the next is smaller than this number,\n the algorithm is stopped.\n f_relative_tolerance: Scalar `Tensor` of real dtype. If the relative change\n in the objective value between one iteration and the next is smaller\n than this value, the algorithm is stopped.\n initial_inverse_hessian_estimate: None. Option currently not supported.\n max_iterations: Scalar positive int32 `Tensor`. The maximum number of\n iterations for L-BFGS updates.\n parallel_iterations: Positive integer. The number of iterations allowed to\n run in parallel.\n stopping_condition: (Optional) A Python function that takes as input two\n Boolean tensors of shape `[...]`, and returns a Boolean scalar tensor.\n The input tensors are `converged` and `failed`, indicating the current\n status of each respective batch member; the return value states whether\n the algorithm should stop. The default is tfp.optimizer.converged_all\n which only stops when all batch members have either converged or failed.\n An alternative is tfp.optimizer.converged_any which stops as soon as one\n batch member has converged, or when all have failed.\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'minimize' is used.\n\n Returns:\n optimizer_results: A namedtuple containing the following items:\n converged: Scalar boolean tensor indicating whether the minimum was\n found within tolerance.\n failed: Scalar boolean tensor indicating whether a line search\n step failed to find a suitable step size satisfying Wolfe\n conditions. In the absence of any constraints on the\n number of objective evaluations permitted, this value will\n be the complement of `converged`. However, if there is\n a constraint and the search stopped due to available\n evaluations being exhausted, both `failed` and `converged`\n will be simultaneously False.\n num_objective_evaluations: The total number of objective\n evaluations performed.\n position: A tensor containing the last argument value found\n during the search. If the search converged, then\n this value is the argmin of the objective function.\n objective_value: A tensor containing the value of the objective\n function at the `position`. If the search converged, then this is\n the (local) minimum of the objective function.\n objective_gradient: A tensor containing the gradient of the objective\n function at the `position`. If the search converged the\n max-norm of this tensor should be below the tolerance.\n position_deltas: A tensor encoding information about the latest\n changes in `position` during the algorithm execution.\n gradient_deltas: A tensor encoding information about the latest\n changes in `objective_gradient` during the algorithm execution."}
{"_id": "q_838", "text": "Create LBfgsOptimizerResults with initial state of search procedure."}
{"_id": "q_839", "text": "Computes the search direction to follow at the current state.\n\n On the `k`-th iteration of the main L-BFGS algorithm, the state has collected\n the most recent `m` correction pairs in position_deltas and gradient_deltas,\n where `k = state.num_iterations` and `m = min(k, num_correction_pairs)`.\n\n Assuming these, the code below is an implementation of the L-BFGS two-loop\n recursion algorithm given by [Nocedal and Wright(2006)][1]:\n\n ```None\n q_direction = objective_gradient\n for i in reversed(range(m)): # First loop.\n inv_rho[i] = gradient_deltas[i]^T * position_deltas[i]\n alpha[i] = position_deltas[i]^T * q_direction / inv_rho[i]\n q_direction = q_direction - alpha[i] * gradient_deltas[i]\n\n kth_inv_hessian_factor = (gradient_deltas[-1]^T * position_deltas[-1] /\n gradient_deltas[-1]^T * gradient_deltas[-1])\n r_direction = kth_inv_hessian_factor * I * q_direction\n\n for i in range(m): # Second loop.\n beta = gradient_deltas[i]^T * r_direction / inv_rho[i]\n r_direction = r_direction + position_deltas[i] * (alpha[i] - beta)\n\n return -r_direction # Approximates - H_k * objective_gradient.\n ```\n\n Args:\n state: A `LBfgsOptimizerResults` tuple with the current state of the\n search procedure.\n\n Returns:\n A real `Tensor` of the same shape as the `state.position`. The direction\n along which to perform line search."}
{"_id": "q_840", "text": "Creates a `tf.Tensor` suitable to hold `k` element-shaped tensors.\n\n For example:\n\n ```python\n element = tf.constant([[0., 1., 2., 3., 4.],\n [5., 6., 7., 8., 9.]])\n\n # A queue capable of holding 3 elements.\n _make_empty_queue_for(3, element)\n # => [[[ 0., 0., 0., 0., 0.],\n # [ 0., 0., 0., 0., 0.]],\n #\n # [[ 0., 0., 0., 0., 0.],\n # [ 0., 0., 0., 0., 0.]],\n #\n # [[ 0., 0., 0., 0., 0.],\n # [ 0., 0., 0., 0., 0.]]]\n ```\n\n Args:\n k: A positive scalar integer, number of elements that each queue will hold.\n element: A `tf.Tensor`, only its shape and dtype information are relevant.\n\n Returns:\n A zero-filled `tf.Tensor` of shape `(k,) + tf.shape(element)` and same dtype\n as `element`."}
{"_id": "q_841", "text": "Computes whether each square matrix in the input is positive semi-definite.\n\n Args:\n x: A floating-point `Tensor` of shape `[B1, ..., Bn, M, M]`.\n\n Returns:\n mask: A floating-point `Tensor` of shape `[B1, ... Bn]`. Each\n scalar is 1 if the corresponding matrix was PSD, otherwise 0."}
{"_id": "q_842", "text": "Returns rejection samples from trying to get good correlation matrices.\n\n The proposal being rejected from is the uniform distribution on\n \"correlation-like\" matrices. We say a matrix is \"correlation-like\"\n if it is a symmetric square matrix with all entries between -1 and 1\n (inclusive) and 1s on the main diagonal. Of these, the ones that\n are positive semi-definite are exactly the correlation matrices.\n\n The rejection algorithm, then, is to sample a `Tensor` of\n `sample_shape` correlation-like matrices of dimensions `dim` by\n `dim`, and check each one for (i) being a correlation matrix (i.e.,\n PSD), and (ii) having determinant at least the corresponding entry\n of `det_bounds`.\n\n Args:\n det_bounds: A `Tensor` of lower bounds on the determinants of\n acceptable matrices. The shape must broadcast with `sample_shape`.\n dim: A Python `int` dimension of correlation matrices to sample.\n sample_shape: Python `tuple` of `int` shape of the samples to\n compute, excluding the two matrix dimensions.\n dtype: The `dtype` in which to do the computation.\n seed: Random seed.\n\n Returns:\n weights: A `Tensor` of shape `sample_shape`. Each entry is 0 if the\n corresponding matrix was not a correlation matrix, or had too\n small of a determinant. Otherwise, the entry is the\n multiplicative inverse of the density of proposing that matrix\n uniformly, i.e., the volume of the set of `dim` by `dim`\n correlation-like matrices.\n volume: The volume of the set of `dim` by `dim` correlation-like\n matrices."}
{"_id": "q_843", "text": "Computes a confidence interval for the mean of the given 1-D distribution.\n\n Assumes (and checks) that the given distribution is Bernoulli, i.e.,\n takes only two values. This licenses using the CDF of the binomial\n distribution for the confidence, which is tighter (for extreme\n probabilities) than the DKWM inequality. The method is known as the\n [Clopper-Pearson method]\n (https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval).\n\n Assumes:\n\n - The given samples were drawn iid from the distribution of interest.\n\n - The given distribution is a Bernoulli, i.e., supported only on\n low and high.\n\n Guarantees:\n\n - The probability (over the randomness of drawing the given sample)\n that the true mean is outside the returned interval is no more\n than the given error_rate.\n\n Args:\n samples: `np.ndarray` of samples drawn iid from the distribution\n of interest.\n error_rate: Python `float` admissible rate of mistakes.\n\n Returns:\n low: Lower bound of confidence interval.\n high: Upper bound of confidence interval.\n\n Raises:\n ValueError: If `samples` has rank other than 1 (batch semantics\n are not implemented), or if `samples` contains values other than\n `low` or `high` (as that makes the distribution not Bernoulli)."}
{"_id": "q_844", "text": "Computes the von Mises CDF and its derivative via series expansion."}
{"_id": "q_845", "text": "Computes the von Mises CDF and its derivative via Normal approximation."}
{"_id": "q_846", "text": "Performs one step of the differential evolution algorithm.\n\n Args:\n objective_function: A Python callable that accepts a batch of possible\n solutions and returns the values of the objective function at those\n arguments as a rank 1 real `Tensor`. This specifies the function to be\n minimized. The input to this callable may be either a single `Tensor`\n or a Python `list` of `Tensor`s. The signature must match the format of\n the argument `population`. (i.e. objective_function(*population) must\n return the value of the function to be minimized).\n population: `Tensor` or Python `list` of `Tensor`s representing the\n current population vectors. Each `Tensor` must be of the same real dtype.\n The first dimension indexes individual population members while the\n rest of the dimensions are consumed by the value function. For example,\n if the population is a single `Tensor` of shape [n, m1, m2], then `n` is\n the population size and the output of `objective_function` applied to the\n population is a `Tensor` of shape [n]. If the population is a python\n list of `Tensor`s then each `Tensor` in the list should have the first\n axis of a common size, say `n` and `objective_function(*population)`\n should return a `Tensor` of shape [n]. The population must have at least\n 4 members for the algorithm to work correctly.\n population_values: A `Tensor` of rank 1 and real dtype. The result of\n applying `objective_function` to the `population`. If not supplied it is\n computed using the `objective_function`.\n Default value: None.\n differential_weight: Real scalar `Tensor`. Must be positive and less than\n 2.0. The parameter controlling the strength of mutation.\n Default value: 0.5\n crossover_prob: Real scalar `Tensor`. Must be between 0 and 1. The\n probability of recombination per site.\n Default value: 0.9\n seed: `int` or None. The random seed for this `Op`. If `None`, no seed is\n applied.\n Default value: None.\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'one_step' is\n used.\n Default value: None\n\n Returns:\n A sequence containing the following elements (in order):\n next_population: A `Tensor` or Python `list` of `Tensor`s of the same\n structure as the input population. The population at the next generation.\n next_population_values: A `Tensor` of same shape and dtype as input\n `population_values`. The function values for the `next_population`."}
{"_id": "q_847", "text": "Processes initial args."}
{"_id": "q_848", "text": "Finds the population member with the lowest value."}
{"_id": "q_849", "text": "Computes the mutated vectors for each population member.\n\n Args:\n population: Python `list` of `Tensor`s representing the\n current population vectors. Each `Tensor` must be of the same real dtype.\n The first dimension of each `Tensor` indexes individual\n population members. For example, if the population is a list with a\n single `Tensor` of shape [n, m1, m2], then `n` is the population size and\n the shape of an individual solution is [m1, m2].\n If there is more than one element in the population, then each `Tensor`\n in the list should have the first axis of the same size.\n population_size: Scalar integer `Tensor`. The size of the population.\n mixing_indices: `Tensor` of integral dtype and shape [n, 3] where `n` is the\n number of members in the population. Each element of the `Tensor` must be\n a valid index into the first dimension of the population (i.e range\n between `0` and `n-1` inclusive).\n differential_weight: Real scalar `Tensor`. Must be positive and less than\n 2.0. The parameter controlling the strength of mutation.\n\n Returns:\n mutants: `Tensor` or Python `list` of `Tensor`s of the same shape and dtype\n as the input population. The mutated vectors."}
{"_id": "q_850", "text": "Generates an array of indices suitable for mutation operation.\n\n The mutation operation in differential evolution requires that for every\n element of the population, three distinct other elements be chosen to produce\n a trial candidate. This function generates an array of shape [size, 3]\n satisfying the properties that:\n (a). array[i, :] does not contain the index 'i'.\n (b). array[i, :] does not contain any overlapping indices.\n (c). All elements in the array are between 0 and size - 1 inclusive.\n\n Args:\n size: Scalar integer `Tensor`. The number of samples as well as the range\n of the indices to sample from.\n seed: `int` or None. The random seed for this `Op`. If `None`, no seed is\n applied.\n Default value: `None`.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: 'get_mixing_indices'.\n\n Returns:\n sample: A `Tensor` of shape [size, 3] and same dtype as `size` containing\n samples without replacement between 0 and size - 1 (inclusive) with the\n `i`th row not including the number `i`."}
{"_id": "q_851", "text": "Converts the input arg to a list if it is not a list already.\n\n Args:\n tensor_or_list: A `Tensor` or a Python list of `Tensor`s. The argument to\n convert to a list of `Tensor`s.\n\n Returns:\n A tuple of two elements. The first is a Python list of `Tensor`s containing\n the original arguments. The second is a boolean indicating whether\n the original argument was a list or tuple already."}
{"_id": "q_852", "text": "Gets a Tensor of type `dtype`, 0 if `tol` is None, validation optional."}
{"_id": "q_853", "text": "Soft Thresholding operator.\n\n This operator is defined by the equations\n\n ```none\n { x[i] - gamma, x[i] > gamma\n SoftThreshold(x, gamma)[i] = { 0, x[i] == gamma\n { x[i] + gamma, x[i] < -gamma\n ```\n\n In the context of proximal gradient methods, we have\n\n ```none\n SoftThreshold(x, gamma) = prox_{gamma L1}(x)\n ```\n\n where `prox` is the proximity operator. Thus the soft thresholding operator\n is used in proximal gradient descent for optimizing a smooth function with\n (non-smooth) L1 regularization, as outlined below.\n\n The proximity operator is defined as:\n\n ```none\n prox_r(x) = argmin{ r(z) + 0.5 ||x - z||_2**2 : z },\n ```\n\n where `r` is a (weakly) convex function, not necessarily differentiable.\n Because the L2 norm is strictly convex, the above argmin is unique.\n\n One important application of the proximity operator is as follows. Let `L` be\n a convex and differentiable function with Lipschitz-continuous gradient. Let\n `R` be a convex lower semicontinuous function which is possibly\n nondifferentiable. Let `gamma` be an arbitrary positive real. Then\n\n ```none\n x_star = argmin{ L(x) + R(x) : x }\n ```\n\n if and only if the fixed-point equation is satisfied:\n\n ```none\n x_star = prox_{gamma R}(x_star - gamma grad L(x_star))\n ```\n\n Proximal gradient descent thus typically consists of choosing an initial value\n `x^{(0)}` and repeatedly applying the update\n\n ```none\n x^{(k+1)} = prox_{gamma^{(k)} R}(x^{(k)} - gamma^{(k)} grad L(x^{(k)}))\n ```\n\n where `gamma` is allowed to vary from iteration to iteration. Specializing to\n the case where `R(x) = ||x||_1`, we minimize `L(x) + ||x||_1` by repeatedly\n applying the update\n\n ```\n x^{(k+1)} = SoftThreshold(x - gamma grad L(x^{(k)}), gamma)\n ```\n\n (This idea can also be extended to second-order approximations, although the\n multivariate case does not have a known closed form like above.)\n\n Args:\n x: `float` `Tensor` representing the input to the SoftThreshold function.\n threshold: nonnegative scalar, `float` `Tensor` representing the radius of\n the interval on which each coordinate of SoftThreshold takes the value\n zero. Denoted `gamma` above.\n name: Python string indicating the name of the TensorFlow operation.\n Default value: `'soft_threshold'`.\n\n Returns:\n softthreshold: `float` `Tensor` with the same shape and dtype as `x`,\n representing the value of the SoftThreshold function.\n\n #### References\n\n [1]: Yu, Yao-Liang. The Proximity Operator.\n https://www.cs.cmu.edu/~suvrit/teach/yaoliang_proximity.pdf\n\n [2]: Wikipedia Contributors. Proximal gradient methods for learning.\n _Wikipedia, The Free Encyclopedia_, 2018.\n https://en.wikipedia.org/wiki/Proximal_gradient_methods_for_learning"}
{"_id": "q_854", "text": "Clips values to a specified min and max while leaving gradient unaltered.\n\n Like `tf.clip_by_value`, this function returns a tensor of the same type and\n shape as input `t` but with values clamped to be no smaller than\n `clip_value_min` and no larger than `clip_value_max`. Unlike\n `tf.clip_by_value`, the gradient is unaffected by this op, i.e.,\n\n ```python\n tf.gradients(tfp.math.clip_by_value_preserve_gradient(x), x)[0]\n # ==> ones_like(x)\n ```\n\n Note: `clip_value_min` needs to be smaller or equal to `clip_value_max` for\n correct results.\n\n Args:\n t: A `Tensor`.\n clip_value_min: A scalar `Tensor`, or a `Tensor` with the same shape\n as `t`. The minimum value to clip by.\n clip_value_max: A scalar `Tensor`, or a `Tensor` with the same shape\n as `t`. The maximum value to clip by.\n name: A name for the operation (optional).\n Default value: `'clip_by_value_preserve_gradient'`.\n\n Returns:\n clipped_t: A clipped `Tensor`."}
{"_id": "q_855", "text": "Build an iterator over training batches."}
{"_id": "q_856", "text": "Converts a sequence of productions into a string of terminal symbols.\n\n Args:\n productions: Tensor of shape [1, num_productions, num_production_rules].\n Slices along the `num_productions` dimension represent one-hot vectors.\n\n Returns:\n str that concatenates all terminal symbols from `productions`.\n\n Raises:\n ValueError: If the first production rule does not begin with\n `self.start_symbol`."}
{"_id": "q_857", "text": "Runs the model forward to generate a sequence of productions.\n\n Args:\n inputs: Unused.\n\n Returns:\n productions: Tensor of shape [1, num_productions, num_production_rules].\n Slices along the `num_productions` dimension represent one-hot vectors."}
{"_id": "q_858", "text": "Runs the model forward to return a stochastic encoding.\n\n Args:\n inputs: Tensor of shape [1, num_productions, num_production_rules]. It is\n a sequence of productions of length `num_productions`. Each production\n is a one-hot vector of length `num_production_rules`: it determines\n which production rule the production corresponds to.\n\n Returns:\n latent_code_posterior: A random variable capturing a sample from the\n variational distribution, of shape [1, self.latent_size]."}
{"_id": "q_859", "text": "Integral of the `hat` function, used for sampling.\n\n We choose a `hat` function, h(x) = x^(-power), which is a continuous\n (unnormalized) density touching each positive integer at the (unnormalized)\n pmf. This function implements `hat` integral: H(x) = int_x^inf h(t) dt;\n which is needed for sampling purposes.\n\n Arguments:\n x: A Tensor of points x at which to evaluate H(x).\n\n Returns:\n A Tensor containing evaluation H(x) at x."}
{"_id": "q_860", "text": "Inverse function of _hat_integral."}
{"_id": "q_861", "text": "Compute the matrix rank; the number of non-zero SVD singular values.\n\n Arguments:\n a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be\n pseudo-inverted.\n tol: Threshold below which the singular value is counted as \"zero\".\n Default value: `None` (i.e., `eps * max(rows, cols) * max(singular_val)`).\n validate_args: When `True`, additional assertions might be embedded in the\n graph.\n Default value: `False` (i.e., no graph assertions are added).\n name: Python `str` prefixed to ops created by this function.\n Default value: \"matrix_rank\".\n\n Returns:\n matrix_rank: (Batch of) `int32` scalars representing the number of non-zero\n singular values."}
{"_id": "q_862", "text": "Compute the Moore-Penrose pseudo-inverse of a matrix.\n\n Calculate the [generalized inverse of a matrix](\n https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its\n singular-value decomposition (SVD) and including all large singular values.\n\n The pseudo-inverse of a matrix `A`, is defined as: \"the matrix that 'solves'\n [the least-squares problem] `A @ x = b`,\" i.e., if `x_hat` is a solution, then\n `A_pinv` is the matrix such that `x_hat = A_pinv @ b`. It can be shown that if\n `U @ Sigma @ V.T = A` is the singular value decomposition of `A`, then\n `A_pinv = V @ inv(Sigma) U^T`. [(Strang, 1980)][1]\n\n This function is analogous to [`numpy.linalg.pinv`](\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.pinv.html).\n It differs only in default value of `rcond`. In `numpy.linalg.pinv`, the\n default `rcond` is `1e-15`. Here the default is\n `10. * max(num_rows, num_cols) * np.finfo(dtype).eps`.\n\n Args:\n a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be\n pseudo-inverted.\n rcond: `Tensor` of small singular value cutoffs. Singular values smaller\n (in modulus) than `rcond` * largest_singular_value (again, in modulus) are\n set to zero. Must broadcast against `tf.shape(a)[:-2]`.\n Default value: `10. * max(num_rows, num_cols) * np.finfo(a.dtype).eps`.\n validate_args: When `True`, additional assertions might be embedded in the\n graph.\n Default value: `False` (i.e., no graph assertions are added).\n name: Python `str` prefixed to ops created by this function.\n Default value: \"pinv\".\n\n Returns:\n a_pinv: The pseudo-inverse of input `a`. Has same shape as `a` except\n rightmost two dimensions are transposed.\n\n Raises:\n TypeError: if input `a` does not have `float`-like `dtype`.\n ValueError: if input `a` has fewer than 2 dimensions.\n\n #### Examples\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n a = tf.constant([[1., 0.4, 0.5],\n [0.4, 0.2, 0.25],\n [0.5, 0.25, 0.35]])\n tf.matmul(tfp.math.pinv(a), a)\n # ==> array([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]], dtype=float32)\n\n a = tf.constant([[1., 0.4, 0.5, 1.],\n [0.4, 0.2, 0.25, 2.],\n [0.5, 0.25, 0.35, 3.]])\n tf.matmul(tfp.math.pinv(a), a)\n # ==> array([[ 0.76, 0.37, 0.21, -0.02],\n [ 0.37, 0.43, -0.33, 0.02],\n [ 0.21, -0.33, 0.81, 0.01],\n [-0.02, 0.02, 0.01, 1. ]], dtype=float32)\n ```\n\n #### References\n\n [1]: G. Strang. \"Linear Algebra and Its Applications, 2nd Ed.\" Academic Press,\n Inc., 1980, pp. 139-142."}
{"_id": "q_863", "text": "Solves systems of linear eqns `A X = RHS`, given LU factorizations.\n\n Note: this function does not verify the implied matrix is actually invertible\n nor is this condition checked even when `validate_args=True`.\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if\n `matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linalg.lu`, i.e., if\n `matmul(P, matmul(L, U)) = X` then `perm = argmax(P)`.\n rhs: Matrix-shaped float `Tensor` representing targets for which to solve;\n `A X = RHS`. To handle vector cases, use:\n `lu_solve(..., rhs[..., tf.newaxis])[..., 0]`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness. Note: this function does not verify the implied matrix is\n actually invertible, even when `validate_args=True`.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., \"lu_solve\").\n\n Returns:\n x: The `X` in `A @ X = RHS`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[1., 2],\n [3, 4]],\n [[7, 8],\n [3, 4]]]\n inv_x = tfp.math.lu_solve(*tf.linalg.lu(x), rhs=tf.eye(2))\n tf.assert_near(tf.matrix_inverse(x), inv_x)\n # ==> True\n ```"}
{"_id": "q_864", "text": "Computes a matrix inverse given the matrix's LU decomposition.\n\n This op is conceptually identical to,\n\n ```python\n inv_X = tf.lu_matrix_inverse(*tf.linalg.lu(X))\n tf.assert_near(tf.matrix_inverse(X), inv_X)\n # ==> True\n ```\n\n Note: this function does not verify the implied matrix is actually invertible\n nor is this condition checked even when `validate_args=True`.\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if\n `matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linalg.lu`, i.e., if\n `matmul(P, matmul(L, U)) = X` then `perm = argmax(P)`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness. Note: this function does not verify the implied matrix is\n actually invertible, even when `validate_args=True`.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., \"lu_matrix_inverse\").\n\n Returns:\n inv_x: The matrix_inv, i.e.,\n `tf.matrix_inverse(tfp.math.lu_reconstruct(lu, perm))`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[3., 4], [1, 2]],\n [[7., 8], [3, 4]]]\n inv_x = tfp.math.lu_matrix_inverse(*tf.linalg.lu(x))\n tf.assert_near(tf.matrix_inverse(x), inv_x)\n # ==> True\n ```"}
{"_id": "q_865", "text": "Returns list of assertions related to `lu_solve` assumptions."}
{"_id": "q_866", "text": "Returns a block diagonal rank 2 SparseTensor from a batch of SparseTensors.\n\n Args:\n sp_a: A rank 3 `SparseTensor` representing a batch of matrices.\n\n Returns:\n sp_block_diag_a: matrix-shaped, `float` `SparseTensor` with the same dtype\n as `sparse_or_matrix`, of shape [B * M, B * N] where `sp_a` has shape\n [B, M, N]. Each [M, N] batch of `sp_a` is lined up along the diagonal."}
{"_id": "q_867", "text": "Computes the neg-log-likelihood gradient and Fisher information for a GLM.\n\n Note that Fisher information is related to the Hessian of the log-likelihood\n by the equation\n\n ```none\n FisherInfo = E[Hessian with respect to model_coefficients of -LogLikelihood(\n Y | model_matrix, model_coefficients)]\n ```\n\n where `LogLikelihood` is the log-likelihood of a generalized linear model\n parameterized by `model_matrix` and `model_coefficients`, and the expectation\n is taken over Y, distributed according to the same GLM with the same parameter\n values.\n\n Args:\n model_matrix: (Batch of) matrix-shaped, `float` `Tensor` or `SparseTensor`\n where each row represents a sample's features. Has shape `[N, n]` where\n `N` is the number of data samples and `n` is the number of features per\n sample.\n linear_response: (Batch of) vector-shaped `Tensor` with the same dtype as\n `model_matrix`, equal to `model_matrix @ model_coefficients` where\n `model_coefficients` are the coefficients of the linear component of the\n GLM.\n response: (Batch of) vector-shaped `Tensor` with the same dtype as\n `model_matrix` where each element represents a sample's observed response\n (to the corresponding row of features).\n model: `tfp.glm.ExponentialFamily`-like instance, which specifies the link\n function and distribution of the GLM, and thus characterizes the negative\n log-likelihood. Must have sufficient statistic equal to the response, that\n is, `T(y) = y`.\n\n Returns:\n grad_neg_log_likelihood: (Batch of) vector-shaped `Tensor` with the same\n shape and dtype as a single row of `model_matrix`, representing the\n gradient of the negative log likelihood of `response` given linear\n response `linear_response`.\n fim_middle: (Batch of) vector-shaped `Tensor` with the same shape and dtype\n as a single column of `model_matrix`, satisfying the equation\n `Fisher information =\n Transpose(model_matrix)\n @ diag(fim_middle)\n @ model_matrix`."}
{"_id": "q_868", "text": "Fits a GLM using coordinate-wise FIM-informed proximal gradient descent.\n\n This function uses an L1- and L2-regularized, second-order quasi-Newton method\n to find maximum-likelihood parameters for the given model and observed data.\n The second-order approximations use negative Fisher information in place of\n the Hessian, that is,\n\n ```none\n FisherInfo = E_Y[Hessian with respect to model_coefficients of -LogLikelihood(\n Y | model_matrix, current value of model_coefficients)]\n ```\n\n For large, sparse data sets, `model_matrix` should be supplied as a\n `SparseTensor`.\n\n Args:\n model_matrix: (Batch of) matrix-shaped, `float` `Tensor` or `SparseTensor`\n where each row represents a sample's features. Has shape `[N, n]` where\n `N` is the number of data samples and `n` is the number of features per\n sample.\n response: (Batch of) vector-shaped `Tensor` with the same dtype as\n `model_matrix` where each element represents a sample's observed response\n (to the corresponding row of features).\n model: `tfp.glm.ExponentialFamily`-like instance, which specifies the link\n function and distribution of the GLM, and thus characterizes the negative\n log-likelihood which will be minimized. Must have sufficient statistic\n equal to the response, that is, `T(y) = y`.\n model_coefficients_start: (Batch of) vector-shaped, `float` `Tensor` with\n the same dtype as `model_matrix`, representing the initial values of the\n coefficients for the GLM regression. Has shape `[n]` where `model_matrix`\n has shape `[N, n]`.\n tolerance: scalar, `float` `Tensor` representing the tolerance for each\n optimization step; see the `tolerance` argument of `fit_sparse_one_step`.\n l1_regularizer: scalar, `float` `Tensor` representing the weight of the L1\n regularization term.\n l2_regularizer: scalar, `float` `Tensor` representing the weight of the L2\n regularization term.\n Default value: `None` (i.e., no L2 regularization).\n maximum_iterations: Python integer specifying maximum number of iterations\n of the outer loop of the optimizer (i.e., maximum number of calls to\n `fit_sparse_one_step`). After this many iterations of the outer loop, the\n algorithm will terminate even if the return value `model_coefficients` has\n not converged.\n Default value: `1`.\n maximum_full_sweeps_per_iteration: Python integer specifying the maximum\n number of coordinate descent sweeps allowed in each iteration.\n Default value: `1`.\n learning_rate: scalar, `float` `Tensor` representing a multiplicative factor\n used to dampen the proximal gradient descent steps.\n Default value: `None` (i.e., factor is conceptually `1`).\n name: Python string representing the name of the TensorFlow operation.\n The default name is `\"fit_sparse\"`.\n\n Returns:\n model_coefficients: (Batch of) `Tensor` of the same shape and dtype as\n `model_coefficients_start`, representing the computed model coefficients\n which minimize the regularized negative log-likelihood.\n is_converged: scalar, `bool` `Tensor` indicating whether the minimization\n procedure converged across all batches within the specified number of\n iterations. Here convergence means that an iteration of the inner loop\n (`fit_sparse_one_step`) returns `True` for its `is_converged` output\n value.\n iter: scalar, `int` `Tensor` indicating the actual number of iterations of\n the outer loop of the optimizer completed (i.e., number of calls to\n `fit_sparse_one_step` before achieving convergence).\n\n #### Example\n\n ```python\n from __future__ import print_function\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n def make_dataset(n, d, link, scale=1., dtype=np.float32):\n model_coefficients = tfd.Uniform(\n low=np.array(-1, dtype), high=np.array(1, dtype)).sample(\n d, seed=42)\n radius = np.sqrt(2.)\n model_coefficients *= radius / tf.linalg.norm(model_coefficients)\n mask = tf.random_shuffle(tf.range(d)) < tf.to_int32(0.5 * tf.to_float(d))\n model_coefficients = tf.where(mask, model_coefficients,\n tf.zeros_like(model_coefficients))\n model_matrix = tfd.Normal(\n loc=np.array(0, dtype), scale=np.array(1, dtype)).sample(\n [n, d], seed=43)\n scale = tf.convert_to_tensor(scale, dtype)\n linear_response = tf.matmul(model_matrix,\n model_coefficients[..., tf.newaxis])[..., 0]\n if link == 'linear':\n response = tfd.Normal(loc=linear_response, scale=scale).sample(seed=44)\n elif link == 'probit':\n response = tf.cast(\n tfd.Normal(loc=linear_response, scale=scale).sample(seed=44) > 0,\n dtype)\n elif link == 'logit':\n response = tfd.Bernoulli(logits=linear_response).sample(seed=44)\n else:\n raise ValueError('unrecognized true link: {}'.format(link))\n return model_matrix, response, model_coefficients, mask\n\n with tf.Session() as sess:\n x_, y_, model_coefficients_true_, _ = sess.run(make_dataset(\n n=int(1e5), d=100, link='probit'))\n\n model = tfp.glm.Bernoulli()\n model_coefficients_start = tf.zeros(x_.shape[-1], np.float32)\n\n model_coefficients, is_converged, num_iter = tfp.glm.fit_sparse(\n model_matrix=tf.convert_to_tensor(x_),\n response=tf.convert_to_tensor(y_),\n model=model,\n model_coefficients_start=model_coefficients_start,\n l1_regularizer=800.,\n l2_regularizer=None,\n maximum_iterations=10,\n maximum_full_sweeps_per_iteration=10,\n tolerance=1e-6,\n learning_rate=None)\n\n model_coefficients_, is_converged_, num_iter_ = sess.run([\n model_coefficients, is_converged, num_iter])\n\n print(\"is_converged:\", is_converged_)\n print(\" num_iter:\", num_iter_)\n print(\"\\nLearned / True\")\n print(np.concatenate(\n [[model_coefficients_], [model_coefficients_true_]], axis=0).T)\n\n # ==>\n # is_converged: True\n # num_iter: 1\n #\n # Learned / True\n # [[ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0.11195257 0.12484948]\n # [ 0. 0. ]\n # [ 0.05191106 0.06394956]\n # [-0.15090358 -0.15325639]\n # [-0.18187316 -0.18825999]\n # [-0.06140942 -0.07994166]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0.14474444 0.15810856]\n # [ 0. 0. ]\n # [-0.25249591 -0.24260855]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.03888761 -0.06755984]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.0192222 -0.04169233]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0.01434913 0.03568212]\n # [-0.11336883 -0.12873614]\n # [ 0. 0. ]\n # [-0.24496339 -0.24048163]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0.04088281 0.06565224]\n # [-0.12784363 -0.13359821]\n # [ 0.05618424 0.07396613]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. -0.01719233]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.00076072 -0.03607186]\n # [ 0.21801499 0.21146794]\n # [-0.02161094 -0.04031265]\n # [ 0.0918689 0.10487888]\n # [ 0.0106154 0.03233612]\n # [-0.07817317 -0.09725142]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.23725343 -0.24194022]\n # [ 0. 0. ]\n # [-0.08725718 -0.1048776 ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.02114314 -0.04145789]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.02710908 -0.04590397]\n # [ 0.15293184 0.15415154]\n # [ 0.2114463 0.2088728 ]\n # [-0.10969634 -0.12368613]\n # [ 0. -0.01505797]\n # [-0.01140458 -0.03234904]\n # [ 0.16051085 0.1680062 ]\n # [ 0.09816848 0.11094204]\n ```\n\n #### References\n\n [1]: Jerome Friedman, Trevor Hastie and Rob Tibshirani. Regularization Paths\n for Generalized Linear Models via Coordinate Descent. _Journal of\n Statistical Software_, 33(1), 2010.\n https://www.jstatsoft.org/article/view/v033i01/v33i01.pdf\n\n [2]: Guo-Xun Yuan, Chia-Hua Ho and Chih-Jen Lin. An Improved GLMNET for\n L1-regularized Logistic Regression. _Journal of Machine Learning\n Research_, 13, 2012.\n http://www.jmlr.org/papers/volume13/yuan12a/yuan12a.pdf"}
{"_id": "q_869", "text": "Generate the mask for building an autoregressive dense layer."}
{"_id": "q_870", "text": "An autoregressively masked dense layer. Analogous to `tf.layers.dense`.\n\n See [Germain et al. (2015)][1] for detailed explanation.\n\n Arguments:\n inputs: Tensor input.\n units: Python `int` scalar representing the dimensionality of the output\n space.\n num_blocks: Python `int` scalar representing the number of blocks for the\n MADE masks.\n exclusive: Python `bool` scalar representing whether to zero the diagonal of\n the mask, used for the first layer of a MADE.\n kernel_initializer: Initializer function for the weight matrix.\n If `None` (default), weights are initialized using the\n `tf.glorot_random_initializer`.\n reuse: Python `bool` scalar representing whether to reuse the weights of a\n previous layer by the same name.\n name: Python `str` used to describe ops managed by this function.\n *args: `tf.layers.dense` arguments.\n **kwargs: `tf.layers.dense` keyword arguments.\n\n Returns:\n Output tensor.\n\n Raises:\n NotImplementedError: if rightmost dimension of `inputs` is unknown prior to\n graph execution.\n\n #### References\n\n [1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE:\n Masked Autoencoder for Distribution Estimation. In _International\n Conference on Machine Learning_, 2015. https://arxiv.org/abs/1502.03509"}
{"_id": "q_871", "text": "Returns degree vectors for the input."}
{"_id": "q_872", "text": "Returns a list of binary mask matrices enforcing autoregressivity."}
{"_id": "q_873", "text": "Returns a masked version of the given initializer."}
{"_id": "q_874", "text": "See tfkl.Layer.call."}
{"_id": "q_875", "text": "Sample a multinomial.\n\n The batch shape is given by broadcasting num_trials with\n remove_last_dimension(logits).\n\n Args:\n num_samples: Python int or singleton integer Tensor: number of multinomial\n samples to draw.\n num_classes: Python int or singleton integer Tensor: number of classes.\n logits: Floating Tensor with last dimension k, of (unnormalized) logit\n probabilities per class.\n num_trials: Tensor of number of categorical trials each multinomial consists\n of. num_trials[..., tf.newaxis] must broadcast with logits.\n dtype: dtype at which to emit samples.\n seed: Random seed.\n\n Returns:\n samples: Tensor of given dtype and shape [n] + batch_shape + [k]."}
{"_id": "q_876", "text": "Build a zero-dimensional MVNDiag object."}
{"_id": "q_877", "text": "Computes the number of edges on the longest path from node to root."}
{"_id": "q_878", "text": "Creates lists of callables suitable for JDSeq."}
{"_id": "q_879", "text": "Variational loss for the VGP.\n\n Given `observations` and `observation_index_points`, compute the\n negative variational lower bound as specified in [Hensman, 2013][1].\n\n Args:\n observations: `float` `Tensor` representing collection, or batch of\n collections, of observations corresponding to\n `observation_index_points`. Shape has the form `[b1, ..., bB, e]`, which\n must be broadcastable with the batch and example shapes of\n `observation_index_points`. The batch shape `[b1, ..., bB]` must be\n broadcastable with the shapes of all other batched parameters\n (`kernel.batch_shape`, `observation_index_points`, etc.).\n observation_index_points: `float` `Tensor` representing finite (batch of)\n vector(s) of points where observations are defined. Shape has the\n form `[b1, ..., bB, e1, f1, ..., fF]` where `F` is the number of feature\n dimensions and must equal `kernel.feature_ndims` and `e1` is the number\n (size) of index points in each batch (we denote it `e1` to distinguish\n it from the number of inducing index points, denoted `e2` below). If\n set to `None` uses `index_points` as the origin for observations.\n Default value: None.\n kl_weight: Amount by which to scale the KL divergence loss between prior\n and posterior.\n Default value: 1.\n name: Python `str` name prefixed to Ops created by this class.\n Default value: \"GaussianProcess\".\n Returns:\n loss: Scalar tensor representing the negative variational lower bound.\n Can be directly used in a `tf.Optimizer`.\n Raises:\n ValueError: if `mean_fn` is not `None` and is not callable.\n\n #### References\n\n [1]: Hensman, J., Lawrence, N. \"Gaussian Processes for Big Data\", 2013\n https://arxiv.org/abs/1309.6835"}
{"_id": "q_880", "text": "Model selection for optimal variational hyperparameters.\n\n Given the full training set (parameterized by `observations` and\n `observation_index_points`), compute the optimal variational\n location and scale for the VGP. This is based of the method suggested\n in [Titsias, 2009][1].\n\n Args:\n kernel: `PositiveSemidefiniteKernel`-like instance representing the\n GP's covariance function.\n inducing_index_points: `float` `Tensor` of locations of inducing points in\n the index set. Shape has the form `[b1, ..., bB, e2, f1, ..., fF]`, just\n like `observation_index_points`. The batch shape components needn't be\n identical to those of `observation_index_points`, but must be broadcast\n compatible with them.\n observation_index_points: `float` `Tensor` representing finite (batch of)\n vector(s) of points where observations are defined. Shape has the\n form `[b1, ..., bB, e1, f1, ..., fF]` where `F` is the number of feature\n dimensions and must equal `kernel.feature_ndims` and `e1` is the number\n (size) of index points in each batch (we denote it `e1` to distinguish\n it from the number of inducing index points, denoted `e2` below).\n observations: `float` `Tensor` representing collection, or batch of\n collections, of observations corresponding to\n `observation_index_points`. Shape has the form `[b1, ..., bB, e]`, which\n must be broadcastable with the batch and example shapes of\n `observation_index_points`. The batch shape `[b1, ..., bB]` must be\n broadcastable with the shapes of all other batched parameters\n (`kernel.batch_shape`, `observation_index_points`, etc.).\n observation_noise_variance: `float` `Tensor` representing the variance\n of the noise in the Normal likelihood distribution of the model. May be\n batched, in which case the batch shape must be broadcastable with the\n shapes of all other batched parameters (`kernel.batch_shape`,\n `index_points`, etc.).\n Default value: `0.`\n mean_fn: Python `callable` that acts on index points to produce a (batch\n of) vector(s) of mean values at those index points. Takes a `Tensor` of\n shape `[b1, ..., bB, f1, ..., fF]` and returns a `Tensor` whose shape is\n (broadcastable with) `[b1, ..., bB]`. Default value: `None` implies\n constant zero function.\n jitter: `float` scalar `Tensor` added to the diagonal of the covariance\n matrix to ensure positive definiteness of the covariance matrix.\n Default value: `1e-6`.\n name: Python `str` name prefixed to Ops created by this class.\n Default value: \"optimal_variational_posterior\".\n Returns:\n loc, scale: Tuple representing the variational location and scale.\n Raises:\n ValueError: if `mean_fn` is not `None` and is not callable.\n\n #### References\n\n [1]: Titsias, M. \"Variational Model Selection for Sparse Gaussian Process\n Regression\", 2009.\n http://proceedings.mlr.press/v5/titsias09a/titsias09a.pdf"}
{"_id": "q_881", "text": "Build utility method to compute whether the season is changing."}
{"_id": "q_882", "text": "Build a function computing transitions for a seasonal effect model."}
{"_id": "q_883", "text": "Build the transition noise model for a SeasonalStateSpaceModel."}
{"_id": "q_884", "text": "Returns `True` if given observation data is empty.\n\n Emptiness means either\n 1. Both `observation_index_points` and `observations` are `None`, or\n 2. the \"number of observations\" shape is 0. The shape of\n `observation_index_points` is `[..., N, f1, ..., fF]`, where `N` is the\n number of observations and the `f`s are feature dims. Thus, we look at the\n shape element just to the left of the leftmost feature dim. If that shape is\n zero, we consider the data empty.\n\n We don't check the shape of observations; validations are checked elsewhere in\n the calling code, to ensure these shapes are consistent.\n\n Args:\n feature_ndims: the number of feature dims, as reported by the GP kernel.\n observation_index_points: the observation data locations in the index set.\n observations: the observation data.\n\n Returns:\n is_empty: True if the data were deemed to be empty."}
{"_id": "q_885", "text": "Ensure that observation data and locations have consistent shapes.\n\n This basically means that the batch shapes are broadcastable. We can only\n ensure this when those shapes are fully statically defined.\n\n\n Args:\n kernel: The GP kernel.\n observation_index_points: the observation data locations in the index set.\n observations: the observation data.\n\n Raises:\n ValueError: if the observations' batch shapes are not broadcastable."}
{"_id": "q_886", "text": "Add a learning rate scheduler to the contained `schedules`\n\n :param scheduler: learning rate scheduler to be add\n :param max_iteration: iteration numbers this scheduler will run"}
{"_id": "q_887", "text": "Configure checkpoint settings.\n\n\n :param checkpoint_trigger: the interval to write snapshots\n :param checkpoint_path: the path to write snapshots into\n :param isOverWrite: whether to overwrite existing snapshots in path.default is True"}
{"_id": "q_888", "text": "Configure constant clipping settings.\n\n\n :param min_value: the minimum value to clip by\n :param max_value: the maxmimum value to clip by"}
{"_id": "q_889", "text": "Do an optimization."}
{"_id": "q_890", "text": "Set validation summary. A ValidationSummary object contains information\n necessary for the optimizer to know how often the logs are recorded,\n where to store the logs and how to retrieve them, etc. For details,\n refer to the docs of ValidationSummary.\n\n\n :param summary: a ValidationSummary object"}
{"_id": "q_891", "text": "Parse or download news20 if source_dir is empty.\n\n :param source_dir: The directory storing news data.\n :return: A list of (tokens, label)"}
{"_id": "q_892", "text": "Parse or download the pre-trained glove word2vec if source_dir is empty.\n\n :param source_dir: The directory storing the pre-trained word2vec\n :param dim: The dimension of a vector\n :return: A dict mapping from word to vector"}
{"_id": "q_893", "text": "Train a model for a fixed number of epochs on a dataset.\n\n # Arguments\n x: Input data. A Numpy array or RDD of Sample or Image DataSet.\n y: Labels. A Numpy array. Default is None if x is already RDD of Sample or Image DataSet.\n batch_size: Number of samples per gradient update.\n nb_epoch: Number of iterations to train.\n validation_data: Tuple (x_val, y_val) where x_val and y_val are both Numpy arrays.\n Or RDD of Sample. Default is None if no validation is involved.\n distributed: Boolean. Whether to train the model in distributed mode or local mode.\n Default is True. In local mode, x and y must both be Numpy arrays."}
{"_id": "q_894", "text": "Evaluate a model on a given dataset in distributed mode.\n\n # Arguments\n x: Input data. A Numpy array or RDD of Sample.\n y: Labels. A Numpy array. Default is None if x is already RDD of Sample.\n batch_size: Number of samples per gradient update."}
{"_id": "q_895", "text": "Get mnist dataset and parallelize into RDDs.\n Data would be downloaded automatically if it doesn't present at the specific location.\n\n :param sc: SparkContext.\n :param data_type: \"train\" for training data and \"test\" for testing data.\n :param location: Location to store mnist dataset.\n :return: RDD of (features: ndarray, label: ndarray)."}
{"_id": "q_896", "text": "Preprocess mnist dataset.\n Normalize and transform into Sample of RDDs."}
{"_id": "q_897", "text": "When to end the optimization based on input option."}
{"_id": "q_898", "text": "Set validation and checkpoint for distributed optimizer."}
{"_id": "q_899", "text": "Return the broadcasted value"}
{"_id": "q_900", "text": "Call Java Function"}
{"_id": "q_901", "text": "Return a JavaRDD of Object by unpickling\n\n\n It will convert each Python object into Java object by Pyrolite, whenever\n the RDD is serialized in batch or not."}
{"_id": "q_902", "text": "Convert Python object into Java"}
{"_id": "q_903", "text": "Convert to a bigdl activation layer\n given the name of the activation as a string"}
{"_id": "q_904", "text": "Convert a ndarray to a DenseTensor which would be used in Java side.\n\n >>> import numpy as np\n >>> from bigdl.util.common import JTensor\n >>> from bigdl.util.common import callBigDlFunc\n >>> np.random.seed(123)\n >>> data = np.random.uniform(0, 1, (2, 3)).astype(\"float32\")\n >>> result = JTensor.from_ndarray(data)\n >>> expected_storage = np.array([[0.69646919, 0.28613934, 0.22685145], [0.55131477, 0.71946895, 0.42310646]])\n >>> expected_shape = np.array([2, 3])\n >>> np.testing.assert_allclose(result.storage, expected_storage, rtol=1e-6, atol=1e-6)\n >>> np.testing.assert_allclose(result.shape, expected_shape)\n >>> data_back = result.to_ndarray()\n >>> (data == data_back).all()\n True\n >>> tensor1 = callBigDlFunc(\"float\", \"testTensor\", JTensor.from_ndarray(data)) # noqa\n >>> array_from_tensor = tensor1.to_ndarray()\n >>> (array_from_tensor == data).all()\n True"}
{"_id": "q_905", "text": "get label as ndarray from ImageFeature"}
{"_id": "q_906", "text": "Read parquet file as DistributedImageFrame"}
{"_id": "q_907", "text": "write ImageFrame as parquet file"}
{"_id": "q_908", "text": "get image from ImageFrame"}
{"_id": "q_909", "text": "get image list from ImageFrame"}
{"_id": "q_910", "text": "get label rdd from ImageFrame"}
{"_id": "q_911", "text": "get prediction rdd from ImageFrame"}
{"_id": "q_912", "text": "Generates output predictions for the input samples,\n processing the samples in a batched way.\n\n # Arguments\n x: the input data, as a Numpy array or list of Numpy array for local mode.\n as RDD[Sample] for distributed mode\n is_distributed: used to control run in local or cluster. the default value is False\n # Returns\n A Numpy array or RDD[Sample] of predictions."}
{"_id": "q_913", "text": "Apply the transformer to the images in \"inputCol\" and store the transformed result\n into \"outputCols\""}
{"_id": "q_914", "text": "Save a Keras model definition to JSON with given path"}
{"_id": "q_915", "text": "Define a convnet model in Keras 1.2.2"}
{"_id": "q_916", "text": "Set weights for this layer\n\n :param weights: a list of numpy arrays which represent weight and bias\n :return:\n\n >>> linear = Linear(3,2)\n creating: createLinear\n >>> linear.set_weights([np.array([[1,2,3],[4,5,6]]), np.array([7,8])])\n >>> weights = linear.get_weights()\n >>> weights[0].shape == (2,3)\n True\n >>> np.testing.assert_allclose(weights[0][0], np.array([1., 2., 3.]))\n >>> np.testing.assert_allclose(weights[1], np.array([7., 8.]))\n >>> relu = ReLU()\n creating: createReLU\n >>> from py4j.protocol import Py4JJavaError\n >>> try:\n ... relu.set_weights([np.array([[1,2,3],[4,5,6]]), np.array([7,8])])\n ... except Py4JJavaError as err:\n ... print(err.java_exception)\n ...\n java.lang.IllegalArgumentException: requirement failed: this layer does not have weight/bias\n >>> relu.get_weights()\n The layer does not have weight/bias\n >>> add = Add(2)\n creating: createAdd\n >>> try:\n ... add.set_weights([np.array([7,8]), np.array([1,2])])\n ... except Py4JJavaError as err:\n ... print(err.java_exception)\n ...\n java.lang.IllegalArgumentException: requirement failed: the number of input weight/bias is not consistant with number of weight/bias of this layer, number of input 1, number of output 2\n >>> cAdd = CAdd([4, 1])\n creating: createCAdd\n >>> cAdd.set_weights(np.ones([4, 1]))\n >>> (cAdd.get_weights()[0] == np.ones([4, 1])).all()\n True"}
{"_id": "q_917", "text": "Load a pre-trained Torch model.\n\n :param path: The path containing the pre-trained model.\n :return: A pre-trained model."}
{"_id": "q_918", "text": "Load a pre-trained Keras model.\n\n :param json_path: The json path containing the keras model definition.\n :param hdf5_path: The HDF5 path containing the pre-trained keras model weights with or without the model architecture.\n :return: A bigdl model."}
{"_id": "q_919", "text": "Create a python Criterion by a java criterion object\n\n :param jcriterion: A java criterion object which created by Py4j\n :return: a criterion."}
{"_id": "q_920", "text": "The file path can be stored in a local file system, HDFS, S3,\n or any Hadoop-supported file system."}
{"_id": "q_921", "text": "Load IMDB dataset\n Transform input data into an RDD of Sample"}
{"_id": "q_922", "text": "Define a recurrent convolutional model in Keras 1.2.2"}
{"_id": "q_923", "text": "Return a list of shape tuples if there are multiple inputs.\n Return one shape tuple otherwise."}
{"_id": "q_924", "text": "Return a list of shape tuples if there are multiple outputs.\n Return one shape tuple otherwise."}
{"_id": "q_925", "text": "Get mnist dataset with features and label as ndarray.\n Data would be downloaded automatically if it doesn't present at the specific location.\n\n :param data_type: \"train\" for training data and \"test\" for testing data.\n :param location: Location to store mnist dataset.\n :return: (features: ndarray, label: ndarray)"}
{"_id": "q_926", "text": "Parse or download movielens 1m data if train_dir is empty.\n\n :param data_dir: The directory storing the movielens data\n :return: a 2D numpy array with user index and item index in each row"}
{"_id": "q_927", "text": "Get and return the jar path for bigdl if exists."}
{"_id": "q_928", "text": "Export variable tensors from the checkpoint files.\n\n :param checkpoint_path: tensorflow checkpoint path\n :return: dictionary of tensor. The key is the variable name and the value is the numpy"}
{"_id": "q_929", "text": "Save a variable dictionary to a Java object file, so it can be read by BigDL\n\n :param tensors: tensor dictionary\n :param target_path: where is the Java object file store\n :param bigdl_type: model variable numeric type\n :return: nothing"}
{"_id": "q_930", "text": "Expand and tile tensor along given axis\n\n Args:\n units: tf tensor with dimensions [batch_size, time_steps, n_input_features]\n axis: axis along which expand and tile. Must be 1 or 2"}
{"_id": "q_931", "text": "Simple attention without any conditions.\n\n Computes weighted sum of memory elements."}
{"_id": "q_932", "text": "Computes weighted sum of inputs conditioned on state"}
{"_id": "q_933", "text": "Computes BLEU score of translated segments against one or more references.\n\n Args:\n reference_corpus: list of lists of references for each translation. Each\n reference should be tokenized into a list of tokens.\n translation_corpus: list of translations to score. Each translation\n should be tokenized into a list of tokens.\n max_order: Maximum n-gram order to use when computing BLEU score.\n smooth: Whether or not to apply Lin et al. 2004 smoothing.\n\n Returns:\n 3-Tuple with the BLEU score, n-gram precisions, geometric mean of n-gram\n precisions and brevity penalty."}
{"_id": "q_934", "text": "Dump the trained weights from a model to a HDF5 file."}
{"_id": "q_935", "text": "Convert labels to one-hot vectors for multi-class multi-label classification\n\n Args:\n labels: list of samples where each sample is a class or a list of classes which sample belongs with\n classes: array of classes' names\n\n Returns:\n 2d array with one-hot representation of given samples"}
{"_id": "q_936", "text": "Checks existence of the model file, loads the model if the file exists"}
{"_id": "q_937", "text": "Extract values of momentum variables from optimizer\n\n Returns:\n optimizer's `rho` or `beta_1`"}
{"_id": "q_938", "text": "Update graph variables setting giving `learning_rate` and `momentum`\n\n Args:\n learning_rate: learning rate value to be set in graph (set if not None)\n momentum: momentum value to be set in graph (set if not None)\n\n Returns:\n None"}
{"_id": "q_939", "text": "Converts word to a tuple of symbols, optionally converts it to lowercase\n and adds capitalization label.\n\n Args:\n word: input word\n to_lower: whether to lowercase\n append_case: whether to add case mark\n ('<FIRST_UPPER>' for first capital and '<ALL_UPPER>' for all caps)\n\n Returns:\n a preprocessed word"}
{"_id": "q_940", "text": "Number of convolutional layers stacked on top of each other\n\n Args:\n units: a tensorflow tensor with dimensionality [None, n_tokens, n_features]\n n_hidden_list: list with number of hidden units at the ouput of each layer\n filter_width: width of the kernel in tokens\n use_batch_norm: whether to use batch normalization between layers\n use_dilation: use power of 2 dilation scheme [1, 2, 4, 8 .. ] for layers 1, 2, 3, 4 ...\n training_ph: boolean placeholder determining whether is training phase now or not.\n It is used only for batch normalization to determine whether to use\n current batch average (std) or memory stored average (std)\n add_l2_losses: whether to add l2 losses on network kernels to\n tf.GraphKeys.REGULARIZATION_LOSSES or not\n\n Returns:\n units: tensor at the output of the last convolutional layer"}
{"_id": "q_941", "text": "Highway convolutional network. Skip connection with gating\n mechanism.\n\n Args:\n units: a tensorflow tensor with dimensionality [None, n_tokens, n_features]\n n_hidden_list: list with number of hidden units at the output of each layer\n filter_width: width of the kernel in tokens\n use_batch_norm: whether to use batch normalization between layers\n use_dilation: use power of 2 dilation scheme [1, 2, 4, 8 .. ] for layers 1, 2, 3, 4 ...\n training_ph: boolean placeholder determining whether is training phase now or not.\n It is used only for batch normalization to determine whether to use\n current batch average (std) or memory stored average (std)\n Returns:\n units: tensor at the output of the last convolutional layer\n with dimensionality [None, n_tokens, n_hidden_list[-1]]"}
{"_id": "q_942", "text": "Token embedding layer. Create matrix of for token embeddings.\n Can be initialized with given matrix (for example pre-trained\n with word2ve algorithm\n\n Args:\n token_indices: token indices tensor of type tf.int32\n token_embedding_matrix: matrix of embeddings with dimensionality\n [n_tokens, embeddings_dimension]\n n_tokens: total number of unique tokens\n token_embedding_dim: dimensionality of embeddings, typical 100..300\n name: embedding matrix name (variable name)\n trainable: whether to set the matrix trainable or not\n\n Returns:\n embedded_tokens: tf tensor of size [B, T, E], where B - batch size\n T - number of tokens, E - token_embedding_dim"}
{"_id": "q_943", "text": "Fast CuDNN GRU implementation\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n\n n_hidden: dimensionality of hidden state\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n seq_lengths: tensor of sequence lengths with dimension [B]\n n_layers: number of layers\n input_initial_h: initial hidden state, tensor\n name: name of the variable scope to use\n reuse:whether to reuse already initialized variable\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H]"}
{"_id": "q_944", "text": "CuDNN Compatible GRU implementation.\n It should be used to load models saved with CudnnGRUCell to run on CPU.\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n\n n_hidden: dimensionality of hidden state\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n seq_lengths: tensor of sequence lengths with dimension [B]\n n_layers: number of layers\n input_initial_h: initial hidden state, tensor\n name: name of the variable scope to use\n reuse:whether to reuse already initialized variable\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H]"}
{"_id": "q_945", "text": "CuDNN Compatible LSTM implementation.\n It should be used to load models saved with CudnnLSTMCell to run on CPU.\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n n_hidden: dimensionality of hidden state\n n_layers: number of layers\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n seq_lengths: tensor of sequence lengths with dimension [B]\n initial_h: optional initial hidden state, masks trainable_initial_states\n if provided\n initial_c: optional initial cell state, masks trainable_initial_states\n if provided\n name: name of the variable scope to use\n reuse:whether to reuse already initialized variable\n\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H]\n where H - number of hidden units\n c_last - last cell state, tf.Tensor with dimensionality [B x H]\n where H - number of hidden units"}
{"_id": "q_946", "text": "Fast CuDNN Bi-GRU implementation\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n n_hidden: dimensionality of hidden state\n seq_lengths: number of tokens in each sample in the batch\n n_layers: number of layers\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n name: name of the variable scope to use\n reuse:whether to reuse already initialized variable\n\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H * 2]\n where H - number of hidden units"}
{"_id": "q_947", "text": "Fast CuDNN Bi-LSTM implementation\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n n_hidden: dimensionality of hidden state\n seq_lengths: number of tokens in each sample in the batch\n n_layers: number of layers\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n name: name of the variable scope to use\n reuse:whether to reuse already initialized variable\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H * 2]\n where H - number of hidden units\n c_last - last cell state, tf.Tensor with dimensionality [B x H * 2]\n where H - number of hidden units"}
{"_id": "q_948", "text": "Fast CuDNN Stacked Bi-GRU implementation\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n n_hidden: dimensionality of hidden state\n seq_lengths: number of tokens in each sample in the batch\n n_stacks: number of stacked Bi-GRU\n keep_prob: dropout keep_prob between Bi-GRUs (intra-layer dropout)\n concat_stacked_outputs: return last Bi-GRU output or concat outputs from every Bi-GRU,\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n name: name of the variable scope to use\n reuse: whether to reuse already initialized variable\n\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x ((n_hidden * 2) * n_stacks)]"}
{"_id": "q_949", "text": "Dropout with the same drop mask for all fixed_mask_dims\n\n Args:\n units: a tensor, usually with shapes [B x T x F], where\n B - batch size\n T - tokens dimension\n F - feature dimension\n keep_prob: keep probability\n fixed_mask_dims: in these dimensions the mask will be the same\n\n Returns:\n dropped units tensor"}
{"_id": "q_950", "text": "Builds the network using Keras."}
{"_id": "q_951", "text": "Builds word-level network"}
{"_id": "q_952", "text": "Makes predictions on a single batch\n\n Args:\n data: a batch of word sequences together with additional inputs\n return_indexes: whether to return tag indexes in vocabulary or tags themselves\n\n Returns:\n a batch of label sequences"}
{"_id": "q_953", "text": "Transforms a sentence to Numpy array, which will be the network input.\n\n Args:\n sent: input sentence\n bucket_length: the width of the bucket\n\n Returns:\n A 3d array, answer[i][j][k] contains the index of k-th letter\n in j-th word of i-th input sentence."}
{"_id": "q_954", "text": "Calculate BLEU score\n\n Parameters:\n y_true: list of reference tokens\n y_predicted: list of query tokens\n weights: n-gram weights\n smoothing_function: SmoothingFunction\n auto_reweigh: Option to re-normalize the weights uniformly\n penalty: either enable brevity penalty or not\n\n Return:\n BLEU score"}
{"_id": "q_955", "text": "Verify signature certificate URL against Amazon Alexa requirements.\n\n Each call of Agent passes incoming utterances batch through skills filter,\n agent skills, skills processor. Batch of dialog IDs can be provided, in\n other case utterances indexes in incoming batch are used as dialog IDs.\n\n Args:\n url: Signature certificate URL from SignatureCertChainUrl HTTP header.\n\n Returns:\n result: True if verification was successful, False if not."}
{"_id": "q_956", "text": "Extracts pycrypto X509 objects from SSL certificates chain string.\n\n Args:\n certs_txt: SSL certificates chain string.\n\n Returns:\n result: List of pycrypto X509 objects."}
{"_id": "q_957", "text": "Verifies if Amazon and additional certificates creates chain of trust to a root CA.\n\n Args:\n certs_chain: List of pycrypto X509 intermediate certificates from signature chain URL.\n amazon_cert: Pycrypto X509 Amazon certificate.\n\n Returns:\n result: True if verification was successful, False if not."}
{"_id": "q_958", "text": "Verifies Alexa request signature.\n\n Args:\n amazon_cert: Pycrypto X509 Amazon certificate.\n signature: Base64 decoded Alexa request signature from Signature HTTP header.\n request_body: full HTTPS request body\n Returns:\n result: True if verification was successful, False if not."}
{"_id": "q_959", "text": "Conducts series of Alexa SSL certificate verifications against Amazon Alexa requirements.\n\n Args:\n signature_chain_url: Signature certificate URL from SignatureCertChainUrl HTTP header.\n Returns:\n result: Amazon certificate if verification was successful, None if not."}
{"_id": "q_960", "text": "Returns list of Telegram compatible states of the RichMessage\n instance nested controls.\n\n Returns:\n telegram_controls: Telegram representation of RichMessage instance nested\n controls."}
{"_id": "q_961", "text": "DeepPavlov console configuration utility."}
{"_id": "q_962", "text": "Constructs function encapsulated in the graph."}
{"_id": "q_963", "text": "Calculate accuracy in terms of absolute coincidence\n\n Args:\n y_true: array of true values\n y_predicted: array of predicted values\n\n Returns:\n portion of absolutely coincidental samples"}
{"_id": "q_964", "text": "Builds agent based on PatternMatchingSkill and HighestConfidenceSelector.\n\n This is agent building tutorial. You can use this .py file to check how hello-bot agent works.\n\n Returns:\n agent: Agent capable of handling several simple greetings."}
{"_id": "q_965", "text": "Takes an array of integers and transforms it\n to an array of one-hot encoded vectors"}
{"_id": "q_966", "text": "Populate settings directory with default settings files\n\n Args:\n force: if ``True``, replace existing settings files with default ones\n\n Returns:\n ``True`` if any files were copied and ``False`` otherwise"}
{"_id": "q_967", "text": "Load model parameters from self.load_path"}
{"_id": "q_968", "text": "Get train operation for given loss\n\n Args:\n loss: loss, tf tensor or scalar\n learning_rate: scalar or placeholder.\n clip_norm: clip gradients norm by clip_norm.\n learnable_scopes: which scopes are trainable (None for all).\n optimizer: instance of tf.train.Optimizer, default Adam.\n **kwargs: parameters passed to tf.train.Optimizer object\n (scalars or placeholders).\n\n Returns:\n train_op"}
{"_id": "q_969", "text": "Finds all dictionary words in d-window from word"}
{"_id": "q_970", "text": "Initiates self-destruct timer."}
{"_id": "q_971", "text": "Infers DeepPavlov agent with raw user input extracted from Alexa request.\n\n Args:\n utterance: Raw user input extracted from Alexa request.\n Returns:\n response: DeepPavlov agent response."}
{"_id": "q_972", "text": "Populates generated response with additional data conforming Alexa response specification.\n\n Args:\n response: Raw user input extracted from Alexa request.\n request: Alexa request.\n Returns:\n response: Response conforming Alexa response specification."}
{"_id": "q_973", "text": "Handles LaunchRequest Alexa request.\n\n Args:\n request: Alexa request.\n Returns:\n response: \"response\" part of response dict conforming Alexa specification."}
{"_id": "q_974", "text": "Handles all unsupported types of Alexa requests. Returns standard message.\n\n Args:\n request: Alexa request.\n Returns:\n response: \"response\" part of response dict conforming Alexa specification."}
{"_id": "q_975", "text": "Calculates perplexity by loss\n\n Args:\n losses: list of numpy arrays of model losses\n\n Returns:\n perplexity : float"}
{"_id": "q_976", "text": "Build and return the model described in corresponding configuration file."}
{"_id": "q_977", "text": "Start interaction with the model described in corresponding configuration file."}
{"_id": "q_978", "text": "Reads input file in CONLL-U format\n\n Args:\n infile: a path to a file\n word_column: column containing words (default=1)\n pos_column: column containing part-of-speech labels (default=3)\n tag_column: column containing fine-grained tags (default=5)\n max_sents: maximal number of sents to read\n read_only_words: whether to read only words\n\n Returns:\n a list of sentences. Each item contains a word sequence and a tag sequence, which is ``None``\n in case ``read_only_words = True``"}
{"_id": "q_979", "text": "Decorator for metric registration."}
{"_id": "q_980", "text": "Find the best value according to given losses\n\n Args:\n values: list of considered values\n losses: list of obtained loss values corresponding to `values`\n max_loss_div: maximal divergence of loss to be considered significant\n min_val_div: minimum divergence of loss to be considered significant\n\n Returns:\n best value divided by `min_val_div`"}
{"_id": "q_981", "text": "Embed one text sample\n\n Args:\n tokens: tokenized text sample\n mean: whether to return mean embedding of tokens per sample\n\n Returns:\n list of embedded tokens or array of mean values"}
{"_id": "q_982", "text": "parses requirements from requirements.txt"}
{"_id": "q_983", "text": "Exports a TF-Hub module"}
{"_id": "q_984", "text": "Make an agent\n\n Returns:\n agent: created Ecommerce agent"}
{"_id": "q_985", "text": "Parse parameters and run ms bot framework"}
{"_id": "q_986", "text": "Download a file from URL to one or several target locations\n\n Args:\n dest_file_path: path or list of paths to the file destination files (including file name)\n source_url: the source URL\n force_download: download file if it already exists, or not"}
{"_id": "q_987", "text": "Simple tar archive extractor\n\n Args:\n file_path: path to the tar file to be extracted\n extract_folder: folder to which the files will be extracted"}
{"_id": "q_988", "text": "Download and extract .tar.gz or .gz file to one or several target locations.\n The archive is deleted if extraction was successful.\n\n Args:\n url: URL for file downloading\n download_path: path to the directory where downloaded file will be stored\n until the end of extraction\n extract_paths: path or list of paths where contents of archive will be extracted"}
{"_id": "q_989", "text": "Updates dict recursively\n\n You need to use this function to update dictionary if depth of editing_dict is more then 1\n\n Args:\n editable_dict: dictionary, that will be edited\n editing_dict: dictionary, that contains edits\n Returns:\n None"}
{"_id": "q_990", "text": "Given a URL, set or replace a query parameter and return the modified URL.\n\n Args:\n url: a given URL\n param_name: the parameter name to add\n param_value: the parameter value\n Returns:\n URL with the added parameter"}
{"_id": "q_991", "text": "Returns Amazon Alexa compatible state of the PlainText instance.\n\n Creating Amazon Alexa response blank with populated \"outputSpeech\" and\n \"card sections.\n\n Returns:\n response: Amazon Alexa representation of PlainText state."}
{"_id": "q_992", "text": "Returns json compatible state of the Button instance.\n\n Returns:\n control_json: Json representation of Button state."}
{"_id": "q_993", "text": "Returns json compatible state of the ButtonsFrame instance.\n\n Returns json compatible state of the ButtonsFrame instance including\n all nested buttons.\n\n Returns:\n control_json: Json representation of ButtonsFrame state."}
{"_id": "q_994", "text": "Returns MS Bot Framework compatible state of the ButtonsFrame instance.\n\n Creating MS Bot Framework activity blank with RichCard in \"attachments\". RichCard\n is populated with CardActions corresponding buttons embedded in ButtonsFrame.\n\n Returns:\n control_json: MS Bot Framework representation of ButtonsFrame state."}
{"_id": "q_995", "text": "Calculates recall at k ranking metric.\n\n Args:\n y_true: Labels. Not used in the calculation of the metric.\n y_predicted: Predictions.\n Each prediction contains ranking score of all ranking candidates for the particular data sample.\n It is supposed that the ranking score for the true candidate goes first in the prediction.\n\n Returns:\n Recall at k"}
{"_id": "q_996", "text": "Recursively apply config's variables values to its property"}
{"_id": "q_997", "text": "Convert relative paths to absolute with resolving user directory."}
{"_id": "q_998", "text": "Builds and returns the Component from corresponding dictionary of parameters."}
{"_id": "q_999", "text": "Thread run method implementation."}
{"_id": "q_1000", "text": "Deletes Conversation instance.\n\n Args:\n conversation_key: Conversation key."}
{"_id": "q_1001", "text": "Conducts cleanup of periodical certificates with expired validation."}
{"_id": "q_1002", "text": "Conducts series of Alexa request verifications against Amazon Alexa requirements.\n\n Args:\n signature_chain_url: Signature certificate URL from SignatureCertChainUrl HTTP header.\n signature: Base64 decoded Alexa request signature from Signature HTTP header.\n request_body: full HTTPS request body\n Returns:\n result: True if verification was successful, False if not."}
{"_id": "q_1003", "text": "Extract full regularization path explored during lambda search from glm model.\n\n :param model: source lambda search model"}
{"_id": "q_1004", "text": "Create a custom GLM model using the given coefficients.\n\n Needs to be passed source model trained on the dataset to extract the dataset information from.\n\n :param model: source model, used for extracting dataset information\n :param coefs: dictionary containing model coefficients\n :param threshold: (optional, only for binomial) decision threshold used for classification"}
{"_id": "q_1005", "text": "Determine if the H2O cluster is running or not.\n\n :returns: True if the cluster is up; False otherwise"}
{"_id": "q_1006", "text": "List all jobs performed by the cluster."}
{"_id": "q_1007", "text": "Return the list of all known timezones."}
{"_id": "q_1008", "text": "Update information in this object from another H2OCluster instance.\n\n :param H2OCluster other: source of the new information for this object."}
{"_id": "q_1009", "text": "Parameters for metalearner algorithm\n\n Type: ``dict`` (default: ``None``).\n Example: metalearner_gbm_params = {'max_depth': 2, 'col_sample_rate': 0.3}"}
{"_id": "q_1010", "text": "Repeatedly test a function waiting for it to return True.\n\n Arguments:\n test_func -- A function that will be run repeatedly\n error -- A function that will be run to produce an error message\n it will be called with (node, timeTakenSecs, numberOfRetries)\n OR\n -- A string that will be interpolated with a dictionary of\n { 'timeTakenSecs', 'numberOfRetries' }\n timeoutSecs -- How long in seconds to keep trying before declaring a failure\n retryDelaySecs -- How long to wait between retry attempts"}
{"_id": "q_1011", "text": "Return the summary for a single column for a single Frame in the h2o cluster."}
{"_id": "q_1012", "text": "Delete a frame on the h2o cluster, given its key."}
{"_id": "q_1013", "text": "Return a model builder or all of the model builders known to the\n h2o cluster. The model builders are contained in a dictionary\n called \"model_builders\" at the top level of the result. The\n dictionary maps algorithm names to parameters lists. Each of the\n parameters contains all the metadata required by a client to\n present a model building interface to the user.\n\n If parameters = True, return the parameters."}
{"_id": "q_1014", "text": "Check a dictionary of model builder parameters on the h2o cluster \n using the given algorithm and model parameters."}
{"_id": "q_1015", "text": "Score a model on the h2o cluster on the given Frame and return only the model metrics."}
{"_id": "q_1016", "text": "ModelMetrics list."}
{"_id": "q_1017", "text": "Create a new reservation for count instances"}
{"_id": "q_1018", "text": "Terminate all the instances given by their ids"}
{"_id": "q_1019", "text": "Reboot all the instances given by their ids"}
{"_id": "q_1020", "text": "Return fully qualified function name.\n\n This method will attempt to find \"full name\" of the given function object. This full name is either of\n the form \"<class name>.<method name>\" if the function is a class method, or \"<module name>.<func name>\"\n if it's a regular function. Thus, this is an attempt to back-port func.__qualname__ to Python 2.\n\n :param func: a function object.\n\n :returns: string with the function's full name as explained above."}
{"_id": "q_1021", "text": "Return function's declared arguments as a string.\n\n For example for this function it returns \"func, highlight=None\"; for the ``_wrap`` function it returns\n \"text, wrap_at=120, indent=4\". This should usually coincide with the function's declaration (the part\n which is inside the parentheses)."}
{"_id": "q_1022", "text": "Return piece of text, wrapped around if needed.\n\n :param text: text that may be too long and then needs to be wrapped.\n :param wrap_at: the maximum line length.\n :param indent: number of spaces to prepend to all subsequent lines after the first."}
{"_id": "q_1023", "text": "Wait until job's completion."}
{"_id": "q_1024", "text": "Fit an H2O model as part of a scikit-learn pipeline or grid search.\n\n A warning will be issued if a caller other than sklearn attempts to use this method.\n\n :param H2OFrame X: An H2OFrame consisting of the predictor variables.\n :param H2OFrame y: An H2OFrame consisting of the response variable.\n :param params: Extra arguments.\n :returns: The current instance of H2OEstimator for method chaining."}
{"_id": "q_1025", "text": "Obtain parameters for this estimator.\n\n Used primarily for sklearn Pipelines and sklearn grid search.\n\n :param deep: If True, return parameters of all sub-objects that are estimators.\n\n :returns: A dict of parameters"}
{"_id": "q_1026", "text": "This function is written to remove sandbox directories if they exist under the\n parent_dir.\n\n :param parent_dir: string denoting full parent directory path\n :param dir_name: string denoting directory path which could be a sandbox\n :return: None"}
{"_id": "q_1027", "text": "Look at the stdout log and figure out which port the JVM chose.\n\n If successful, port number is stored in self.port; otherwise the\n program is terminated. This call is blocking, and will wait for\n up to 30s for the server to start up."}
{"_id": "q_1028", "text": "Look at the stdout log and wait until the cluster of proper size is formed.\n This call is blocking.\n Exit if this fails.\n\n :param nodes_per_cloud:\n :return none"}
{"_id": "q_1029", "text": "Normal node shutdown.\n Ignore failures for now.\n\n :return none"}
{"_id": "q_1030", "text": "Return an ip to use to talk to this cluster."}
{"_id": "q_1031", "text": "Return a port to use to talk to this cluster."}
{"_id": "q_1032", "text": "Mean absolute error regression loss.\n\n :param y_actual: H2OFrame of actual response.\n :param y_predicted: H2OFrame of predicted response.\n :param weights: (Optional) sample weights\n :returns: mean absolute error loss (best is 0.0)."}
{"_id": "q_1033", "text": "Explained variance regression score function.\n\n :param y_actual: H2OFrame of actual response.\n :param y_predicted: H2OFrame of predicted response.\n :param weights: (Optional) sample weights\n :returns: the explained variance score."}
{"_id": "q_1034", "text": "Assert that string variable matches the provided regular expression.\n\n :param v: variable to check.\n :param regex: regular expression to check against (can be either a string, or compiled regexp)."}
{"_id": "q_1035", "text": "Assert that variable satisfies the provided condition.\n\n :param v: variable to check. Its value is only used for error reporting.\n :param bool cond: condition that must be satisfied. Should be somehow related to the variable ``v``.\n :param message: message string to use instead of the default."}
{"_id": "q_1036", "text": "Magic variable name retrieval.\n\n This function is designed as a helper for the assert_is_type() function. Typically such an assertion is used like this::\n\n assert_is_type(num_threads, int)\n\n If the variable `num_threads` turns out to be non-integer, we would like to raise an exception such as\n\n H2OTypeError(\"`num_threads` is expected to be integer, but got <str>\")\n\n and in order to compose an error message like that, we need to know that the variable that was passed to\n assert_is_type() carries the name \"num_threads\". Naturally, the variable itself knows nothing about that.\n\n This is where this function comes in: we walk up the stack trace until the first frame outside of this\n file, find the original line that called the assert_is_type() function, and extract the variable name from\n that line. This is slightly fragile; in particular, we assume that there is at most one assert_is_type statement per line,\n and that the statement does not spill over multiple lines, etc."}
{"_id": "q_1037", "text": "Return True if the variable is of the specified type, and False otherwise.\n\n :param var: variable to check\n :param vtype: expected variable's type"}
{"_id": "q_1038", "text": "Attempt to find the source code of the ``lambda_fn`` within the string ``src``."}
{"_id": "q_1039", "text": "Return True if the variable does not match any of the types, and False otherwise."}
{"_id": "q_1040", "text": "Retrieve the config as a dictionary of key-value pairs."}
{"_id": "q_1041", "text": "Return possible locations for the .h2oconfig file, one at a time."}
{"_id": "q_1042", "text": "Start the progress bar, and return only when the progress reaches 100%.\n\n :param progress_fn: the executor function (or a generator). This function should take no arguments\n and return either a single number -- the current progress level, or a tuple (progress level, delay),\n where delay is the time interval for when the progress should be checked again. This function may at\n any point raise the ``StopIteration(message)`` exception, which will interrupt the progress bar,\n display the ``message`` in red font, and then re-raise the exception.\n :raises StopIteration: if the job is interrupted. The reason for interruption is provided in the exception's\n message. The message will say \"cancelled\" if the job was interrupted by the user by pressing Ctrl+C."}
{"_id": "q_1043", "text": "Save the current model progress into ``self._progress_data``, and update ``self._next_poll_time``.\n\n :param res: tuple (progress level, poll delay).\n :param now: current timestamp."}
{"_id": "q_1044", "text": "Compute t0, x0, v0, ve."}
{"_id": "q_1045", "text": "Estimate the moment when the underlying process is expected to reach completion.\n\n This function should only return future times. Also this function is not allowed to return time moments less\n than self._next_poll_time if the actual progress is below 100% (this is because we won't know that the\n process has finished until we poll the external progress function)."}
{"_id": "q_1046", "text": "Determine when to query the progress status next.\n\n This function is used if the external progress function did not return time interval for when it should be\n queried next."}
{"_id": "q_1047", "text": "Return the projected time when progress level `x_target` will be reached.\n\n Since the underlying progress model is nonlinear, we need to use Newton's method to find a numerical solution\n to the equation x(t) = x_target."}
{"_id": "q_1048", "text": "Print the rendered string to the stdout."}
{"_id": "q_1049", "text": "Initial rendering stage, done in order to compute widths of all widgets."}
{"_id": "q_1050", "text": "Find current STDOUT's width, in characters."}
{"_id": "q_1051", "text": "Returns encoding map as an object that maps 'column_name' -> 'frame_with_encoding_map_for_this_column_name'\n\n :param frame frame: An H2OFrame object with which to create the target encoding map"}
{"_id": "q_1052", "text": "Reload frame information from the backend H2O server."}
{"_id": "q_1053", "text": "The type for the given column.\n\n :param col: either a name, or an index of the column to look up\n :returns: type of the column, one of: ``str``, ``int``, ``real``, ``enum``, ``time``, ``bool``.\n :raises H2OValueError: if such column does not exist in the frame."}
{"_id": "q_1054", "text": "Extract columns of the specified type from the frame.\n\n :param str coltype: A character string indicating which column type to filter by. This must be\n one of the following:\n\n - ``\"numeric\"`` - Numeric, but not categorical or time\n - ``\"categorical\"`` - Integer, with a categorical/factor String mapping\n - ``\"string\"`` - String column\n - ``\"time\"`` - Long msec since the Unix Epoch - with a variety of display/parse options\n - ``\"uuid\"`` - UUID\n - ``\"bad\"`` - No non-NA rows (triple negative! all NAs or zero rows)\n\n :returns: list of indices of columns that have the requested type"}
{"_id": "q_1055", "text": "Display summary information about the frame.\n\n Summary includes min/mean/max/sigma and other rollup data.\n\n :param bool return_data: Return a dictionary of the summary output"}
{"_id": "q_1056", "text": "Generate an in-depth description of this H2OFrame.\n\n This will print to the console the dimensions of the frame; names/types/summary statistics for each column;\n and finally first ten rows of the frame.\n\n :param bool chunk_summary: Retrieve the chunk summary along with the distribution summary"}
{"_id": "q_1057", "text": "Get the factor levels.\n\n :returns: A list of lists, one list per column, of levels."}
{"_id": "q_1058", "text": "Change names of columns in the frame.\n\n Dict key is an index or name of the column whose name is to be set.\n Dict value is the new name of the column.\n\n :param columns: dict-like transformations to apply to the column names"}
{"_id": "q_1059", "text": "Test whether elements of an H2OFrame are contained in the ``item``.\n\n :param items: An item or a list of items to compare the H2OFrame against.\n\n :returns: An H2OFrame of 0s and 1s showing whether each element in the original H2OFrame is contained in item."}
{"_id": "q_1060", "text": "Build a fold assignments column for cross-validation.\n\n Rows are assigned a fold according to the current row number modulo ``n_folds``.\n\n :param int n_folds: An integer specifying the number of validation sets to split the training data into.\n :returns: A single-column H2OFrame with the fold assignments."}
{"_id": "q_1061", "text": "Compactly display the internal structure of an H2OFrame."}
{"_id": "q_1062", "text": "Obtain the dataset as a python-local object.\n\n :param bool use_pandas: If True (default) then return the H2OFrame as a pandas DataFrame (requires that the\n ``pandas`` library was installed). If False, then return the contents of the H2OFrame as plain nested\n list, in a row-wise order.\n :param bool header: If True (default), then column names will be appended as the first row in list\n\n :returns: A python object (a list of lists of strings, each list is a row, if use_pandas=False, otherwise\n a pandas DataFrame) containing this H2OFrame instance's data."}
{"_id": "q_1063", "text": "Compute quantiles.\n\n :param List[float] prob: list of probabilities for which quantiles should be computed.\n :param str combine_method: for even samples this setting determines how to combine quantiles. This can be\n one of ``\"interpolate\"``, ``\"average\"``, ``\"low\"``, ``\"high\"``.\n :param weights_column: optional weights for each row. If not given, all rows are assumed to have equal\n importance. This parameter can be either the name of column containing the observation weights in\n this frame, or a single-column separate H2OFrame of observation weights.\n\n :returns: a new H2OFrame containing the quantiles and probabilities."}
{"_id": "q_1064", "text": "Append data to this frame column-wise.\n\n :param H2OFrame data: append columns of frame ``data`` to the current frame. You can also cbind a number,\n in which case it will get converted into a constant column.\n\n :returns: new H2OFrame with all frames in ``data`` appended column-wise."}
{"_id": "q_1065", "text": "Split a frame into distinct subsets of size determined by the given ratios.\n\n The number of subsets is always 1 more than the number of ratios given. Note that\n this does not give an exact split. H2O is designed to be efficient on big data\n using a probabilistic splitting method rather than an exact split. For example\n when specifying a split of 0.75/0.25, H2O will produce a test/train split with\n an expected value of 0.75/0.25 rather than exactly 0.75/0.25. On small datasets,\n the sizes of the resulting splits will deviate from the expected value more than\n on big data, where they will be very close to exact.\n\n :param List[float] ratios: The fractions of rows for each split.\n :param List[str] destination_frames: The names of the split frames.\n :param int seed: seed for the random number generator\n\n :returns: A list of H2OFrames"}
{"_id": "q_1066", "text": "Return a new Frame that fills NA along a given axis and along a given direction with a maximum fill length\n\n :param method: ``\"forward\"`` or ``\"backward\"``\n :param axis: 0 for columnar-wise or 1 for row-wise fill\n :param maxlen: Max number of consecutive NA's to fill\n \n :return:"}
{"_id": "q_1067", "text": "Impute missing values into the frame, modifying it in-place.\n\n :param int column: Index of the column to impute, or -1 to impute the entire frame.\n :param str method: The method of imputation: ``\"mean\"``, ``\"median\"``, or ``\"mode\"``.\n :param str combine_method: When the method is ``\"median\"``, this setting dictates how to combine quantiles\n for even samples. One of ``\"interpolate\"``, ``\"average\"``, ``\"low\"``, ``\"high\"``.\n :param by: The list of columns to group on.\n :param H2OFrame group_by_frame: Impute the values with this pre-computed grouped frame.\n :param List values: The list of impute values, one per column. None indicates to skip the column.\n\n :returns: A list of values used in the imputation or the group-by result used in imputation."}
{"_id": "q_1068", "text": "Insert missing values into the current frame, modifying it in-place.\n\n Randomly replaces a user-specified fraction of entries in an H2O dataset with missing\n values.\n\n :param float fraction: A number between 0 and 1 indicating the fraction of entries to replace with missing.\n :param int seed: The seed for the random number generator used to determine which values to make missing.\n\n :returns: the original H2OFrame with missing values inserted."}
{"_id": "q_1069", "text": "Compute the variance-covariance matrix of one or two H2OFrames.\n\n :param H2OFrame y: If this parameter is given, then a covariance matrix between the columns of the target\n frame and the columns of ``y`` is computed. If this parameter is not provided then the covariance matrix\n of the target frame is returned. If target frame has just a single column, then return the scalar variance\n instead of the matrix. Single rows are treated as single columns.\n :param str use: A string indicating how to handle missing values. This could be one of the following:\n\n - ``\"everything\"``: outputs NaNs whenever one of its contributing observations is missing\n - ``\"all.obs\"``: presence of missing observations will throw an error\n - ``\"complete.obs\"``: discards missing values along with all observations in their rows so that only\n complete observations are used\n :param bool na_rm: an alternative to ``use``: when this is True then default value for ``use`` is\n ``\"everything\"``; and if False then default ``use`` is ``\"complete.obs\"``. This parameter has no effect\n if ``use`` is given explicitly.\n\n :returns: An H2OFrame of the covariance matrix of the columns of this frame (if ``y`` is not given),\n or with the columns of ``y`` (if ``y`` is given). However when this frame and ``y`` are both single rows\n or single columns, then the variance is returned as a scalar."}
{"_id": "q_1070", "text": "Compute the correlation matrix of one or two H2OFrames.\n\n :param H2OFrame y: If this parameter is provided, then compute correlation between the columns of ``y``\n and the columns of the current frame. If this parameter is not given, then just compute the correlation\n matrix for the columns of the current frame.\n :param str use: A string indicating how to handle missing values. This could be one of the following:\n\n - ``\"everything\"``: outputs NaNs whenever one of its contributing observations is missing\n - ``\"all.obs\"``: presence of missing observations will throw an error\n - ``\"complete.obs\"``: discards missing values along with all observations in their rows so that only\n complete observations are used\n :param bool na_rm: an alternative to ``use``: when this is True then default value for ``use`` is\n ``\"everything\"``; and if False then default ``use`` is ``\"complete.obs\"``. This parameter has no effect\n if ``use`` is given explicitly.\n\n :returns: An H2OFrame of the correlation matrix of the columns of this frame (if ``y`` is not given),\n or with the columns of ``y`` (if ``y`` is given). However when this frame and ``y`` are both single rows\n or single columns, then the correlation is returned as a scalar."}
{"_id": "q_1071", "text": "Convert columns in the current frame to categoricals.\n\n :returns: new H2OFrame with columns of the \"enum\" type."}
{"_id": "q_1072", "text": "Split the strings in the target column on the given regular expression pattern.\n\n :param str pattern: The split pattern.\n :returns: H2OFrame containing columns of the split strings."}
{"_id": "q_1073", "text": "For each string in the frame, count the occurrences of the provided pattern. If countmatches is applied to\n a frame, all columns of the frame must be of type string; otherwise, the returned frame will contain errors.\n\n The pattern here is a plain string, not a regular expression. We will search for the occurrences of the\n pattern as a substring in each element of the frame. This function is applicable to frames containing only\n string or categorical columns.\n\n :param str pattern: The pattern to count matches on in each string. This can also be a list of strings,\n in which case all of them will be searched for.\n :returns: numeric H2OFrame with the same shape as the original, containing counts of matches of the\n pattern for each cell in the original frame."}
{"_id": "q_1074", "text": "For each string, return a new string that is a substring of the original string.\n\n If end_index is not specified, then the substring extends to the end of the original string. If the start_index\n is longer than the length of the string, or is greater than or equal to the end_index, an empty string is\n returned. Negative start_index is coerced to 0.\n\n :param int start_index: The index of the original string at which to start the substring, inclusive.\n :param int end_index: The index of the original string at which to end the substring, exclusive.\n :returns: An H2OFrame containing the specified substrings."}
{"_id": "q_1075", "text": "Return a copy of the column with leading characters removed.\n\n The set argument is a string specifying the set of characters to be removed.\n If omitted, the set argument defaults to removing whitespace.\n\n :param character set: The set of characters to lstrip from strings in column.\n :returns: a new H2OFrame with the same shape as the original frame and having all its values\n trimmed from the left (equivalent of Python's ``str.lstrip()``)."}
{"_id": "q_1076", "text": "Compute the counts of values appearing in a column, or co-occurence counts between two columns.\n\n :param H2OFrame data2: An optional single column to aggregate counts by.\n :param bool dense: If True (default) then use dense representation, which lists only non-zero counts,\n 1 combination per row. Set to False to expand counts across all combinations.\n\n :returns: H2OFrame of the counts at each combination of factor levels"}
{"_id": "q_1077", "text": "Compute a histogram over a numeric column.\n\n :param breaks: Can be one of ``\"sturges\"``, ``\"rice\"``, ``\"sqrt\"``, ``\"doane\"``, ``\"fd\"``, ``\"scott\"``;\n or a single number for the number of breaks; or a list containing the split points, e.g:\n ``[-50, 213.2123, 9324834]``. If breaks is \"fd\", the MAD is used over the IQR in computing bin width.\n :param bool plot: If True (default), then a plot will be generated using ``matplotlib``.\n\n :returns: If ``plot`` is False, return H2OFrame with these columns: breaks, counts, mids_true,\n mids, and density; otherwise this method draws a plot and returns nothing."}
{"_id": "q_1078", "text": "Substitute the first occurrence of pattern in a string with replacement.\n\n :param str pattern: A regular expression.\n :param str replacement: A replacement string.\n :param bool ignore_case: If True then pattern will match case-insensitively.\n :returns: an H2OFrame with all values matching ``pattern`` replaced with ``replacement``."}
{"_id": "q_1079", "text": "Searches for matches to argument `pattern` within each element\n of a string column.\n\n Default behavior is to return indices of the elements matching the pattern. Parameter\n `output_logical` can be used to return a logical vector indicating if the element matches\n the pattern (1) or not (0).\n\n :param str pattern: A character string containing a regular expression.\n :param bool ignore_case: If True, then case is ignored during matching.\n :param bool invert: If True, then identify elements that do not match the pattern.\n :param bool output_logical: If True, then return logical vector of indicators instead of list of matching positions\n :return: H2OFrame holding the matching positions or a logical list if `output_logical` is enabled."}
{"_id": "q_1080", "text": "Construct a column that can be used to perform a random stratified split.\n\n :param float test_frac: The fraction of rows that will belong to the \"test\".\n :param int seed: The seed for the random number generator.\n\n :returns: an H2OFrame having single categorical column with two levels: ``\"train\"`` and ``\"test\"``.\n\n :examples:\n >>> stratsplit = df[\"y\"].stratified_split(test_frac=0.3, seed=12349453)\n >>> train = df[stratsplit==\"train\"]\n >>> test = df[stratsplit==\"test\"]\n >>>\n >>> # check that the distributions among the initial frame, and the\n >>> # train/test frames match\n >>> df[\"y\"].table()[\"Count\"] / df[\"y\"].table()[\"Count\"].sum()\n >>> train[\"y\"].table()[\"Count\"] / train[\"y\"].table()[\"Count\"].sum()\n >>> test[\"y\"].table()[\"Count\"] / test[\"y\"].table()[\"Count\"].sum()"}
{"_id": "q_1081", "text": "Get the index of the max value in a column or row\n\n :param bool skipna: If True (default), then NAs are ignored during the search. Otherwise presence\n of NAs renders the entire result NA.\n :param int axis: Direction of finding the max index. If 0 (default), then the max index is searched columnwise, and the\n result is a frame with 1 row and number of columns as in the original frame. If 1, then the max index is searched\n rowwise and the result is a frame with 1 column, and number of rows equal to the number of rows in the original frame.\n :returns: either a list of max index values per-column or an H2OFrame containing max index values\n per-row from the original frame."}
{"_id": "q_1082", "text": "Parse the provided file, and return Code object."}
{"_id": "q_1083", "text": "Move the token by `drow` rows and `dcol` columns."}
{"_id": "q_1084", "text": "Convert the parsed representation back into the source code."}
{"_id": "q_1085", "text": "The standardized centers for the kmeans model."}
{"_id": "q_1086", "text": "Connect to an existing H2O server, remote or local.\n\n There are two ways to connect to a server: either pass a `server` parameter containing an instance of\n an H2OLocalServer, or specify `ip` and `port` of the server that you want to connect to.\n\n :param server: An H2OLocalServer instance to connect to (optional).\n :param url: Full URL of the server to connect to (can be used instead of `ip` + `port` + `https`).\n :param ip: The ip address (or host name) of the server where H2O is running.\n :param port: Port number that H2O service is listening to.\n :param https: Set to True to connect via https:// instead of http://.\n :param verify_ssl_certificates: When using https, setting this to False will disable SSL certificates verification.\n :param auth: Either a (username, password) pair for basic authentication, an instance of h2o.auth.SpnegoAuth\n or one of the requests.auth authenticator objects.\n :param proxy: Proxy server address.\n :param cookies: Cookie (or list of) to add to request\n :param verbose: Set to False to disable printing connection status messages.\n :param connection_conf: Connection configuration object encapsulating connection parameters.\n :returns: the new :class:`H2OConnection` object."}
{"_id": "q_1087", "text": "Perform a REST API request to a previously connected server.\n\n This function is mostly for internal purposes, but may occasionally be useful for direct access to\n the backend H2O server. It has same parameters as :meth:`H2OConnection.request <h2o.backend.H2OConnection.request>`."}
{"_id": "q_1088", "text": "Upload a dataset from the provided local path to the H2O cluster.\n\n Does a single-threaded push to H2O. Also see :meth:`import_file`.\n\n :param path: A path specifying the location of the data to upload.\n :param destination_frame: The unique hex key assigned to the imported file. If none is given, a key will\n be automatically generated.\n :param header: -1 means the first line is data, 0 means guess, 1 means first line is header.\n :param sep: The field separator character. Values on each line of the file are separated by\n this character. If not provided, the parser will automatically detect the separator.\n :param col_names: A list of column names for the file.\n :param col_types: A list of types or a dictionary of column names to types to specify whether columns\n should be forced to a certain type upon import parsing. If a list, the types for elements that are\n one will be guessed. The possible types a column may have are:\n\n - \"unknown\" - this will force the column to be parsed as all NA\n - \"uuid\" - the values in the column must be true UUID or will be parsed as NA\n - \"string\" - force the column to be parsed as a string\n - \"numeric\" - force the column to be parsed as numeric. H2O will handle the compression of the numeric\n data in the optimal manner.\n - \"enum\" - force the column to be parsed as a categorical column.\n - \"time\" - force the column to be parsed as a time column. H2O will attempt to parse the following\n list of date time formats: (date) \"yyyy-MM-dd\", \"yyyy MM dd\", \"dd-MMM-yy\", \"dd MMM yy\", (time)\n \"HH:mm:ss\", \"HH:mm:ss:SSS\", \"HH:mm:ss:SSSnnnnnn\", \"HH.mm.ss\" \"HH.mm.ss.SSS\", \"HH.mm.ss.SSSnnnnnn\".\n Times can also contain \"AM\" or \"PM\".\n :param na_strings: A list of strings, or a list of lists of strings (one list per column), or a dictionary\n of column names to strings which are to be interpreted as missing values.\n :param skipped_columns: an integer list of column indices to skip and not parse into the final frame from the import file.\n\n :returns: a new :class:`H2OFrame` instance.\n\n :examples:\n >>> frame = h2o.upload_file(\"/path/to/local/data\")"}
{"_id": "q_1089", "text": "Import a dataset that is already on the cluster.\n\n The path to the data must be a valid path for each node in the H2O cluster. If some node in the H2O cluster\n cannot see the file, then an exception will be thrown by the H2O cluster. Does a parallel/distributed\n multi-threaded pull of the data. The main difference between this method and :func:`upload_file` is that\n the latter works with local files, whereas this method imports remote files (i.e. files local to the server).\n If you running H2O server on your own maching, then both methods behave the same.\n\n :param path: path(s) specifying the location of the data to import or a path to a directory of files to import\n :param destination_frame: The unique hex key assigned to the imported file. If none is given, a key will be\n automatically generated.\n :param parse: If True, the file should be parsed after import. If False, then a list is returned containing the file path.\n :param header: -1 means the first line is data, 0 means guess, 1 means first line is header.\n :param sep: The field separator character. Values on each line of the file are separated by\n this character. If not provided, the parser will automatically detect the separator.\n :param col_names: A list of column names for the file.\n :param col_types: A list of types or a dictionary of column names to types to specify whether columns\n should be forced to a certain type upon import parsing. If a list, the types for elements that are\n one will be guessed. The possible types a column may have are:\n\n - \"unknown\" - this will force the column to be parsed as all NA\n - \"uuid\" - the values in the column must be true UUID or will be parsed as NA\n - \"string\" - force the column to be parsed as a string\n - \"numeric\" - force the column to be parsed as numeric. 
H2O will handle the compression of the numeric\n data in the optimal manner.\n - \"enum\" - force the column to be parsed as a categorical column.\n - \"time\" - force the column to be parsed as a time column. H2O will attempt to parse the following\n list of date time formats: (date) \"yyyy-MM-dd\", \"yyyy MM dd\", \"dd-MMM-yy\", \"dd MMM yy\", (time)\n \"HH:mm:ss\", \"HH:mm:ss:SSS\", \"HH:mm:ss:SSSnnnnnn\", \"HH.mm.ss\", \"HH.mm.ss.SSS\", \"HH.mm.ss.SSSnnnnnn\".\n Times can also contain \"AM\" or \"PM\".\n :param na_strings: A list of strings, or a list of lists of strings (one list per column), or a dictionary\n of column names to strings which are to be interpreted as missing values.\n :param pattern: Character string containing a regular expression to match file(s) in the folder if `path` is a\n directory.\n :param skipped_columns: an integer list of column indices to skip and not parse into the final frame from the import file.\n :param custom_non_data_line_markers: If a line in the imported file starts with any character in the given string, it will NOT be imported. An empty string means all lines are imported; None means that the default behaviour for the given format will be used\n\n :returns: a new :class:`H2OFrame` instance.\n\n :examples:\n >>> # Single file import\n >>> iris = import_file(\"h2o-3/smalldata/iris.csv\")\n >>> # Return all files in the folder iris/ matching the regex r\"iris_.*\\.csv\"\n >>> iris_pattern = h2o.import_file(path = \"h2o-3/smalldata/iris\",\n ... pattern = \"iris_.*\\.csv\")"}
{"_id": "q_1090", "text": "Import Hive table to H2OFrame in memory.\n\n Make sure to start H2O with Hive on classpath. Uses hive-site.xml on classpath to connect to Hive.\n\n :param database: Name of Hive database (default database will be used by default)\n :param table: name of Hive table to import\n :param partitions: a list of lists of strings - partition key column values of partitions you want to import.\n :param allow_multi_format: enable import of partitioned tables that use different storage formats. WARNING:\n this may fail with out-of-memory errors for tables with a large number of small partitions.\n\n :returns: an :class:`H2OFrame` containing data of the specified Hive table.\n\n :examples:\n >>> my_citibike_data = h2o.import_hive_table(\"default\", \"table\", [[\"2017\", \"01\"], [\"2017\", \"02\"]])"}
{"_id": "q_1091", "text": "Import the SQL table that is the result of the specified SQL query to H2OFrame in memory.\n\n Creates a temporary SQL table from the specified sql_query.\n Runs multiple SELECT SQL queries on the temporary table concurrently for parallel ingestion, then drops the table.\n Be sure to start the h2o.jar in the terminal with your downloaded JDBC driver in the classpath::\n\n java -cp <path_to_h2o_jar>:<path_to_jdbc_driver_jar> water.H2OApp\n\n Also see h2o.import_sql_table. Currently supported SQL databases are MySQL, PostgreSQL, MariaDB, Hive, Oracle \n and Microsoft SQL Server.\n\n :param connection_url: URL of the SQL database connection as specified by the Java Database Connectivity (JDBC)\n Driver. For example, \"jdbc:mysql://localhost:3306/menagerie?&useSSL=false\"\n :param select_query: SQL query starting with `SELECT` that returns rows from one or more database tables.\n :param username: username for SQL server\n :param password: password for SQL server\n :param optimize: DEPRECATED. Ignored - use fetch_mode instead. Optimize import of SQL table for faster imports.\n :param use_temp_table: whether a temporary table should be created from select_query\n :param temp_table_name: name of temporary table to be created from select_query\n :param fetch_mode: Set to DISTRIBUTED to enable distributed import. Set to SINGLE to force a sequential read by a single node\n from the database.\n\n :returns: an :class:`H2OFrame` containing data of the specified SQL query.\n\n :examples:\n >>> conn_url = \"jdbc:mysql://172.16.2.178:3306/ingestSQL?&useSSL=false\"\n >>> select_query = \"SELECT bikeid from citibike20k\"\n >>> username = \"root\"\n >>> password = \"abc123\"\n >>> my_citibike_data = h2o.import_sql_select(conn_url, select_query,\n ... username, password, fetch_mode)"}
{"_id": "q_1092", "text": "Parse dataset using the parse setup structure.\n\n :param setup: Result of ``h2o.parse_setup()``\n :param id: an id for the frame.\n :param first_line_is_header: -1, 0, 1 if the first line is to be used as the header\n\n :returns: an :class:`H2OFrame` object."}
{"_id": "q_1093", "text": "Load a model from the server.\n\n :param model_id: The model identification in H2O\n\n :returns: Model object, a subclass of H2OEstimator"}
{"_id": "q_1094", "text": "Download the POJO for this model to the directory specified by path; if path is \"\", then dump to screen.\n\n :param model: the model whose scoring POJO should be retrieved.\n :param path: an absolute path to the directory where POJO should be saved.\n :param get_jar: retrieve the h2o-genmodel.jar also (will be saved to the same folder ``path``).\n :param jar_name: Custom name of genmodel jar.\n :returns: location of the downloaded POJO file."}
{"_id": "q_1095", "text": "Download an H2O data set to a CSV file on the local disk.\n\n Warning: Files located on the H2O server may be very large! Make sure you have enough\n hard drive space to accommodate the entire file.\n\n :param data: an H2OFrame object to be downloaded.\n :param filename: name for the CSV file where the data should be saved to."}
{"_id": "q_1096", "text": "Export a given H2OFrame to a path on the machine this python session is currently connected to.\n\n :param frame: the Frame to save to disk.\n :param path: the path to the save point on disk.\n :param force: if True, overwrite any preexisting file with the same path\n :param parts: enables export to multiple 'part' files instead of just a single file.\n Convenient for large datasets that take too long to store in a single file.\n Use parts=-1 to instruct H2O to determine the optimal number of part files or\n specify your desired maximum number of part files. Path needs to be a directory\n when exporting to multiple files, and that directory must be empty.\n Default is ``parts = 1``, which is to export to a single file."}
{"_id": "q_1097", "text": "Convert an H2O data object into a python-specific object.\n\n WARNING! This will pull all data local!\n\n If Pandas is available (and use_pandas is True), then pandas will be used to parse the\n data frame. Otherwise, a list-of-lists populated by character data will be returned (so\n the types of data will all be str).\n\n :param data: an H2O data object.\n :param use_pandas: If True, try to use pandas for reading in the data.\n :param header: If True, return column names as first element in list\n\n :returns: List of lists (Rows x Columns)."}
{"_id": "q_1098", "text": "H2O built-in demo facility.\n\n :param funcname: A string that identifies the h2o python function to demonstrate.\n :param interactive: If True, the user will be prompted to continue the demonstration after every segment.\n :param echo: If True, the python commands that are executed will be displayed.\n :param test: If True, `h2o.init()` will not be called (used for pyunit testing).\n\n :example:\n >>> import h2o\n >>> h2o.demo(\"gbm\")"}
{"_id": "q_1099", "text": "Imports a data file within the 'h2o_data' folder."}
{"_id": "q_1100", "text": "Create Model Metrics from predicted and actual values in H2O.\n\n :param H2OFrame predicted: an H2OFrame containing predictions.\n :param H2OFrame actuals: an H2OFrame containing actual values.\n :param domain: list of response factors for classification.\n :param distribution: distribution for regression."}
{"_id": "q_1101", "text": "Check that the provided frame id is valid in Rapids language."}
{"_id": "q_1102", "text": "Convert a given number of bytes into a human-readable representation, i.e. add a prefix such as kb, Mb, Gb,\n etc. The `size` argument must be a non-negative integer.\n\n :param size: integer representing byte size of something\n :return: string representation of the size, in human-readable form"}
{"_id": "q_1103", "text": "Return a \"canonical\" version of slice ``s``.\n\n :param slice s: the original slice expression\n :param int total: total number of elements in the collection sliced by ``s``\n :return slice: a slice equivalent to ``s`` but not containing any negative indices or Nones."}
{"_id": "q_1104", "text": "MOJO scoring function to take a Pandas frame and use MOJO model as zip file to score.\n\n :param dataframe: Pandas frame to score.\n :param mojo_zip_path: Path to MOJO zip downloaded from H2O.\n :param genmodel_jar_path: Optional, path to genmodel jar file. If None (default) then the h2o-genmodel.jar in the same\n folder as the MOJO zip will be used.\n :param classpath: Optional, specifies custom user defined classpath which will be used when scoring. If None\n (default) then the default classpath for this MOJO model will be used.\n :param java_options: Optional, custom user defined options for Java. By default ``-Xmx4g`` is used.\n :param verbose: Optional, if True, then additional debug information will be printed. False by default.\n :return: Pandas frame with predictions"}
{"_id": "q_1105", "text": "MOJO scoring function to take a CSV file and use MOJO model as zip file to score.\n\n :param input_csv_path: Path to input CSV file.\n :param mojo_zip_path: Path to MOJO zip downloaded from H2O.\n :param output_csv_path: Optional, name of the output CSV file with computed predictions. If None (default), then\n predictions will be saved as prediction.csv in the same folder as the MOJO zip.\n :param genmodel_jar_path: Optional, path to genmodel jar file. If None (default) then the h2o-genmodel.jar in the same\n folder as the MOJO zip will be used.\n :param classpath: Optional, specifies custom user defined classpath which will be used when scoring. If None\n (default) then the default classpath for this MOJO model will be used.\n :param java_options: Optional, custom user defined options for Java. By default ``-Xmx4g -XX:ReservedCodeCacheSize=256m`` is used.\n :param verbose: Optional, if True, then additional debug information will be printed. False by default.\n :return: List of computed predictions"}
{"_id": "q_1106", "text": "The decorator to mark deprecated functions."}
{"_id": "q_1107", "text": "Wait until grid finishes computing."}
{"_id": "q_1108", "text": "Print a detailed summary of the explored models."}
{"_id": "q_1109", "text": "Print models sorted by metric."}
{"_id": "q_1110", "text": "Derive and return the model parameters used to train the particular grid search model.\n\n :param str id: The model id of the model with hyperparameters of interest.\n :param bool display: Flag to indicate whether to display the hyperparameter names.\n\n :returns: A dict of model parameters derived from the hyper-parameters used to train this particular model."}
{"_id": "q_1111", "text": "Retrieve an H2OGridSearch instance.\n\n Optionally specify a metric by which to sort models and a sort order.\n Note that if neither cross-validation nor a validation frame is used in the grid search, then the\n training metrics will display in the \"get grid\" output. If a validation frame is passed to the grid, and\n ``nfolds = 0``, then the validation metrics will display. However, if ``nfolds`` > 1, then cross-validation\n metrics will display even if a validation frame is provided.\n\n :param str sort_by: A metric by which to sort the models in the grid space. Choices are: ``\"logloss\"``,\n ``\"residual_deviance\"``, ``\"mse\"``, ``\"auc\"``, ``\"r2\"``, ``\"accuracy\"``, ``\"precision\"``, ``\"recall\"``,\n ``\"f1\"``, etc.\n :param bool decreasing: Sort the models in decreasing order of metric if true, otherwise sort in increasing\n order (default).\n\n :returns: A new H2OGridSearch instance optionally sorted on the specified metric."}
{"_id": "q_1112", "text": "Return the importance of components associated with a PCA model.\n\n use_pandas: ``bool`` (default: ``False``)."}
{"_id": "q_1113", "text": "Convert archetypes of the model into original feature space.\n\n :param H2OFrame test_data: The dataset upon which the model was trained.\n :param bool reverse_transform: Whether the transformation of the training data during model-building\n should be reversed on the projected archetypes.\n\n :returns: model archetypes projected back into the original training data's feature space."}
{"_id": "q_1114", "text": "Convert names with underscores into camelcase.\n\n For example:\n \"num_rows\" => \"numRows\"\n \"very_long_json_name\" => \"veryLongJsonName\"\n \"build_GBM_model\" => \"buildGbmModel\"\n \"KEY\" => \"key\"\n \"middle___underscores\" => \"middleUnderscores\"\n \"_exclude_fields\" => \"_excludeFields\" (retain initial/trailing underscores)\n \"__http_status__\" => \"__httpStatus__\"\n\n :param name: name to be converted"}
{"_id": "q_1115", "text": "Dedent text to the specified indentation level.\n\n :param ind: common indentation level for the resulting text (number of spaces to prepend to every line)\n :param text: text that should be transformed.\n :return: ``text`` with all common indentation removed, and then the specified amount of indentation added."}
{"_id": "q_1116", "text": "This function will extract the various operation times for GLRM model-building iterations.\n\n :param javaLogText:\n :return:"}
{"_id": "q_1117", "text": "Main program. Take user input, parse it, and call other functions to execute the commands,\n extract the run summary, and store the run result in a JSON file.\n\n @return: none"}
{"_id": "q_1118", "text": "Close an existing connection; once closed it cannot be used again.\n\n Strictly speaking it is not necessary to close all connections that you opened -- we have several mechanisms\n in place that will do so automatically (__del__(), __exit__() and atexit() handlers), however there is also\n no good reason to make this method private."}
{"_id": "q_1119", "text": "Return the session id of the current connection.\n\n The session id is issued (through an API request) the first time it is requested, but no sooner. This is\n because generating a session id puts it into the DKV on the server, which effectively locks the cluster. Once\n issued, the session id will stay the same until the connection is closed."}
{"_id": "q_1120", "text": "Prepare `filename` to be sent to the server.\n\n The \"preparation\" consists of creating a data structure suitable\n for passing to requests.request()."}
{"_id": "q_1121", "text": "Log response from an API request."}
{"_id": "q_1122", "text": "Given a response object, prepare it to be handed over to the external caller.\n\n Preparation steps include:\n * detect if the response has error status, and convert it to an appropriate exception;\n * detect Content-Type, and based on that either parse the response as JSON or return as plain text."}
{"_id": "q_1123", "text": "Helper function to print connection status messages when in verbose mode."}
{"_id": "q_1124", "text": "Download the leader model in AutoML in MOJO format.\n\n :param path: the path where MOJO file should be saved.\n :param get_genmodel_jar: if True, then also download h2o-genmodel.jar and store it in folder ``path``.\n :param genmodel_name: Custom name of genmodel jar\n :returns: name of the MOJO file written."}
{"_id": "q_1125", "text": "Fit this object by computing the means and standard deviations used by the transform method.\n\n :param X: An H2OFrame; may contain NAs and/or categoricals.\n :param y: None (Ignored)\n :param params: Ignored\n :returns: This H2OScaler instance"}
{"_id": "q_1126", "text": "Scale an H2OFrame with the fitted means and standard deviations.\n\n :param X: An H2OFrame; may contain NAs and/or categoricals.\n :param y: None (Ignored)\n :param params: (Ignored)\n :returns: A scaled H2OFrame."}
{"_id": "q_1127", "text": "Remove extra characters before the actual string we are\n looking for. The Jenkins console output is encoded using UTF-8. However, the stupid\n redirect function can only encode using ASCII. I have googled for half a day with no\n results on how to resolve the issue. Hence, we are just going to manually\n get rid of the junk.\n\n Parameters\n ----------\n\n string_content : str\n contains a line read in from jenkins console\n\n :return: str: contains the content of the line after the string '[0m'"}
{"_id": "q_1128", "text": "Find the slave machine that a Jenkins job was executed on. It will save this\n information in g_failed_test_info_dict. In addition, it will\n delete this particular function handle off the temp_func_list as we do not need\n to perform this action again.\n\n Parameters\n ----------\n\n each_line : str\n contains a line read in from jenkins console\n temp_func_list : list of Python function handles\n contains a list of functions that we want to invoke to extract information from\n the Jenkins console text.\n\n :return: bool to determine if text mining should continue on the jenkins console text"}
{"_id": "q_1129", "text": "Find if a Jenkins job has taken too long to finish and was killed. It will save this\n information in g_failed_test_info_dict.\n\n Parameters\n ----------\n\n each_line : str\n contains a line read in from jenkins console\n temp_func_list : list of Python function handles\n contains a list of functions that we want to invoke to extract information from\n the Jenkins console text.\n\n :return: bool to determine if text mining should continue on the jenkins console text"}
{"_id": "q_1130", "text": "Find if a Jenkins job has failed to build. It will save this\n information in g_failed_test_info_dict. In addition, it will delete this particular\n function handle off the temp_func_list as we do not need to perform this action again.\n\n Parameters\n ----------\n\n each_line : str\n contains a line read in from jenkins console\n temp_func_list : list of Python function handles\n contains a list of functions that we want to invoke to extract information from\n the Jenkins console text.\n\n :return: bool to determine if text mining should continue on the jenkins console text"}
{"_id": "q_1131", "text": "Find the build id of a jenkins job. It will save this\n information in g_failed_test_info_dict. In addition, it will delete this particular\n function handle off the temp_func_list as we do not need to perform this action again.\n\n Parameters\n ----------\n\n each_line : str\n contains a line read in from jenkins console\n temp_func_list : list of Python function handles\n contains a list of functions that we want to invoke to extract information from\n the Jenkins console text.\n\n :return: bool to determine if text mining should continue on the jenkins console text"}
{"_id": "q_1132", "text": "Save the log scraping results into logs denoted by g_output_filename_failed_tests and\n g_output_filename_passed_tests.\n\n :return: none"}
{"_id": "q_1133", "text": "Concatenate all log files into a summary text file to be sent to users\n at the end of a daily log scraping.\n\n :return: none"}
{"_id": "q_1134", "text": "Write one log file into the summary text file.\n\n Parameters\n ----------\n fhandle : Python file handle\n file handle to the summary text file\n file2read : Python file handle\n file handle to log file where we want to add its content to the summary text file.\n\n :return: none"}
{"_id": "q_1135", "text": "Loop through all java messages that are not associated with a unit test and\n write them into a log file.\n\n Parameters\n ----------\n key : str\n 9.general_bad_java_messages\n val : list of list of str\n contains the bad java messages and the message types.\n\n\n :return: none"}
{"_id": "q_1136", "text": "Load in pickle file that contains dict structure with bad java messages to ignore per unit test\n or for all cases. The ignored bad java info is stored in g_ok_java_messages dict.\n\n :return:"}
{"_id": "q_1137", "text": "Return enum constant `s` converted to a canonical snake-case."}
{"_id": "q_1138", "text": "Find the percentile of a list of values.\n\n @parameter N - is a list of values. Note N MUST BE already sorted.\n @parameter percent - a float value from 0.0 to 1.0.\n @parameter key - optional key function to compute value from each element of N.\n\n @return - the percentile of the values"}
{"_id": "q_1139", "text": "Dictionary of the default parameters of the model."}
{"_id": "q_1140", "text": "Retrieve Model Score History.\n\n :returns: The score history as an H2OTwoDimTable or a Pandas DataFrame."}
{"_id": "q_1141", "text": "Print the innards of a model, without regard to type."}
{"_id": "q_1142", "text": "Retrieve the residual degrees of freedom if this model has the attribute, or None otherwise.\n\n :param bool train: Get the residual dof for the training set. If both train and valid are False, then train\n is selected by default.\n :param bool valid: Get the residual dof for the validation set. If both train and valid are True, then train\n is selected by default.\n\n :returns: Return the residual dof, or None if it is not present."}
{"_id": "q_1143", "text": "Return the coefficients which can be applied to the non-standardized data.\n\n Note: standardize = True by default, if set to False then coef() return the coefficients which are fit directly."}
{"_id": "q_1144", "text": "Download the POJO for this model to the directory specified by path.\n\n If path is an empty string, then dump the output to screen.\n\n :param path: An absolute path to the directory where POJO should be saved.\n :param get_genmodel_jar: if True, then also download h2o-genmodel.jar and store it in folder ``path``.\n :param genmodel_name: Custom name of genmodel jar\n :returns: name of the POJO file written."}
{"_id": "q_1145", "text": "Check that y_actual and y_predicted have the same length.\n\n :param H2OFrame y_actual:\n :param H2OFrame y_predicted:\n\n :returns: None"}
{"_id": "q_1146", "text": "Deep Learning model demo."}
{"_id": "q_1147", "text": "GLM model demo."}
{"_id": "q_1148", "text": "Wait for a key press on the console and return it.\n\n Borrowed from http://stackoverflow.com/questions/983354/how-do-i-make-python-to-wait-for-a-pressed-key"}
{"_id": "q_1149", "text": "Convert to a python 'data frame'."}
{"_id": "q_1150", "text": "Print the contents of this table."}
{"_id": "q_1151", "text": "Return the location of an h2o.jar executable.\n\n :param path0: Explicitly given h2o.jar path. If provided, then we will simply check whether the file is there,\n otherwise we will search for an executable in locations returned by ._jar_paths().\n\n :raises H2OStartupError: if no h2o.jar executable can be found."}
{"_id": "q_1152", "text": "Produce potential paths for an h2o.jar executable."}
{"_id": "q_1153", "text": "Convert uri to absolute filepath\n\n Parameters\n ----------\n uri : string\n URI of python module to return path for\n\n Returns\n -------\n path : None or string\n Returns None if there is no valid path for this URI\n Otherwise returns absolute file system path for URI\n\n Examples\n --------\n >>> docwriter = ApiDocWriter('sphinx')\n >>> import sphinx\n >>> modpath = sphinx.__path__[0]\n >>> res = docwriter._uri2path('sphinx.builder')\n >>> res == os.path.join(modpath, 'builder.py')\n True\n >>> res = docwriter._uri2path('sphinx')\n >>> res == os.path.join(modpath, '__init__.py')\n True\n >>> docwriter._uri2path('sphinx.does_not_exist')"}
{"_id": "q_1154", "text": "Parse lines of text for functions and classes"}
{"_id": "q_1155", "text": "Generate API reST files.\n\n Parameters\n ----------\n outdir : string\n Directory name in which to store files\n We create automatic filenames for each module\n \n Returns\n -------\n None\n\n Notes\n -----\n Sets self.written_modules to list of written modules"}
{"_id": "q_1156", "text": "Update the g_ok_java_messages dict structure by\n 1. add the new java ignored messages stored in message_dict if action == 1\n 2. remove the java ignored messages stored in message_dict if action == 2.\n\n Parameters\n ----------\n\n message_dict : Python dict\n key: unit test name or \"general\"\n value: list of java messages that are to be ignored if they are found when running the test stored as the key. If\n the key is \"general\", the list of java messages are to be ignored when running all tests.\n action : int\n if 1: add java ignored messages stored in message_dict to g_ok_java_messages dict;\n if 2: remove java ignored messages stored in message_dict from g_ok_java_messages dict.\n\n :return: none"}
{"_id": "q_1157", "text": "Save the ignored java message dict stored in g_ok_java_messages into a pickle file for future use.\n\n :return: none"}
{"_id": "q_1158", "text": "Write the java ignored messages in g_ok_java_messages into a text file for humans to read.\n\n :return: none"}
{"_id": "q_1159", "text": "Parse user inputs and set the corresponding global variables to perform the\n necessary tasks.\n\n Parameters\n ----------\n\n argv : string array\n contains flags and input options from users\n\n :return:"}
{"_id": "q_1160", "text": "Find all python files in the given directory and all subfolders."}
{"_id": "q_1161", "text": "Search the file for any magic incantations.\n\n :param filename: file to search\n :returns: a tuple containing the spell and then maybe some extra words (or None if no magic present)"}
{"_id": "q_1162", "text": "Transform H2OFrame using a MOJO Pipeline.\n\n :param data: Frame to be transformed.\n :param allow_timestamps: Allows datetime columns to be used directly with MOJO pipelines. It is recommended\n to parse your datetime columns as Strings when using pipelines because pipelines can interpret certain datetime\n formats in a different way. If your H2OFrame is parsed from a binary file format (eg. Parquet) instead of CSV\n it is safe to turn this option on and use datetime columns directly.\n\n :returns: A new H2OFrame."}
{"_id": "q_1163", "text": "This function will print out the intermittents onto the screen for casual viewing. It will also print out\n where the giant summary dictionary is going to be stored.\n\n :return: None"}
{"_id": "q_1164", "text": "Produce the desired metric plot.\n\n :param type: the type of metric plot (currently, only ROC supported).\n :param server: if True, generate plot inline using matplotlib's \"Agg\" backend.\n :returns: None"}
{"_id": "q_1165", "text": "Get the confusion matrix for the specified metric\n\n :param metrics: A string (or list of strings) among metrics listed in :const:`max_metrics`. Defaults to 'f1'.\n :param thresholds: A value (or list of values) between 0 and 1.\n :returns: a list of ConfusionMatrix objects (if there are more than one to return), or a single ConfusionMatrix\n (if there is only one)."}
{"_id": "q_1166", "text": "Returns True if a deep water model can be built, or False otherwise."}
{"_id": "q_1167", "text": "This method will remove data from the summary text file and the dictionary file for tests that occur before\n the number of months specified by monthToKeep.\n\n :param monthToKeep:\n :return:"}
{"_id": "q_1168", "text": "Set site domain and name."}
{"_id": "q_1169", "text": "Append a header, preserving any duplicate entries."}
{"_id": "q_1170", "text": "Given a function, parse the docstring as YAML and return a dictionary of info."}
{"_id": "q_1171", "text": "Given `directory` and `packages` arguments, return a list of all the\n directories that should be used for serving static files from."}
{"_id": "q_1172", "text": "Returns an HTTP response, given the incoming path, method and request headers."}
{"_id": "q_1173", "text": "Send ASGI websocket messages, ensuring valid state transitions."}
{"_id": "q_1174", "text": "Adds the default_data to data and dumps it to a JSON file."}
{"_id": "q_1175", "text": "Comments on user_id's last medias"}
{"_id": "q_1176", "text": "Returns login and password stored in `secret.txt`."}
{"_id": "q_1177", "text": "Likes user_id's last medias"}
{"_id": "q_1178", "text": "Reads a list from file. One line - one item.\n Returns the list of file items."}
{"_id": "q_1179", "text": "Finds the max and median long and short position concentrations\n in each time period specified by the index of positions.\n\n Parameters\n ----------\n positions : pd.DataFrame\n The positions that the strategy takes over time.\n\n Returns\n -------\n pd.DataFrame\n Columns are max long, max short, median long, and median short\n position concentrations. Rows are time periods."}
{"_id": "q_1180", "text": "Determines the long and short allocations in a portfolio.\n\n Parameters\n ----------\n positions : pd.DataFrame\n The positions that the strategy takes over time.\n\n Returns\n -------\n df_long_short : pd.DataFrame\n Long and short allocations as a decimal\n percentage of the total net liquidation"}
{"_id": "q_1181", "text": "Returns style factor exposure of an algorithm's positions\n\n Parameters\n ----------\n positions : pd.DataFrame\n Daily equity positions of algorithm, in dollars.\n - See full explanation in create_risk_tear_sheet\n\n risk_factor : pd.DataFrame\n Daily risk factor per asset.\n - DataFrame with dates as index and equities as columns\n - Example:\n Equity(24 Equity(62\n [AAPL]) [ABT])\n 2017-04-03\t -0.51284 1.39173\n 2017-04-04\t -0.73381 0.98149\n 2017-04-05\t -0.90132 1.13981"}
{"_id": "q_1182", "text": "Plots DataFrame output of compute_style_factor_exposures as a line graph\n\n Parameters\n ----------\n tot_style_factor_exposure : pd.Series\n Daily style factor exposures (output of compute_style_factor_exposures)\n - Time series with decimal style factor exposures\n - Example:\n 2017-04-24 0.037820\n 2017-04-25 0.016413\n 2017-04-26 -0.021472\n 2017-04-27 -0.024859\n\n factor_name : string\n Name of style factor, for use in graph title\n - Defaults to tot_style_factor_exposure.name"}
{"_id": "q_1183", "text": "Plots outputs of compute_sector_exposures as area charts\n\n Parameters\n ----------\n long_exposures, short_exposures : arrays\n Arrays of long and short sector exposures (output of\n compute_sector_exposures).\n\n sector_dict : dict or OrderedDict\n Dictionary of all sectors\n - See full description in compute_sector_exposures"}
{"_id": "q_1184", "text": "Plots output of compute_sector_exposures as line graphs\n\n Parameters\n ----------\n net_exposures : arrays\n Arrays of net sector exposures (output of compute_sector_exposures).\n\n sector_dict : dict or OrderedDict\n Dictionary of all sectors\n - See full description in compute_sector_exposures"}
{"_id": "q_1185", "text": "Generate a number of tear sheets that are useful\n for analyzing a strategy's performance.\n\n - Fetches benchmarks if needed.\n - Creates tear sheets for returns, and significant events.\n If possible, also creates tear sheets for position analysis,\n transaction analysis, and Bayesian analysis.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - Time series with decimal returns.\n - Example:\n 2015-07-16 -0.012143\n 2015-07-17 0.045350\n 2015-07-20 0.030957\n 2015-07-21 0.004902\n positions : pd.DataFrame, optional\n Daily net position values.\n - Time series of dollar amount invested in each position and cash.\n - Days where stocks are not held can be represented by 0 or NaN.\n - Non-working capital is labelled 'cash'\n - Example:\n index 'AAPL' 'MSFT' cash\n 2004-01-09 13939.3800 -14012.9930 711.5585\n 2004-01-12 14492.6300 -14624.8700 27.1821\n 2004-01-13 -13853.2800 13653.6400 -43.6375\n transactions : pd.DataFrame, optional\n Executed trade volumes and fill prices.\n - One row per trade.\n - Trades on different names that occur at the\n same time will have identical indices.\n - Example:\n index amount price symbol\n 2004-01-09 12:18:01 483 324.12 'AAPL'\n 2004-01-09 12:18:01 122 83.10 'MSFT'\n 2004-01-13 14:12:23 -75 340.43 'AAPL'\n market_data : pd.Panel, optional\n Panel with items axis of 'price' and 'volume' DataFrames.\n The major and minor axes should match those of\n the passed positions DataFrame (same dates and symbols).\n slippage : int/float, optional\n Basis points of slippage to apply to returns before generating\n tearsheet stats and plots.\n If a value is provided, slippage parameter sweep\n plots will be generated from the unadjusted returns.\n Transactions and positions must also be passed.\n - See txn.adjust_returns_for_slippage for more details.\n live_start_date : datetime, optional\n The point in time when the strategy began live trading,\n after its 
backtest period. This datetime should be normalized.\n hide_positions : bool, optional\n If True, will not output any symbol names.\n bayesian: boolean, optional\n If True, causes the generation of a Bayesian tear sheet.\n round_trips: boolean, optional\n If True, causes the generation of a round trip tear sheet.\n sector_mappings : dict or pd.Series, optional\n Security identifier to sector mapping.\n Security ids as keys, sectors as values.\n estimate_intraday: boolean or str, optional\n Instead of using the end-of-day positions, use the point in the day\n where we have the most $ invested. This will adjust positions to\n better approximate and represent how an intraday strategy behaves.\n By default, this is 'infer', and an attempt will be made to detect\n an intraday strategy. Specifying this value will prevent detection.\n cone_std : float, or tuple, optional\n If float, The standard deviation to use for the cone plots.\n If tuple, Tuple of standard deviation values to use for the cone plots\n - The cone is a normal distribution with this standard deviation\n centered around a linear regression.\n bootstrap : boolean (optional)\n Whether to perform bootstrap analysis for the performance\n metrics. 
Takes a few minutes longer.\n turnover_denom : str\n Either AGB or portfolio_value, default AGB.\n - See full explanation in txn.get_turnover.\n factor_returns : pd.Dataframe, optional\n Returns by factor, with date as index and factors as columns\n factor_loadings : pd.Dataframe, optional\n Factor loadings for all days in the date range, with date and\n ticker as index, and factors as columns.\n pos_in_dollars : boolean, optional\n indicates whether positions is in dollars\n header_rows : dict or OrderedDict, optional\n Extra rows to display at the top of the perf stats table.\n set_context : boolean, optional\n If True, set default plotting style context.\n - See plotting.context().\n factor_partitions : dict, optional\n dict specifying how factors should be separated in perf attrib\n factor returns and risk exposures plots\n - See create_perf_attrib_tear_sheet()."}
{"_id": "q_1186", "text": "Generate a number of plots for analyzing a\n strategy's positions and holdings.\n\n - Plots: gross leverage, exposures, top positions, and holdings.\n - Will also print the top positions held.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in create_full_tear_sheet.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in create_full_tear_sheet.\n show_and_plot_top_pos : int, optional\n By default, this is 2, and both prints and plots the\n top 10 positions.\n If this is 0, it will only plot; if 1, it will only print.\n hide_positions : bool, optional\n If True, will not output any symbol names.\n Overrides show_and_plot_top_pos to 0 to suppress text output.\n return_fig : boolean, optional\n If True, returns the figure that was plotted on.\n sector_mappings : dict or pd.Series, optional\n Security identifier to sector mapping.\n Security ids as keys, sectors as values.\n transactions : pd.DataFrame, optional\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n estimate_intraday: boolean or str, optional\n Approximate returns for intraday strategies.\n See description in create_full_tear_sheet."}
{"_id": "q_1187", "text": "Generate plots and tables for analyzing a strategy's performance.\n\n Parameters\n ----------\n returns : pd.Series\n Returns for each day in the date range.\n\n positions: pd.DataFrame\n Daily holdings (in dollars or percentages), indexed by date.\n Will be converted to percentages if positions are in dollars.\n Short positions show up as cash in the 'cash' column.\n\n factor_returns : pd.DataFrame\n Returns by factor, with date as index and factors as columns\n\n factor_loadings : pd.DataFrame\n Factor loadings for all days in the date range, with date\n and ticker as index, and factors as columns.\n\n transactions : pd.DataFrame, optional\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n - Default is None.\n\n pos_in_dollars : boolean, optional\n Flag indicating whether `positions` are in dollars or percentages\n If True, positions are in dollars.\n\n return_fig : boolean, optional\n If True, returns the figure that was plotted on.\n\n factor_partitions : dict\n dict specifying how factors should be separated in factor returns\n and risk exposures plots\n - Example:\n {'style': ['momentum', 'size', 'value', ...],\n 'sector': ['technology', 'materials', ... ]}"}
{"_id": "q_1188", "text": "Sums the absolute value of shares traded in each name on each day.\n Adds columns containing the closing price and total daily volume for\n each day-ticker combination.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet\n market_data : pd.Panel\n Contains \"volume\" and \"price\" DataFrames for the tickers\n in the passed positions DataFrames\n\n Returns\n -------\n txn_daily : pd.DataFrame\n Daily totals for transacted shares in each traded name.\n price and volume columns for close price and daily volume for\n the corresponding ticker, respectively."}
{"_id": "q_1189", "text": "For each traded name, find the daily transaction total that consumed\n the greatest proportion of available daily bar volume.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n market_data : pd.Panel\n Panel with items axis of 'price' and 'volume' DataFrames.\n The major and minor axes should match those of\n the passed positions DataFrame (same dates and symbols).\n last_n_days : integer\n Compute for only the last n days of the passed backtest data."}
{"_id": "q_1190", "text": "Maps a single transaction row to a dictionary.\n\n Parameters\n ----------\n txn : pd.DataFrame\n A single transaction object to convert to a dictionary.\n\n Returns\n -------\n dict\n Mapped transaction."}
{"_id": "q_1191", "text": "Extract daily transaction data from set of transaction objects.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Time series containing one row per symbol (and potentially\n duplicate datetime indices) and columns for amount and\n price.\n\n Returns\n -------\n pd.DataFrame\n Daily transaction volume and number of shares.\n - See full explanation in tears.create_full_tear_sheet."}
{"_id": "q_1192", "text": "Apply a slippage penalty for every dollar traded.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in create_full_tear_sheet.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n slippage_bps: int/float\n Basis points of slippage to apply.\n\n Returns\n -------\n pd.Series\n Time series of daily returns, adjusted for slippage."}
{"_id": "q_1193", "text": "Merge transactions of the same direction separated by less than\n max_delta time duration.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed round_trips. One row per trade.\n - See full explanation in tears.create_full_tear_sheet\n\n max_delta : pandas.Timedelta (optional)\n Merge transactions in the same direction separated by less\n than max_delta time duration.\n\n\n Returns\n -------\n transactions : pd.DataFrame"}
{"_id": "q_1194", "text": "Group transactions into \"round trips\". First, transactions are\n grouped by day and directionality. Then, long and short\n transactions are matched to create round-trip round_trips for which\n PnL, duration and returns are computed. Crossings where a position\n changes from long to short and vice-versa are handled correctly.\n\n Under the hood, we reconstruct the individual shares in a\n portfolio over time and match round_trips in a FIFO-order.\n\n For example, the following transactions would constitute one round trip:\n index amount price symbol\n 2004-01-09 12:18:01 10 50 'AAPL'\n 2004-01-09 15:12:53 10 100 'AAPL'\n 2004-01-13 14:41:23 -10 100 'AAPL'\n 2004-01-13 15:23:34 -10 200 'AAPL'\n\n First, the first two and last two round_trips will be merged into two\n single transactions (computing the price via vwap). Then, during\n the portfolio reconstruction, the two resulting transactions will\n be merged and result in 1 round-trip trade with a PnL of\n (150 * 20) - (75 * 20) = 1500.\n\n Note that round trips do not have to close out positions\n completely. For example, we could have removed the last\n transaction in the example above and still generated a round-trip\n over 10 shares with 10 shares left in the portfolio to be matched\n with a later transaction.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed round_trips. One row per trade.\n - See full explanation in tears.create_full_tear_sheet\n\n portfolio_value : pd.Series (optional)\n Portfolio value (all net assets including cash) over time.\n Note that portfolio_value needs to be beginning of day, so either\n use .shift() or positions.sum(axis='columns') / (1+returns).\n\n Returns\n -------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip. The returns column\n contains returns with respect to the portfolio value while\n rt_returns are the returns with respect to the invested capital\n in that particular round-trip."}
{"_id": "q_1195", "text": "Generate various round-trip statistics.\n\n Parameters\n ----------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip trade.\n - See full explanation in round_trips.extract_round_trips\n\n Returns\n -------\n stats : dict\n A dictionary where each value is a pandas DataFrame containing\n various round-trip statistics.\n\n See also\n --------\n round_trips.print_round_trip_stats"}
{"_id": "q_1196", "text": "Print various round-trip statistics. Tries to pretty-print tables\n with HTML output if run inside IPython NB.\n\n Parameters\n ----------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip trade.\n - See full explanation in round_trips.extract_round_trips\n\n See also\n --------\n round_trips.gen_round_trip_stats"}
{"_id": "q_1197", "text": "Attributes the performance of a returns stream to a set of risk factors.\n\n Preprocesses inputs, and then calls empyrical.perf_attrib. See\n empyrical.perf_attrib for more info.\n\n Performance attribution determines how much each risk factor, e.g.,\n momentum, the technology sector, etc., contributed to total returns, as\n well as the daily exposure to each of the risk factors. The returns that\n can be attributed to one of the given risk factors are the\n `common_returns`, and the returns that _cannot_ be attributed to a risk\n factor are the `specific_returns`, or the alpha. The common_returns and\n specific_returns summed together will always equal the total returns.\n\n Parameters\n ----------\n returns : pd.Series\n Returns for each day in the date range.\n - Example:\n 2017-01-01 -0.017098\n 2017-01-02 0.002683\n 2017-01-03 -0.008669\n\n positions: pd.DataFrame\n Daily holdings (in dollars or percentages), indexed by date.\n Will be converted to percentages if positions are in dollars.\n Short positions show up as cash in the 'cash' column.\n - Examples:\n AAPL TLT XOM cash\n 2017-01-01 34 58 10 0\n 2017-01-02 22 77 18 0\n 2017-01-03 -15 27 30 15\n\n AAPL TLT XOM cash\n 2017-01-01 0.333333 0.568627 0.098039 0.0\n 2017-01-02 0.188034 0.658120 0.153846 0.0\n 2017-01-03 0.208333 0.375000 0.416667 0.0\n\n factor_returns : pd.DataFrame\n Returns by factor, with date as index and factors as columns\n - Example:\n momentum reversal\n 2017-01-01 0.002779 -0.005453\n 2017-01-02 0.001096 0.010290\n\n factor_loadings : pd.DataFrame\n Factor loadings for all days in the date range, with date and ticker as\n index, and factors as columns.\n - Example:\n momentum reversal\n dt ticker\n 2017-01-01 AAPL -1.592914 0.852830\n TLT 0.184864 0.895534\n XOM 0.993160 1.149353\n 2017-01-02 AAPL -0.140009 -0.524952\n TLT -1.066978 0.185435\n XOM -1.798401 0.761549\n\n\n transactions : pd.DataFrame, optional\n Executed trade volumes and fill prices. Used to check the turnover of\n the algorithm. Default is None, in which case the turnover check is\n skipped.\n\n - One row per trade.\n - Trades on different names that occur at the\n same time will have identical indices.\n - Example:\n index amount price symbol\n 2004-01-09 12:18:01 483 324.12 'AAPL'\n 2004-01-09 12:18:01 122 83.10 'MSFT'\n 2004-01-13 14:12:23 -75 340.43 'AAPL'\n\n pos_in_dollars : bool\n Flag indicating whether `positions` are in dollars or percentages\n If True, positions are in dollars.\n\n Returns\n -------\n tuple of (risk_exposures_portfolio, perf_attribution)\n\n risk_exposures_portfolio : pd.DataFrame\n df indexed by datetime, with factors as columns\n - Example:\n momentum reversal\n dt\n 2017-01-01 -0.238655 0.077123\n 2017-01-02 0.821872 1.520515\n\n perf_attribution : pd.DataFrame\n df with factors, common returns, and specific returns as columns,\n and datetimes as index\n - Example:\n momentum reversal common_returns specific_returns\n dt\n 2017-01-01 0.249087 0.935925 1.185012 1.185012\n 2017-01-02 -0.003194 -0.400786 -0.403980 -0.403980"}
{"_id": "q_1198", "text": "Calls `perf_attrib` using inputs, and displays outputs using\n `utils.print_table`."}
{"_id": "q_1199", "text": "Plot total, specific, and common returns.\n\n Parameters\n ----------\n perf_attrib_data : pd.DataFrame\n df with factors, common returns, and specific returns as columns,\n and datetimes as index. Assumes the `total_returns` column is NOT\n cost adjusted.\n - Example:\n momentum reversal common_returns specific_returns\n dt\n 2017-01-01 0.249087 0.935925 1.185012 1.185012\n 2017-01-02 -0.003194 -0.400786 -0.403980 -0.403980\n\n cost : pd.Series, optional\n if present, gets subtracted from `perf_attrib_data['total_returns']`,\n and gets plotted separately\n\n ax : matplotlib.axes.Axes\n axes on which plots are made. if None, current axes will be used\n\n Returns\n -------\n ax : matplotlib.axes.Axes"}
{"_id": "q_1200", "text": "Plot each factor's contribution to performance.\n\n Parameters\n ----------\n perf_attrib_data : pd.DataFrame\n df with factors, common returns, and specific returns as columns,\n and datetimes as index\n - Example:\n momentum reversal common_returns specific_returns\n dt\n 2017-01-01 0.249087 0.935925 1.185012 1.185012\n 2017-01-02 -0.003194 -0.400786 -0.403980 -0.403980\n\n ax : matplotlib.axes.Axes\n axes on which plots are made. if None, current axes will be used\n\n title : str, optional\n title of plot\n\n Returns\n -------\n ax : matplotlib.axes.Axes"}
{"_id": "q_1201", "text": "Convert positions to percentages if necessary, and change them\n to long format.\n\n Parameters\n ----------\n positions: pd.DataFrame\n Daily holdings (in dollars or percentages), indexed by date.\n Will be converted to percentages if positions are in dollars.\n Short positions show up as cash in the 'cash' column.\n\n pos_in_dollars : bool\n Flag indicating whether `positions` are in dollars or percentages\n If True, positions are in dollars."}
{"_id": "q_1202", "text": "Compute cumulative returns, less costs."}
{"_id": "q_1203", "text": "If zipline asset objects are used, we want to print them out prettily\n within the tear sheet. This function should only be applied directly\n before displaying."}
{"_id": "q_1204", "text": "Logic for checking if a strategy is intraday and processing it.\n\n Parameters\n ----------\n estimate: boolean or str, optional\n Approximate returns for intraday strategies.\n See description in tears.create_full_tear_sheet.\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in create_full_tear_sheet.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n\n Returns\n -------\n pd.DataFrame\n Daily net position values, adjusted for intraday movement."}
{"_id": "q_1205", "text": "Intraday strategies will often not hold positions at the day end.\n This attempts to find the point in the day that best represents\n the activity of the strategy on that day, and effectively resamples\n the end-of-day positions with the positions at this point of day.\n The point of day is found by detecting when our exposure in the\n market is at its maximum point. Note that this is an estimate.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in create_full_tear_sheet.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n\n Returns\n -------\n pd.DataFrame\n Daily net position values, resampled for intraday behavior."}
{"_id": "q_1206", "text": "Drop entries from rets so that the start and end dates of rets match those\n of benchmark_rets.\n\n Parameters\n ----------\n rets : pd.Series\n Daily returns of the strategy, noncumulative.\n - See pf.tears.create_full_tear_sheet for more details\n\n benchmark_rets : pd.Series\n Daily returns of the benchmark, noncumulative.\n\n Returns\n -------\n clipped_rets : pd.Series\n Daily noncumulative returns with index clipped to match that of\n benchmark returns."}
{"_id": "q_1207", "text": "Calls the currently registered 'returns_func'\n\n Parameters\n ----------\n symbol : object\n An identifier for the asset whose return\n series is desired.\n e.g. ticker symbol or database ID\n start : date, optional\n Earliest date to fetch data for.\n Defaults to earliest date available.\n end : date, optional\n Latest date to fetch data for.\n Defaults to latest date available.\n\n Returns\n -------\n pandas.Series\n Returned by the current 'returns_func'"}
{"_id": "q_1208", "text": "Decorator to set plotting context and axes style during function call."}
{"_id": "q_1209", "text": "Create pyfolio default axes style context.\n\n Under the hood, calls and returns seaborn.axes_style() with\n some custom settings. Usually you would use in a with-context.\n\n Parameters\n ----------\n style : str, optional\n Name of seaborn style.\n rc : dict, optional\n Config flags.\n\n Returns\n -------\n seaborn plotting context\n\n Example\n -------\n >>> with pyfolio.plotting.axes_style(style='whitegrid'):\n >>> pyfolio.create_full_tear_sheet(..., set_context=False)\n\n See also\n --------\n For more information, see seaborn.plotting_context()."}
{"_id": "q_1210", "text": "Plots a heatmap of returns by month.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to seaborn plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1211", "text": "Plots a distribution of monthly returns.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1212", "text": "Plots total amount of stocks with an active position, breaking out\n short and long into transparent filled regions.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n positions : pd.DataFrame, optional\n Daily net position values.\n - See full explanation in tears.create_full_tear_sheet.\n legend_loc : matplotlib.loc, optional\n The location of the legend on the plot.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1213", "text": "Plots cumulative returns highlighting top drawdown periods.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n top : int, optional\n Amount of top drawdown periods to plot (default 10).\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1214", "text": "Plots how far underwater returns are over time, or plots current\n drawdown vs. date.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1215", "text": "Plots cumulative rolling returns versus some benchmarks.\n\n Backtest returns are in green, and out-of-sample (live trading)\n returns are in red.\n\n Additionally, a non-parametric cone plot may be added to the\n out-of-sample returns region.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series, optional\n Daily noncumulative returns of the benchmark factor to which betas are\n computed. Usually a benchmark such as market returns.\n - This is in the same style as returns.\n live_start_date : datetime, optional\n The date when the strategy began live trading, after\n its backtest period. This date should be normalized.\n logy : bool, optional\n Whether to log-scale the y-axis.\n cone_std : float, or tuple, optional\n If float, the standard deviation to use for the cone plots.\n If tuple, tuple of standard deviation values to use for the cone plots\n - See timeseries.forecast_cone_bounds for more details.\n legend_loc : matplotlib.loc, optional\n The location of the legend on the plot.\n volatility_match : bool, optional\n Whether to normalize the volatility of the returns to those of the\n benchmark returns. This helps compare strategies with different\n volatilities. Requires passing of benchmark_rets.\n cone_function : function, optional\n Function to use when generating forecast probability cone.\n The function signature must follow the form:\n def cone(in_sample_returns (pd.Series),\n days_to_project_forward (int),\n cone_std= (float, or tuple),\n starting_value= (int, or float))\n See timeseries.forecast_cone_bootstrap for an example.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1216", "text": "Plots the rolling 6-month and 12-month beta versus date.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series\n Daily noncumulative returns of the benchmark factor to which betas are\n computed. Usually a benchmark such as market returns.\n - This is in the same style as returns.\n legend_loc : matplotlib.loc, optional\n The location of the legend on the plot.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1217", "text": "Plots the rolling Sharpe ratio versus date.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series, optional\n Daily noncumulative returns of the benchmark factor for\n which the benchmark rolling Sharpe is computed. Usually\n a benchmark such as market returns.\n - This is in the same style as returns.\n rolling_window : int, optional\n The days window over which to compute the sharpe ratio.\n legend_loc : matplotlib.loc, optional\n The location of the legend on the plot.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1218", "text": "Plots the sector exposures of the portfolio over time.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n sector_alloc : pd.DataFrame\n Portfolio allocation of positions. See pos.get_sector_alloc.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1219", "text": "Plots equity curves at different per-dollar slippage assumptions.\n\n Parameters\n ----------\n returns : pd.Series\n Timeseries of portfolio returns to be adjusted for various\n degrees of slippage.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in tears.create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet.\n slippage_params: tuple\n Slippage parameters to apply to the return time series (in\n basis points).\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to seaborn plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1220", "text": "Plots trading volume per day vs. date.\n\n Also displays all-time daily average.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1221", "text": "Plots a histogram of transaction times, binning the times into\n buckets of a given duration.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet.\n bin_minutes : float, optional\n Sizes of the bins in minutes, defaults to 5 minutes.\n tz : str, optional\n Time zone to plot against. Note that if the specified\n zone does not apply daylight savings, the distribution\n may be partially offset.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1222", "text": "Prints information about the worst drawdown periods.\n\n Prints peak dates, valley dates, recovery dates, and net\n drawdowns.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n top : int, optional\n Amount of top drawdown periods to plot (default 5)."}
{"_id": "q_1223", "text": "Plots timespans and directions of a sample of round trip trades.\n\n Parameters\n ----------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip trade.\n - See full explanation in round_trips.extract_round_trips\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1224", "text": "Prints the share of total PnL contributed by each\n traded name.\n\n Parameters\n ----------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip trade.\n - See full explanation in round_trips.extract_round_trips\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."}
{"_id": "q_1225", "text": "Determines the Sortino ratio of a strategy.\n\n Parameters\n ----------\n returns : pd.Series or pd.DataFrame\n Daily returns of the strategy, noncumulative.\n - See full explanation in :func:`~pyfolio.timeseries.cum_returns`.\n required_return: float / series\n minimum acceptable return\n period : str, optional\n Defines the periodicity of the 'returns' data for purposes of\n annualizing. Can be 'monthly', 'weekly', or 'daily'.\n - Defaults to 'daily'.\n\n Returns\n -------\n depends on input type\n series ==> float\n DataFrame ==> np.array\n\n Annualized Sortino ratio."}
{"_id": "q_1226", "text": "Determines the Sharpe ratio of a strategy.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in :func:`~pyfolio.timeseries.cum_returns`.\n risk_free : int, float\n Constant risk-free return throughout the period.\n period : str, optional\n Defines the periodicity of the 'returns' data for purposes of\n annualizing. Can be 'monthly', 'weekly', or 'daily'.\n - Defaults to 'daily'.\n\n Returns\n -------\n float\n Sharpe ratio.\n np.nan\n If insufficient length of returns or if adjusted returns are 0.\n\n Note\n -----\n See https://en.wikipedia.org/wiki/Sharpe_ratio for more details."}
{"_id": "q_1227", "text": "Determines the rolling beta of a strategy.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series or pd.DataFrame\n Daily noncumulative returns of the benchmark factor to which betas are\n computed. Usually a benchmark such as market returns.\n - If DataFrame is passed, computes rolling beta for each column.\n - This is in the same style as returns.\n rolling_window : int, optional\n The size of the rolling window, in days, over which to compute\n beta (default 6 months).\n\n Returns\n -------\n pd.Series\n Rolling beta.\n\n Note\n -----\n See https://en.wikipedia.org/wiki/Beta_(finance) for more details."}
{"_id": "q_1228", "text": "Calculates the gross leverage of a strategy.\n\n Parameters\n ----------\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in tears.create_full_tear_sheet.\n\n Returns\n -------\n pd.Series\n Gross leverage."}
{"_id": "q_1229", "text": "Calculates various performance metrics of a strategy, for use in\n plotting.show_perf_stats.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series, optional\n Daily noncumulative returns of the benchmark factor to which betas are\n computed. Usually a benchmark such as market returns.\n - This is in the same style as returns.\n - If None, do not compute alpha, beta, and information ratio.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in tears.create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet.\n turnover_denom : str\n Either AGB or portfolio_value, default AGB.\n - See full explanation in txn.get_turnover.\n\n Returns\n -------\n pd.Series\n Performance metrics."}
{"_id": "q_1230", "text": "Calculate various summary statistics of data.\n\n Parameters\n ----------\n x : numpy.ndarray or pandas.Series\n Array to compute summary statistics for.\n\n Returns\n -------\n pandas.Series\n Series containing mean, median, std, as well as 5, 25, 75 and\n 95 percentiles of passed in values."}
{"_id": "q_1231", "text": "Finds top drawdowns, sorted by drawdown amount.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n top : int, optional\n The amount of top drawdowns to find (default 10).\n\n Returns\n -------\n drawdowns : list\n List of drawdown peaks, valleys, and recoveries. See get_max_drawdown."}
{"_id": "q_1232", "text": "Generate alternate paths using available values from in-sample returns.\n\n Parameters\n ----------\n is_returns : pandas.core.frame.DataFrame\n Non-cumulative in-sample returns.\n num_days : int\n Number of days to project the probability cone forward.\n starting_value : int or float\n Starting value of the out of sample period.\n num_samples : int\n Number of samples to draw from the in-sample daily returns.\n Each sample will be an array with length num_days.\n A higher number of samples will generate a more accurate\n bootstrap cone.\n random_seed : int\n Seed for the pseudorandom number generator used by the pandas\n sample method.\n\n Returns\n -------\n samples : numpy.ndarray"}
{"_id": "q_1233", "text": "Generate the upper and lower bounds of an n standard deviation\n cone of forecasted cumulative returns.\n\n Parameters\n ----------\n samples : numpy.ndarray\n Alternative paths, or series of possible outcomes.\n cone_std : list of int/float\n Number of standard deviations to use in the boundaries of\n the cone. If multiple values are passed, cone bounds will\n be generated for each value.\n\n Returns\n -------\n samples : pandas.core.frame.DataFrame"}
{"_id": "q_1234", "text": "Generate plot for stochastic volatility model.\n\n Parameters\n ----------\n data : pandas.Series\n Returns to model.\n trace : pymc3.sampling.BaseTrace object, optional\n trace as returned by model_stoch_vol\n If not passed, sample from model.\n ax : matplotlib.axes object, optional\n Plot into axes object\n\n Returns\n -------\n ax object\n\n See Also\n --------\n model_stoch_vol : run stochastic volatility model"}
{"_id": "q_1235", "text": "Compute 5, 25, 75 and 95 percentiles of cumulative returns, used\n for the Bayesian cone.\n\n Parameters\n ----------\n preds : numpy.array\n Multiple (simulated) cumulative returns.\n starting_value : int (optional)\n Have cumulative returns start around this value.\n Default = 1.\n\n Returns\n -------\n dict of percentiles over time\n Dictionary mapping percentiles (5, 25, 75, 95) to a\n timeseries."}
{"_id": "q_1236", "text": "Compute Bayesian consistency score.\n\n Parameters\n ----------\n returns_test : pd.Series\n Observed cumulative returns.\n preds : numpy.array\n Multiple (simulated) cumulative returns.\n\n Returns\n -------\n Consistency score\n Score from 100 (returns_test perfectly on the median line of the\n Bayesian cone spanned by preds) to 0 (returns_test completely\n outside of Bayesian cone.)"}
{"_id": "q_1237", "text": "Generate cumulative returns plot with Bayesian cone.\n\n Parameters\n ----------\n returns_train : pd.Series\n Timeseries of simple returns\n returns_test : pd.Series\n Out-of-sample returns. Datetimes in returns_test will be added to\n returns_train as missing values and predictions will be generated\n for them.\n ppc : np.array\n Posterior predictive samples of shape samples x\n len(returns_test).\n plot_train_len : int (optional)\n How many data points to plot of returns_train. Useful to zoom in on\n the prediction if there is a long backtest period.\n ax : matplotlib.Axis (optional)\n Axes upon which to plot.\n\n Returns\n -------\n score : float\n Consistency score (see compute_consistency_score)\n trace : pymc3.sampling.BaseTrace\n A PyMC3 trace object that contains samples for each parameter\n of the posterior."}
{"_id": "q_1238", "text": "Defer the message.\n\n This message will remain in the queue but must be received\n specifically by its sequence number in order to be processed.\n\n :raises: ~azure.servicebus.common.errors.MessageAlreadySettled if the message has been settled.\n :raises: ~azure.servicebus.common.errors.MessageLockExpired if message lock has already expired.\n :raises: ~azure.servicebus.common.errors.SessionLockExpired if session lock has already expired.\n :raises: ~azure.servicebus.common.errors.MessageSettleFailed if message settle operation fails."}
{"_id": "q_1239", "text": "Guess Python Autorest options based on the spec path.\n\n Expected path:\n specification/compute/resource-manager/readme.md"}
{"_id": "q_1240", "text": "Deletes the managed application definition.\n\n :param application_definition_id: The fully qualified ID of the\n managed application definition, including the managed application name\n and the managed application definition resource type. Use the format,\n /subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.Solutions/applicationDefinitions/{applicationDefinition-name}\n :type application_definition_id: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse<None> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises:\n :class:`ErrorResponseException<azure.mgmt.resource.managedapplications.models.ErrorResponseException>`"}
{"_id": "q_1241", "text": "Creates a new managed application definition.\n\n :param application_definition_id: The fully qualified ID of the\n managed application definition, including the managed application name\n and the managed application definition resource type. Use the format,\n /subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.Solutions/applicationDefinitions/{applicationDefinition-name}\n :type application_definition_id: str\n :param parameters: Parameters supplied to the create or update a\n managed application definition.\n :type parameters:\n ~azure.mgmt.resource.managedapplications.models.ApplicationDefinition\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns ApplicationDefinition\n or ClientRawResponse<ApplicationDefinition> if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.resource.managedapplications.models.ApplicationDefinition]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[~azure.mgmt.resource.managedapplications.models.ApplicationDefinition]]\n :raises:\n :class:`ErrorResponseException<azure.mgmt.resource.managedapplications.models.ErrorResponseException>`"}
{"_id": "q_1242", "text": "Return the target uri for the request."}
{"_id": "q_1243", "text": "Sends request to cloud service server and return the response."}
{"_id": "q_1244", "text": "Executes script actions on the specified HDInsight cluster.\n\n :param resource_group_name: The name of the resource group.\n :type resource_group_name: str\n :param cluster_name: The name of the cluster.\n :type cluster_name: str\n :param persist_on_success: Gets or sets if the scripts needs to be\n persisted.\n :type persist_on_success: bool\n :param script_actions: The list of run time script actions.\n :type script_actions:\n list[~azure.mgmt.hdinsight.models.RuntimeScriptAction]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse<None> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises:\n :class:`ErrorResponseException<azure.mgmt.hdinsight.models.ErrorResponseException>`"}
{"_id": "q_1245", "text": "Check the availability of a Front Door resource name.\n\n :param name: The resource name to validate.\n :type name: str\n :param type: The type of the resource whose name is to be validated.\n Possible values include: 'Microsoft.Network/frontDoors',\n 'Microsoft.Network/frontDoors/frontendEndpoints'\n :type type: str or ~azure.mgmt.frontdoor.models.ResourceType\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: CheckNameAvailabilityOutput or ClientRawResponse if raw=true\n :rtype: ~azure.mgmt.frontdoor.models.CheckNameAvailabilityOutput or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`ErrorResponseException<azure.mgmt.frontdoor.models.ErrorResponseException>`"}
{"_id": "q_1246", "text": "Extracts the host authority from the given URI."}
{"_id": "q_1247", "text": "Return Credentials and default SubscriptionID of current loaded profile of the CLI.\n\n Credentials will be the \"az login\" command:\n https://docs.microsoft.com/cli/azure/authenticate-azure-cli\n\n Default subscription ID is either the only one you have, or you can define it:\n https://docs.microsoft.com/cli/azure/manage-azure-subscriptions-azure-cli\n\n .. versionadded:: 1.1.6\n\n :param str resource: The alternative resource for credentials if not ARM (GraphRBac, etc.)\n :param bool with_tenant: If True, return a three-tuple with last as tenant ID\n :return: tuple of Credentials and SubscriptionID (and tenant ID if with_tenant)\n :rtype: tuple"}
{"_id": "q_1248", "text": "Opens the request.\n\n method:\n the request VERB 'GET', 'POST', etc.\n url:\n the url to connect"}
{"_id": "q_1249", "text": "Sets up the timeout for the request."}
{"_id": "q_1250", "text": "Sets the request header."}
{"_id": "q_1251", "text": "Sends the request body."}
{"_id": "q_1252", "text": "Gets status text of response."}
{"_id": "q_1253", "text": "Gets response body as a SAFEARRAY and converts the SAFEARRAY to str."}
{"_id": "q_1254", "text": "Sets client certificate for the request."}
{"_id": "q_1255", "text": "Connects to host and sends the request."}
{"_id": "q_1256", "text": "Sends the headers of request."}
{"_id": "q_1257", "text": "Sends request body."}
{"_id": "q_1258", "text": "Simplifies an ID to be more human-friendly"}
{"_id": "q_1259", "text": "converts a Python name into a serializable name"}
{"_id": "q_1260", "text": "Verify whether two faces belong to a same person. Compares a face Id\n with a Person Id.\n\n :param face_id: FaceId of the face, comes from Face - Detect\n :type face_id: str\n :param person_id: Specify a certain person in a person group or a\n large person group. personId is created in PersonGroup Person - Create\n or LargePersonGroup Person - Create.\n :type person_id: str\n :param person_group_id: Using existing personGroupId and personId for\n fast loading a specified person. personGroupId is created in\n PersonGroup - Create. Parameter personGroupId and largePersonGroupId\n should not be provided at the same time.\n :type person_group_id: str\n :param large_person_group_id: Using existing largePersonGroupId and\n personId for fast loading a specified person. largePersonGroupId is\n created in LargePersonGroup - Create. Parameter personGroupId and\n largePersonGroupId should not be provided at the same time.\n :type large_person_group_id: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: VerifyResult or ClientRawResponse if raw=true\n :rtype: ~azure.cognitiveservices.vision.face.models.VerifyResult or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`APIErrorException<azure.cognitiveservices.vision.face.models.APIErrorException>`"}
{"_id": "q_1261", "text": "Adds a job to the specified account.\n\n The Batch service supports two ways to control the work done as part of\n a job. In the first approach, the user specifies a Job Manager task.\n The Batch service launches this task when it is ready to start the job.\n The Job Manager task controls all other tasks that run under this job,\n by using the Task APIs. In the second approach, the user directly\n controls the execution of tasks under an active job, by using the Task\n APIs. Also note: when naming jobs, avoid including sensitive\n information such as user names or secret project names. This\n information may appear in telemetry logs accessible to Microsoft\n Support engineers.\n\n :param job: The job to be added.\n :type job: ~azure.batch.models.JobAddParameter\n :param job_add_options: Additional parameters for the operation\n :type job_add_options: ~azure.batch.models.JobAddOptions\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`BatchErrorException<azure.batch.models.BatchErrorException>`"}
{"_id": "q_1262", "text": "Descends through a hierarchy of nodes returning the list of children\n at the innermost level. Only returns children who share a common parent,\n not cousins."}
{"_id": "q_1263", "text": "Recursively searches from the parent to the child,\n gathering all the applicable namespaces along the way"}
{"_id": "q_1264", "text": "Converts xml response to service bus namespace\n\n The xml format for namespace:\n<entry>\n<id>uuid:00000000-0000-0000-0000-000000000000;id=0000000</id>\n<title type=\"text\">myunittests</title>\n<updated>2012-08-22T16:48:10Z</updated>\n<content type=\"application/xml\">\n <NamespaceDescription\n xmlns=\"http://schemas.microsoft.com/netservices/2010/10/servicebus/connect\"\n xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\">\n <Name>myunittests</Name>\n <Region>West US</Region>\n <DefaultKey>0000000000000000000000000000000000000000000=</DefaultKey>\n <Status>Active</Status>\n <CreatedAt>2012-08-22T16:48:10.217Z</CreatedAt>\n <AcsManagementEndpoint>https://myunittests-sb.accesscontrol.windows.net/</AcsManagementEndpoint>\n <ServiceBusEndpoint>https://myunittests.servicebus.windows.net/</ServiceBusEndpoint>\n <ConnectionString>Endpoint=sb://myunittests.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=0000000000000000000000000000000000000000000=</ConnectionString>\n <SubscriptionId>00000000000000000000000000000000</SubscriptionId>\n <Enabled>true</Enabled>\n </NamespaceDescription>\n</content>\n</entry>"}
{"_id": "q_1265", "text": "Converts xml response to service bus region\n\n The xml format for region:\n<entry>\n<id>uuid:157c311f-081f-4b4a-a0ba-a8f990ffd2a3;id=1756759</id>\n<title type=\"text\"></title>\n<updated>2013-04-10T18:25:29Z</updated>\n<content type=\"application/xml\">\n <RegionCodeDescription\n xmlns=\"http://schemas.microsoft.com/netservices/2010/10/servicebus/connect\"\n xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\">\n <Code>East Asia</Code>\n <FullName>East Asia</FullName>\n </RegionCodeDescription>\n</content>\n</entry>"}
{"_id": "q_1266", "text": "Replaces the runbook draft content.\n\n :param resource_group_name: Name of an Azure Resource group.\n :type resource_group_name: str\n :param automation_account_name: The name of the automation account.\n :type automation_account_name: str\n :param runbook_name: The runbook name.\n :type runbook_name: str\n :param runbook_content: The runbook draft content.\n :type runbook_content: Generator\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns object or\n ClientRawResponse<object> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[Generator]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[Generator]]\n :raises:\n :class:`ErrorResponseException<azure.mgmt.automation.models.ErrorResponseException>`"}
{"_id": "q_1267", "text": "Asynchronous operation to modify a knowledgebase.\n\n :param kb_id: Knowledgebase id.\n :type kb_id: str\n :param update_kb: Post body of the request.\n :type update_kb:\n ~azure.cognitiveservices.knowledge.qnamaker.models.UpdateKbOperationDTO\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: Operation or ClientRawResponse if raw=true\n :rtype: ~azure.cognitiveservices.knowledge.qnamaker.models.Operation\n or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`ErrorResponseException<azure.cognitiveservices.knowledge.qnamaker.models.ErrorResponseException>`"}
{"_id": "q_1268", "text": "Gets a collection that contains the object IDs of the groups of which\n the user is a member.\n\n :param object_id: The object ID of the user for which to get group\n membership.\n :type object_id: str\n :param security_enabled_only: If true, only membership in\n security-enabled groups should be checked. Otherwise, membership in\n all groups should be checked.\n :type security_enabled_only: bool\n :param additional_properties: Unmatched properties from the message\n are deserialized this collection\n :type additional_properties: dict[str, object]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: An iterator like instance of str\n :rtype: ~azure.graphrbac.models.StrPaged[str]\n :raises:\n :class:`GraphErrorException<azure.graphrbac.models.GraphErrorException>`"}
{"_id": "q_1269", "text": "Will clone the given PR branch and build the package with the given name."}
{"_id": "q_1270", "text": "Import data into Redis cache.\n\n :param resource_group_name: The name of the resource group.\n :type resource_group_name: str\n :param name: The name of the Redis cache.\n :type name: str\n :param files: files to import.\n :type files: list[str]\n :param format: File format.\n :type format: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse<None> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1271", "text": "Replace alterations data.\n\n :param word_alterations: Collection of word alterations.\n :type word_alterations:\n list[~azure.cognitiveservices.knowledge.qnamaker.models.AlterationsDTO]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`ErrorResponseException<azure.cognitiveservices.knowledge.qnamaker.models.ErrorResponseException>`"}
{"_id": "q_1272", "text": "Returns system properties for the specified storage account.\n\n service_name:\n Name of the storage service account."}
{"_id": "q_1273", "text": "Returns the primary and secondary access keys for the specified\n storage account.\n\n service_name:\n Name of the storage service account."}
{"_id": "q_1274", "text": "Deletes the specified storage account from Windows Azure.\n\n service_name:\n Name of the storage service account."}
{"_id": "q_1275", "text": "Checks to see if the specified storage account name is available, or\n if it has already been taken.\n\n service_name:\n Name of the storage service account."}
{"_id": "q_1276", "text": "Retrieves system properties for the specified hosted service. These\n properties include the service name and service type; the name of the\n affinity group to which the service belongs, or its location if it is\n not part of an affinity group; and optionally, information on the\n service's deployments.\n\n service_name:\n Name of the hosted service.\n embed_detail:\n When True, the management service returns properties for all\n deployments of the service, as well as for the service itself."}
{"_id": "q_1277", "text": "Creates a new hosted service in Windows Azure.\n\n service_name:\n A name for the hosted service that is unique within Windows Azure.\n This name is the DNS prefix name and can be used to access the\n hosted service.\n label:\n A name for the hosted service. The name can be up to 100 characters\n in length. The name can be used to identify the storage account for\n your tracking purposes.\n description:\n A description for the hosted service. The description can be up to\n 1024 characters in length.\n location:\n The location where the hosted service will be created. You can\n specify either a location or affinity_group, but not both.\n affinity_group:\n The name of an existing affinity group associated with this\n subscription. This name is a GUID and can be retrieved by examining\n the name element of the response body returned by\n list_affinity_groups. You can specify either a location or\n affinity_group, but not both.\n extended_properties:\n Dictionary containing name/value pairs of storage account\n properties. You can have a maximum of 50 extended property\n name/value pairs. The maximum length of the Name element is 64\n characters, only alphanumeric characters and underscores are valid\n in the Name, and the name must start with a letter. The value has\n a maximum length of 255 characters."}
{"_id": "q_1278", "text": "Deletes the specified hosted service from Windows Azure.\n\n service_name:\n Name of the hosted service.\n complete:\n True if all OS/data disks and the source blobs for the disks should\n also be deleted from storage."}
{"_id": "q_1279", "text": "Uploads a new service package and creates a new deployment on staging\n or production.\n\n service_name:\n Name of the hosted service.\n deployment_slot:\n The environment to which the hosted service is deployed. Valid\n values are: staging, production\n name:\n The name for the deployment. The deployment name must be unique\n among other deployments for the hosted service.\n package_url:\n A URL that refers to the location of the service package in the\n Blob service. The service package can be located either in a\n storage account beneath the same subscription or a Shared Access\n Signature (SAS) URI from any storage account.\n label:\n A name for the hosted service. The name can be up to 100 characters\n in length. It is recommended that the label be unique within the\n subscription. The name can be used to identify the hosted service\n for your tracking purposes.\n configuration:\n The base-64 encoded service configuration file for the deployment.\n start_deployment:\n Indicates whether to start the deployment immediately after it is\n created. If false, the service model is still deployed to the\n virtual machines but the code is not run immediately. Instead, the\n service is Suspended until you call Update Deployment Status and\n set the status to Running, at which time the service will be\n started. A deployed service still incurs charges, even if it is\n suspended.\n treat_warnings_as_error:\n Indicates whether to treat package validation warnings as errors.\n If set to true, the Created Deployment operation fails if there\n are validation warnings on the service package.\n extended_properties:\n Dictionary containing name/value pairs of storage account\n properties. You can have a maximum of 50 extended property\n name/value pairs. The maximum length of the Name element is 64\n characters, only alphanumeric characters and underscores are valid\n in the Name, and the name must start with a letter. The value has\n a maximum length of 255 characters."}
{"_id": "q_1280", "text": "Deletes the specified deployment.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment."}
{"_id": "q_1281", "text": "Initiates a virtual IP swap between the staging and production\n deployment environments for a service. If the service is currently\n running in the staging environment, it will be swapped to the\n production environment. If it is running in the production\n environment, it will be swapped to staging.\n\n service_name:\n Name of the hosted service.\n production:\n The name of the production deployment.\n source_deployment:\n The name of the source deployment."}
{"_id": "q_1282", "text": "Initiates a change to the deployment configuration.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment.\n configuration:\n The base-64 encoded service configuration file for the deployment.\n treat_warnings_as_error:\n Indicates whether to treat package validation warnings as errors.\n If set to true, the Created Deployment operation fails if there\n are validation warnings on the service package.\n mode:\n If set to Manual, WalkUpgradeDomain must be called to apply the\n update. If set to Auto, the Windows Azure platform will\n automatically apply the update to each upgrade domain for the\n service. Possible values are: Auto, Manual\n extended_properties:\n Dictionary containing name/value pairs of storage account\n properties. You can have a maximum of 50 extended property\n name/value pairs. The maximum length of the Name element is 64\n characters, only alphanumeric characters and underscores are valid\n in the Name, and the name must start with a letter. The value has\n a maximum length of 255 characters."}
{"_id": "q_1283", "text": "Initiates a change in deployment status.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment.\n status:\n The change to initiate to the deployment status. Possible values\n include:\n Running, Suspended"}
{"_id": "q_1284", "text": "Specifies the next upgrade domain to be walked during manual in-place\n upgrade or configuration change.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment.\n upgrade_domain:\n An integer value that identifies the upgrade domain to walk.\n Upgrade domains are identified with a zero-based index: the first\n upgrade domain has an ID of 0, the second has an ID of 1, and so on."}
{"_id": "q_1285", "text": "Reinstalls the operating system on instances of web roles or worker\n roles and initializes the storage resources that are used by them. If\n you do not want to initialize storage resources, you can use\n reimage_role_instance.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment.\n role_instance_names:\n List of role instance names."}
{"_id": "q_1286", "text": "Deletes a service certificate from the certificate store of a hosted\n service.\n\n service_name:\n Name of the hosted service.\n thumbalgorithm:\n The algorithm for the certificate's thumbprint.\n thumbprint:\n The hexadecimal representation of the thumbprint."}
{"_id": "q_1287", "text": "The Add Management Certificate operation adds a certificate to the\n list of management certificates. Management certificates, which are\n also known as subscription certificates, authenticate clients\n attempting to connect to resources associated with your Windows Azure\n subscription.\n\n public_key:\n A base64 representation of the management certificate public key.\n thumbprint:\n The thumb print that uniquely identifies the management\n certificate.\n data:\n The certificate's raw data in base-64 encoded .cer format."}
{"_id": "q_1288", "text": "The Delete Management Certificate operation deletes a certificate from\n the list of management certificates. Management certificates, which\n are also known as subscription certificates, authenticate clients\n attempting to connect to resources associated with your Windows Azure\n subscription.\n\n thumbprint:\n The thumb print that uniquely identifies the management\n certificate."}
{"_id": "q_1289", "text": "Returns the system properties associated with the specified affinity\n group.\n\n affinity_group_name:\n The name of the affinity group."}
{"_id": "q_1290", "text": "Deletes an affinity group in the specified subscription.\n\n affinity_group_name:\n The name of the affinity group."}
{"_id": "q_1291", "text": "List subscription operations.\n\n start_time: Required. An ISO8601 date.\n end_time: Required. An ISO8601 date.\n object_id_filter: Optional. Returns subscription operations only for the specified object type and object ID\n operation_result_filter: Optional. Returns subscription operations only for the specified result status, either Succeeded, Failed, or InProgress.\n continuation_token: Optional.\n More information at:\n https://msdn.microsoft.com/en-us/library/azure/gg715318.aspx"}
{"_id": "q_1292", "text": "Deletes a reserved IP address from the specified subscription.\n\n name:\n Required. Name of the reserved IP address."}
{"_id": "q_1293", "text": "Disassociate an existing reservedIP from the given deployment.\n\n name:\n Required. Name of the reserved IP address.\n\n service_name:\n Required. Name of the hosted service.\n\n deployment_name:\n Required. Name of the deployment.\n\n virtual_ip_name:\n Optional. Name of the VirtualIP in case of multi Vip tenant.\n If this value is not specified default virtualIP is used\n for this operation."}
{"_id": "q_1294", "text": "Retrieves information about the specified reserved IP address.\n\n name:\n Required. Name of the reserved IP address."}
{"_id": "q_1295", "text": "Retrieves the specified virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role."}
{"_id": "q_1296", "text": "Provisions a virtual machine based on the supplied configuration.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name for the deployment. The deployment name must be unique\n among other deployments for the hosted service.\n deployment_slot:\n The environment to which the hosted service is deployed. Valid\n values are: staging, production\n label:\n Specifies an identifier for the deployment. The label can be up to\n 100 characters long. The label can be used for tracking purposes.\n role_name:\n The name of the role.\n system_config:\n Contains the metadata required to provision a virtual machine from\n a Windows or Linux OS image. Use an instance of\n WindowsConfigurationSet or LinuxConfigurationSet.\n os_virtual_hard_disk:\n Contains the parameters Windows Azure uses to create the operating\n system disk for the virtual machine. If you are creating a Virtual\n Machine by using a VM Image, this parameter is not used.\n network_config:\n Encapsulates the metadata required to create the virtual network\n configuration for a virtual machine. If you do not include a\n network configuration set you will not be able to access the VM\n through VIPs over the internet. If your virtual machine belongs to\n a virtual network you can not specify which subnet address space\n it resides under. Use an instance of ConfigurationSet.\n availability_set_name:\n Specifies the name of an availability set to which to add the\n virtual machine. This value controls the virtual machine\n allocation in the Windows Azure environment. Virtual machines\n specified in the same availability set are allocated to different\n nodes to maximize availability.\n data_virtual_hard_disks:\n Contains the parameters Windows Azure uses to create a data disk\n for a virtual machine.\n role_size:\n The size of the virtual machine to allocate. The default value is\n Small. Possible values are: ExtraSmall,Small,Medium,Large,\n ExtraLarge,A5,A6,A7,A8,A9,Basic_A0,Basic_A1,Basic_A2,Basic_A3,\n Basic_A4,Standard_D1,Standard_D2,Standard_D3,Standard_D4,\n Standard_D11,Standard_D12,Standard_D13,Standard_D14,Standard_G1,\n Standard_G2,Standard_G3,Standard_G4,Standard_G5. The specified\n value must be compatible with the disk selected in the\n OSVirtualHardDisk values.\n role_type:\n The type of the role for the virtual machine. The only supported\n value is PersistentVMRole.\n virtual_network_name:\n Specifies the name of an existing virtual network to which the\n deployment will belong.\n resource_extension_references:\n Optional. Contains a collection of resource extensions that are to\n be installed on the Virtual Machine. This element is used if\n provision_guest_agent is set to True. Use an iterable of instances\n of ResourceExtensionReference.\n provision_guest_agent:\n Optional. Indicates whether the VM Agent is installed on the\n Virtual Machine. To run a resource extension in a Virtual Machine,\n this service must be installed.\n vm_image_name:\n Optional. Specifies the name of the VM Image that is to be used to\n create the Virtual Machine. If this is specified, the\n system_config and network_config parameters are not used.\n media_location:\n Optional. Required if the Virtual Machine is being created from a\n published VM Image. Specifies the location of the VHD file that is\n created when VMImageName specifies a published VM Image.\n dns_servers:\n Optional. List of DNS servers (use DnsServer class) to associate\n with the Virtual Machine.\n reserved_ip_name:\n Optional. Specifies the name of a reserved IP address that is to be\n assigned to the deployment. You must run create_reserved_ip_address\n before you can assign the address to the deployment using this\n element."}
{"_id": "q_1297", "text": "Adds a virtual machine to an existing deployment.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n system_config:\n Contains the metadata required to provision a virtual machine from\n a Windows or Linux OS image. Use an instance of\n WindowsConfigurationSet or LinuxConfigurationSet.\n os_virtual_hard_disk:\n Contains the parameters Windows Azure uses to create the operating\n system disk for the virtual machine. If you are creating a Virtual\n Machine by using a VM Image, this parameter is not used.\n network_config:\n Encapsulates the metadata required to create the virtual network\n configuration for a virtual machine. If you do not include a\n network configuration set you will not be able to access the VM\n through VIPs over the internet. If your virtual machine belongs to\n a virtual network you can not specify which subnet address space\n it resides under.\n availability_set_name:\n Specifies the name of an availability set to which to add the\n virtual machine. This value controls the virtual machine allocation\n in the Windows Azure environment. Virtual machines specified in the\n same availability set are allocated to different nodes to maximize\n availability.\n data_virtual_hard_disks:\n Contains the parameters Windows Azure uses to create a data disk\n for a virtual machine.\n role_size:\n The size of the virtual machine to allocate. The default value is\n Small. Possible values are: ExtraSmall, Small, Medium, Large,\n ExtraLarge. The specified value must be compatible with the disk\n selected in the OSVirtualHardDisk values.\n role_type:\n The type of the role for the virtual machine. The only supported\n value is PersistentVMRole.\n resource_extension_references:\n Optional. Contains a collection of resource extensions that are to\n be installed on the Virtual Machine. This element is used if\n provision_guest_agent is set to True.\n provision_guest_agent:\n Optional. Indicates whether the VM Agent is installed on the\n Virtual Machine. To run a resource extension in a Virtual Machine,\n this service must be installed.\n vm_image_name:\n Optional. Specifies the name of the VM Image that is to be used to\n create the Virtual Machine. If this is specified, the\n system_config and network_config parameters are not used.\n media_location:\n Optional. Required if the Virtual Machine is being created from a\n published VM Image. Specifies the location of the VHD file that is\n created when VMImageName specifies a published VM Image."}
{"_id": "q_1298", "text": "Deletes the specified virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n complete:\n True if all OS/data disks and the source blobs for the disks should\n also be deleted from storage."}
{"_id": "q_1299", "text": "The Capture Role operation captures a virtual machine image to your\n image gallery. From the captured image, you can create additional\n customized virtual machines.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n post_capture_action:\n Specifies the action after capture operation completes. Possible\n values are: Delete, Reprovision.\n target_image_name:\n Specifies the image name of the captured virtual machine.\n target_image_label:\n Specifies the friendly name of the captured virtual machine.\n provisioning_configuration:\n Use an instance of WindowsConfigurationSet or LinuxConfigurationSet."}
{"_id": "q_1300", "text": "Starts the specified virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role."}
{"_id": "q_1301", "text": "Starts the specified virtual machines.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_names:\n The names of the roles, as an enumerable of strings."}
{"_id": "q_1302", "text": "Shuts down the specified virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n post_shutdown_action:\n Specifies how the Virtual Machine should be shut down. Values are:\n Stopped\n Shuts down the Virtual Machine but retains the compute\n resources. You will continue to be billed for the resources\n that the stopped machine uses.\n StoppedDeallocated\n Shuts down the Virtual Machine and releases the compute\n resources. You are not billed for the compute resources that\n this Virtual Machine uses. If a static Virtual Network IP\n address is assigned to the Virtual Machine, it is reserved."}
{"_id": "q_1303", "text": "Shuts down the specified virtual machines.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_names:\n The names of the roles, as an enumerable of strings.\n post_shutdown_action:\n Specifies how the Virtual Machine should be shut down. Values are:\n Stopped\n Shuts down the Virtual Machine but retains the compute\n resources. You will continue to be billed for the resources\n that the stopped machine uses.\n StoppedDeallocated\n Shuts down the Virtual Machine and releases the compute\n resources. You are not billed for the compute resources that\n this Virtual Machine uses. If a static Virtual Network IP\n address is assigned to the Virtual Machine, it is reserved."}
{"_id": "q_1304", "text": "Adds a DNS server definition to an existing deployment.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n dns_server_name:\n Specifies the name of the DNS server.\n address:\n Specifies the IP address of the DNS server."}
{"_id": "q_1305", "text": "Updates the ip address of a DNS server.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n dns_server_name:\n Specifies the name of the DNS server.\n address:\n Specifies the IP address of the DNS server."}
{"_id": "q_1306", "text": "Deletes a DNS server from a deployment.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n dns_server_name:\n Name of the DNS server that you want to delete."}
{"_id": "q_1307", "text": "Lists the versions of a resource extension that are available to add\n to a Virtual Machine.\n\n publisher_name:\n Name of the resource extension publisher.\n extension_name:\n Name of the resource extension."}
{"_id": "q_1308", "text": "Unreplicate a VM image from all regions. This operation\n is only for publishers. You have to be registered as an image publisher\n with Microsoft Azure to be able to call this.\n\n vm_image_name:\n Specifies the name of the VM Image that is to be used for\n unreplication. The VM Image Name should be the user VM Image,\n not the published name of the VM Image."}
{"_id": "q_1309", "text": "Share an already replicated OS image. This operation is only for\n publishers. You have to be registered as image publisher with Windows\n Azure to be able to call this.\n\n vm_image_name:\n The name of the virtual machine image to share\n permission:\n The sharing permission: public, msdn, or private"}
{"_id": "q_1310", "text": "Creates a VM Image in the image repository that is associated with the\n specified subscription using a specified set of virtual hard disks.\n\n vm_image:\n An instance of VMImage class.\n vm_image.name: Required. Specifies the name of the image.\n vm_image.label: Required. Specifies an identifier for the image.\n vm_image.description: Optional. Specifies the description of the image.\n vm_image.os_disk_configuration:\n Required. Specifies configuration information for the operating\n system disk that is associated with the image.\n vm_image.os_disk_configuration.host_caching:\n Optional. Specifies the caching behavior of the operating system disk.\n Possible values are: None, ReadOnly, ReadWrite\n vm_image.os_disk_configuration.os_state:\n Required. Specifies the state of the operating system in the image.\n Possible values are: Generalized, Specialized\n A Virtual Machine that is fully configured and running contains a\n Specialized operating system. A Virtual Machine on which the\n Sysprep command has been run with the generalize option contains a\n Generalized operating system.\n vm_image.os_disk_configuration.os:\n Required. Specifies the operating system type of the image.\n vm_image.os_disk_configuration.media_link:\n Required. Specifies the location of the blob in Windows Azure\n storage. The blob location belongs to a storage account in the\n subscription specified by the <subscription-id> value in the\n operation call.\n vm_image.data_disk_configurations:\n Optional. Specifies configuration information for the data disks\n that are associated with the image. A VM Image might not have data\n disks associated with it.\n vm_image.data_disk_configurations[].host_caching:\n Optional. Specifies the caching behavior of the data disk.\n Possible values are: None, ReadOnly, ReadWrite\n vm_image.data_disk_configurations[].lun:\n Optional if the lun for the disk is 0. Specifies the Logical Unit\n Number (LUN) for the data disk.\n vm_image.data_disk_configurations[].media_link:\n Required. Specifies the location of the blob in Windows Azure\n storage. The blob location belongs to a storage account in the\n subscription specified by the <subscription-id> value in the\n operation call.\n vm_image.data_disk_configurations[].logical_size_in_gb:\n Required. Specifies the size, in GB, of the data disk.\n vm_image.language: Optional. Specifies the language of the image.\n vm_image.image_family:\n Optional. Specifies a value that can be used to group VM Images.\n vm_image.recommended_vm_size:\n Optional. Specifies the size to use for the Virtual Machine that\n is created from the VM Image.\n vm_image.eula:\n Optional. Specifies the End User License Agreement that is\n associated with the image. The value for this element is a string,\n but it is recommended that the value be a URL that points to a EULA.\n vm_image.icon_uri:\n Optional. Specifies the URI to the icon that is displayed for the\n image in the Management Portal.\n vm_image.small_icon_uri:\n Optional. Specifies the URI to the small icon that is displayed for\n the image in the Management Portal.\n vm_image.privacy_uri:\n Optional. Specifies the URI that points to a document that contains\n the privacy policy related to the image.\n vm_image.published_date:\n Optional. Specifies the date when the image was added to the image\n repository.\n vm_image.show_in_gui:\n Optional. Indicates whether the VM Images should be listed in the\n portal."}
{"_id": "q_1311", "text": "Deletes the specified VM Image from the image repository that is\n associated with the specified subscription.\n\n vm_image_name:\n The name of the image.\n delete_vhd:\n Deletes the underlying vhd blob in Azure storage."}
{"_id": "q_1312", "text": "Adds an OS image that is currently stored in a storage account in your\n subscription to the image repository.\n\n label:\n Specifies the friendly name of the image.\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the image is located. The blob location must\n belong to a storage account in the subscription specified by the\n <subscription-id> value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n name:\n Specifies a name for the OS image that Windows Azure uses to\n identify the image when creating one or more virtual machines.\n os:\n The operating system type of the OS image. Possible values are:\n Linux, Windows"}
{"_id": "q_1313", "text": "Updates an OS image that is in your image repository.\n\n image_name:\n The name of the image to update.\n label:\n Specifies the friendly name of the image to be updated. You cannot\n use this operation to update images provided by the Windows Azure\n platform.\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the image is located. The blob location must\n belong to a storage account in the subscription specified by the\n <subscription-id> value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n name:\n Specifies a name for the OS image that Windows Azure uses to\n identify the image when creating one or more VM Roles.\n os:\n The operating system type of the OS image. Possible values are:\n Linux, Windows"}
{"_id": "q_1314", "text": "Updates metadata elements from a given OS image reference.\n\n image_name:\n The name of the image to update.\n os_image:\n An instance of OSImage class.\n os_image.label: Optional. Specifies an identifier for the image.\n os_image.description: Optional. Specifies the description of the image.\n os_image.language: Optional. Specifies the language of the image.\n os_image.image_family:\n Optional. Specifies a value that can be used to group VM Images.\n os_image.recommended_vm_size:\n Optional. Specifies the size to use for the Virtual Machine that\n is created from the VM Image.\n os_image.eula:\n Optional. Specifies the End User License Agreement that is\n associated with the image. The value for this element is a string,\n but it is recommended that the value be a URL that points to a EULA.\n os_image.icon_uri:\n Optional. Specifies the URI to the icon that is displayed for the\n image in the Management Portal.\n os_image.small_icon_uri:\n Optional. Specifies the URI to the small icon that is displayed for\n the image in the Management Portal.\n os_image.privacy_uri:\n Optional. Specifies the URI that points to a document that contains\n the privacy policy related to the image.\n os_image.published_date:\n Optional. Specifies the date when the image was added to the image\n repository.\n os_image.media_link:\n Required. Specifies the location of the blob in Windows Azure\n blob store where the media for the image is located. The blob\n location must belong to a storage account in the subscription\n specified by the <subscription-id> value in the operation call.\n Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n os_image.name:\n Specifies a name for the OS image that Windows Azure uses to\n identify the image when creating one or more VM Roles.\n os_image.os:\n The operating system type of the OS image. Possible values are:\n Linux, Windows"}
{"_id": "q_1315", "text": "Deletes the specified OS image from your image repository.\n\n image_name:\n The name of the image.\n delete_vhd:\n Deletes the underlying vhd blob in Azure storage."}
{"_id": "q_1316", "text": "Retrieves the specified data disk from a virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n lun:\n The Logical Unit Number (LUN) for the disk."}
{"_id": "q_1317", "text": "Adds a data disk to a virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n lun:\n Specifies the Logical Unit Number (LUN) for the disk. The LUN\n specifies the slot in which the data drive appears when mounted\n for usage by the virtual machine. Valid LUN values are 0 through 15.\n host_caching:\n Specifies the platform caching behavior of data disk blob for\n read/write efficiency. The default value is ReadOnly. Possible\n values are: None, ReadOnly, ReadWrite\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the disk is located. The blob location must\n belong to the storage account in the subscription specified by the\n <subscription-id> value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n disk_label:\n Specifies the description of the data disk. When you attach a disk,\n either by directly referencing a media using the MediaLink element\n or specifying the target disk size, you can use the DiskLabel\n element to customize the name property of the target data disk.\n disk_name:\n Specifies the name of the disk. Windows Azure uses the specified\n disk to create the data disk for the machine and populates this\n field with the disk name.\n logical_disk_size_in_gb:\n Specifies the size, in GB, of an empty disk to be attached to the\n role. The disk can be created as part of disk attach or create VM\n role call by specifying the value for this property. Windows Azure\n creates the empty disk based on size preference and attaches the\n newly created disk to the Role.\n source_media_link:\n Specifies the location of a blob in account storage which is\n mounted as a data disk when the virtual machine is created."}
{"_id": "q_1318", "text": "Updates the specified data disk attached to the specified virtual\n machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n lun:\n Specifies the Logical Unit Number (LUN) for the disk. The LUN\n specifies the slot in which the data drive appears when mounted\n for usage by the virtual machine. Valid LUN values are 0 through\n 15.\n host_caching:\n Specifies the platform caching behavior of data disk blob for\n read/write efficiency. The default value is ReadOnly. Possible\n values are: None, ReadOnly, ReadWrite\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the disk is located. The blob location must\n belong to the storage account in the subscription specified by\n the <subscription-id> value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n updated_lun:\n Specifies the Logical Unit Number (LUN) for the disk. The LUN\n specifies the slot in which the data drive appears when mounted\n for usage by the virtual machine. Valid LUN values are 0 through 15.\n disk_label:\n Specifies the description of the data disk. When you attach a disk,\n either by directly referencing a media using the MediaLink element\n or specifying the target disk size, you can use the DiskLabel\n element to customize the name property of the target data disk.\n disk_name:\n Specifies the name of the disk. Windows Azure uses the specified\n disk to create the data disk for the machine and populates this\n field with the disk name.\n logical_disk_size_in_gb:\n Specifies the size, in GB, of an empty disk to be attached to the\n role. The disk can be created as part of disk attach or create VM\n role call by specifying the value for this property. Windows Azure\n creates the empty disk based on size preference and attaches the\n newly created disk to the Role."}
{"_id": "q_1319", "text": "Removes the specified data disk from a virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n lun:\n The Logical Unit Number (LUN) for the disk.\n delete_vhd:\n Deletes the underlying vhd blob in Azure storage."}
{"_id": "q_1320", "text": "Adds a disk to the user image repository. The disk can be an OS disk\n or a data disk.\n\n has_operating_system:\n Deprecated.\n label:\n Specifies the description of the disk.\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the disk is located. The blob location must\n belong to the storage account in the current subscription specified\n by the <subscription-id> value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n name:\n Specifies a name for the disk. Windows Azure uses the name to\n identify the disk when creating virtual machines from the disk.\n os:\n The OS type of the disk. Possible values are: Linux, Windows"}
{"_id": "q_1321", "text": "Updates an existing disk in your image repository.\n\n disk_name:\n The name of the disk to update.\n has_operating_system:\n Deprecated.\n label:\n Specifies the description of the disk.\n media_link:\n Deprecated.\n name:\n Deprecated.\n os:\n Deprecated."}
{"_id": "q_1322", "text": "Deletes the specified data or operating system disk from your image\n repository.\n\n disk_name:\n The name of the disk to delete.\n delete_vhd:\n Deletes the underlying vhd blob in Azure storage."}
{"_id": "q_1323", "text": "This is a temporary patch pending a fix in uAMQP."}
{"_id": "q_1324", "text": "Receive a batch of messages at once.\n\n This approach is optimal if you wish to process multiple messages simultaneously. Note that the\n number of messages retrieved in a single batch will be dependent on\n whether `prefetch` was set for the receiver. This call will prioritize returning\n quickly over meeting a specified batch size, and so will return as soon as at least\n one message is received and there is a gap in incoming messages regardless\n of the specified batch size.\n\n :param max_batch_size: Maximum number of messages in the batch. Actual number\n returned will depend on prefetch size and incoming stream rate.\n :type max_batch_size: int\n :param timeout: The time to wait in seconds for the first message to arrive.\n If no messages arrive, and no timeout is specified, this call will not return\n until the connection is closed. If specified, and no messages arrive within the\n timeout period, an empty list will be returned.\n :rtype: list[~azure.servicebus.common.message.Message]\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START fetch_next_messages]\n :end-before: [END fetch_next_messages]\n :language: python\n :dedent: 4\n :caption: Get the messages in batch from the receiver"}
{"_id": "q_1325", "text": "Renew the session lock.\n\n This operation must be performed periodically in order to retain a lock on the\n session to continue message processing.\n Once the lock is lost the connection will be closed. This operation can\n also be performed as a threaded background task by registering the session\n with an `azure.servicebus.AutoLockRenew` instance.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START renew_lock]\n :end-before: [END renew_lock]\n :language: python\n :dedent: 4\n :caption: Renew the session lock before it expires"}
{"_id": "q_1326", "text": "Converts SinglePlacementGroup property to false for an existing virtual\n machine scale set.\n\n :param resource_group_name: The name of the resource group.\n :type resource_group_name: str\n :param vm_scale_set_name: The name of the virtual machine scale set to\n create or update.\n :type vm_scale_set_name: str\n :param active_placement_group_id: Id of the placement group in which\n you want future virtual machine instances to be placed. To query\n placement group Id, please use Virtual Machine Scale Set VMs - Get\n API. If not provided, the platform will choose one with maximum number\n of virtual machine instances.\n :type active_placement_group_id: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1327", "text": "Imports an externally created key, stores it, and returns key\n parameters and attributes to the client.\n\n The import key operation may be used to import any key type into an\n Azure Key Vault. If the named key already exists, Azure Key Vault\n creates a new version of the key. This operation requires the\n keys/import permission.\n\n :param vault_base_url: The vault name, for example\n https://myvault.vault.azure.net.\n :type vault_base_url: str\n :param key_name: Name for the imported key.\n :type key_name: str\n :param key: The Json web key\n :type key: ~azure.keyvault.v2016_10_01.models.JsonWebKey\n :param hsm: Whether to import as a hardware key (HSM) or software key.\n :type hsm: bool\n :param key_attributes: The key management attributes.\n :type key_attributes: ~azure.keyvault.v2016_10_01.models.KeyAttributes\n :param tags: Application specific metadata in the form of key-value\n pairs.\n :type tags: dict[str, str]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: KeyBundle or ClientRawResponse if raw=true\n :rtype: ~azure.keyvault.v2016_10_01.models.KeyBundle or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`KeyVaultErrorException<azure.keyvault.v2016_10_01.models.KeyVaultErrorException>`"}
{"_id": "q_1328", "text": "The update key operation changes specified attributes of a stored key\n and can be applied to any key type and key version stored in Azure Key\n Vault.\n\n In order to perform this operation, the key must already exist in the\n Key Vault. Note: The cryptographic material of a key itself cannot be\n changed. This operation requires the keys/update permission.\n\n :param vault_base_url: The vault name, for example\n https://myvault.vault.azure.net.\n :type vault_base_url: str\n :param key_name: The name of key to update.\n :type key_name: str\n :param key_version: The version of the key to update.\n :type key_version: str\n :param key_ops: Json web key operations. For more information on\n possible key operations, see JsonWebKeyOperation.\n :type key_ops: list[str or\n ~azure.keyvault.v2016_10_01.models.JsonWebKeyOperation]\n :param key_attributes:\n :type key_attributes: ~azure.keyvault.v2016_10_01.models.KeyAttributes\n :param tags: Application specific metadata in the form of key-value\n pairs.\n :type tags: dict[str, str]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: KeyBundle or ClientRawResponse if raw=true\n :rtype: ~azure.keyvault.v2016_10_01.models.KeyBundle or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`KeyVaultErrorException<azure.keyvault.v2016_10_01.models.KeyVaultErrorException>`"}
{"_id": "q_1329", "text": "Sets a secret in a specified key vault.\n\n The SET operation adds a secret to the Azure Key Vault. If the named\n secret already exists, Azure Key Vault creates a new version of that\n secret. This operation requires the secrets/set permission.\n\n :param vault_base_url: The vault name, for example\n https://myvault.vault.azure.net.\n :type vault_base_url: str\n :param secret_name: The name of the secret.\n :type secret_name: str\n :param value: The value of the secret.\n :type value: str\n :param tags: Application specific metadata in the form of key-value\n pairs.\n :type tags: dict[str, str]\n :param content_type: Type of the secret value such as a password.\n :type content_type: str\n :param secret_attributes: The secret management attributes.\n :type secret_attributes:\n ~azure.keyvault.v2016_10_01.models.SecretAttributes\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: SecretBundle or ClientRawResponse if raw=true\n :rtype: ~azure.keyvault.v2016_10_01.models.SecretBundle or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`KeyVaultErrorException<azure.keyvault.v2016_10_01.models.KeyVaultErrorException>`"}
{"_id": "q_1330", "text": "Create a Service Bus client from a connection string.\n\n :param conn_str: The connection string.\n :type conn_str: str\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START create_async_servicebus_client_connstr]\n :end-before: [END create_async_servicebus_client_connstr]\n :language: python\n :dedent: 4\n :caption: Create a ServiceBusClient via a connection string."}
{"_id": "q_1331", "text": "Get an async client for a subscription entity.\n\n :param topic_name: The name of the topic.\n :type topic_name: str\n :param subscription_name: The name of the subscription.\n :type subscription_name: str\n :rtype: ~azure.servicebus.aio.async_client.SubscriptionClient\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the subscription is not found.\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START get_async_subscription_client]\n :end-before: [END get_async_subscription_client]\n :language: python\n :dedent: 4\n :caption: Get a SubscriptionClient for the specified subscription."}
{"_id": "q_1332", "text": "Get an async client for all subscription entities in the topic.\n\n :param topic_name: The topic to list subscriptions for.\n :type topic_name: str\n :rtype: list[~azure.servicebus.aio.async_client.SubscriptionClient]\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the topic is not found."}
{"_id": "q_1333", "text": "Send one or more messages to the current entity.\n\n This operation will open a single-use connection, send the supplied messages, and close the\n connection. If the entity requires sessions, a session ID must be either\n provided here, or set on each outgoing message.\n\n :param messages: One or more messages to be sent.\n :type messages: ~azure.servicebus.aio.async_message.Message or\n list[~azure.servicebus.aio.async_message.Message]\n :param message_timeout: The period in seconds during which the Message must be\n sent. If the send is not completed in this time it will return a failure result.\n :type message_timeout: int\n :param session: An optional session ID. If supplied this session ID will be\n applied to every outgoing message sent with this Sender.\n If an individual message already has a session ID, that will be\n used instead. If no session ID is supplied here, nor set on an outgoing\n message, a ValueError will be raised if the entity is sessionful.\n :type session: str or ~uuid.UUID\n :raises: ~azure.servicebus.common.errors.MessageSendFailed\n :returns: A list of the send results of all the messages. Each\n send result is a tuple with two values. The first is a boolean, indicating `True`\n if the message sent, or `False` if it failed. The second is an error if the message\n failed, otherwise it will be `None`.\n :rtype: list[tuple[bool, ~azure.servicebus.common.errors.MessageSendFailed]]\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START queue_client_send]\n :end-before: [END queue_client_send]\n :language: python\n :dedent: 4\n :caption: Send a single message.\n\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START queue_client_send_multiple]\n :end-before: [END queue_client_send_multiple]\n :language: python\n :dedent: 4\n :caption: Send multiple messages."}
{"_id": "q_1334", "text": "Get a Receiver for the Service Bus endpoint.\n\n A Receiver represents a single open connection with which multiple receive operations can be made.\n\n :param session: A specific session from which to receive. This must be specified for a\n sessionful entity, otherwise it must be None. In order to receive the next available\n session, set this to NEXT_AVAILABLE.\n :type session: str or ~azure.servicebus.common.constants.NEXT_AVAILABLE\n :param prefetch: The maximum number of messages to cache with each request to the service.\n The default value is 0, meaning messages will be received from the service and processed\n one at a time. Increasing this value will improve message throughput performance but increase\n the chance that messages will expire while they are cached if they're not processed fast enough.\n :type prefetch: int\n :param mode: The mode with which messages will be retrieved from the entity. The two options\n are PeekLock and ReceiveAndDelete. Messages received with PeekLock must be settled within a given\n lock period before they will be removed from the queue. Messages received with ReceiveAndDelete\n will be immediately removed from the queue, and cannot be subsequently rejected or re-received if\n the client fails to process the message. The default mode is PeekLock.\n :type mode: ~azure.servicebus.common.constants.ReceiveSettleMode\n :param idle_timeout: The timeout in seconds between received messages after which the receiver will\n automatically shutdown. The default value is 0, meaning no timeout.\n :type idle_timeout: int\n :returns: A Receiver instance with an unopened connection.\n :rtype: ~azure.servicebus.aio.async_receive_handler.Receiver\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START open_close_receiver_context]\n :end-before: [END open_close_receiver_context]\n :language: python\n :dedent: 4\n :caption: Receive messages with a Receiver."}
{"_id": "q_1335", "text": "Extracts request id from response header."}
{"_id": "q_1336", "text": "Returns the status of the specified operation. After calling an\n asynchronous operation, you can call Get Operation Status to determine\n whether the operation has succeeded, failed, or is still in progress.\n\n request_id:\n The request ID for the request you wish to track."}
{"_id": "q_1337", "text": "Add additional headers for management."}
{"_id": "q_1338", "text": "Assumed called on Travis, to prepare a package to be deployed\n\n This method prints on stdout for Travis.\n Return is obj to pass to sys.exit() directly"}
{"_id": "q_1339", "text": "List certificates in a specified key vault.\n\n The GetCertificates operation returns the set of certificates resources\n in the specified key vault. This operation requires the\n certificates/list permission.\n\n :param vault_base_url: The vault name, for example\n https://myvault.vault.azure.net.\n :type vault_base_url: str\n :param maxresults: Maximum number of results to return in a page. If\n not specified the service will return up to 25 results.\n :type maxresults: int\n :param include_pending: Specifies whether to include certificates\n which are not completely provisioned.\n :type include_pending: bool\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: An iterator like instance of CertificateItem\n :rtype:\n ~azure.keyvault.v7_0.models.CertificateItemPaged[~azure.keyvault.v7_0.models.CertificateItem]\n :raises:\n :class:`KeyVaultErrorException<azure.keyvault.v7_0.models.KeyVaultErrorException>`"}
{"_id": "q_1340", "text": "Get list of available service bus regions."}
{"_id": "q_1341", "text": "List the service bus namespaces defined on the account."}
{"_id": "q_1342", "text": "Create a new service bus namespace.\n\n name:\n Name of the service bus namespace to create.\n region:\n Region to create the namespace in."}
{"_id": "q_1343", "text": "Checks to see if the specified service bus namespace is available, or\n if it has already been taken.\n\n name:\n Name of the service bus namespace to validate."}
{"_id": "q_1344", "text": "Retrieves the topics in the service namespace.\n\n name:\n Name of the service bus namespace."}
{"_id": "q_1345", "text": "Retrieves the notification hubs in the service namespace.\n\n name:\n Name of the service bus namespace."}
{"_id": "q_1346", "text": "Retrieves the relays in the service namespace.\n\n name:\n Name of the service bus namespace."}
{"_id": "q_1347", "text": "This operation gets rollup data for Service Bus metrics queue.\n Rollup data includes the time granularity for the telemetry aggregation as well as\n the retention settings for each time granularity.\n\n name:\n Name of the service bus namespace.\n queue_name:\n Name of the service bus queue in this namespace.\n metric:\n name of a supported metric"}
{"_id": "q_1348", "text": "This operation gets rollup data for Service Bus metrics topic.\n Rollup data includes the time granularity for the telemetry aggregation as well as\n the retention settings for each time granularity.\n\n name:\n Name of the service bus namespace.\n topic_name:\n Name of the service bus queue in this namespace.\n metric:\n name of a supported metric"}
{"_id": "q_1349", "text": "This operation gets rollup data for Service Bus metrics relay.\n Rollup data includes the time granularity for the telemetry aggregation as well as\n the retention settings for each time granularity.\n\n name:\n Name of the service bus namespace.\n relay_name:\n Name of the service bus relay in this namespace.\n metric:\n name of a supported metric"}
{"_id": "q_1350", "text": "Create a virtual environment in a directory."}
{"_id": "q_1351", "text": "Create a new Azure SQL Database server.\n\n admin_login:\n The administrator login name for the new server.\n admin_password:\n The administrator login password for the new server.\n location:\n The region to deploy the new server."}
{"_id": "q_1352", "text": "Reset the administrator password for a server.\n\n server_name:\n Name of the server to change the password.\n admin_password:\n The new administrator password for the server."}
{"_id": "q_1353", "text": "Gets quotas for an Azure SQL Database Server.\n\n server_name:\n Name of the server."}
{"_id": "q_1354", "text": "Gets the event logs for an Azure SQL Database Server.\n\n server_name:\n Name of the server to retrieve the event logs from.\n start_date:\n The starting date and time of the events to retrieve in UTC format,\n for example '2011-09-28 16:05:00'.\n interval_size_in_minutes:\n Size of the event logs to retrieve (in minutes).\n Valid values are: 5, 60, or 1440.\n event_types:\n The event type of the log entries you want to retrieve.\n Valid values are: \n - connection_successful\n - connection_failed\n - connection_terminated\n - deadlock\n - throttling\n - throttling_long_transaction\n To return all event types pass in an empty string."}
{"_id": "q_1355", "text": "Updates existing database details.\n\n server_name:\n Name of the server to contain the new database.\n name:\n Required. The name for the new database. See Naming Requirements\n in Azure SQL Database General Guidelines and Limitations and\n Database Identifiers for more information.\n new_database_name:\n Optional. The new name for the new database.\n service_objective_id:\n Optional. The new service level to apply to the database. For more\n information about service levels, see Azure SQL Database Service\n Tiers and Performance Levels. Use List Service Level Objectives to\n get the correct ID for the desired service objective.\n edition:\n Optional. The new edition for the new database.\n max_size_bytes:\n Optional. The new size of the database in bytes. For information on\n available sizes for each edition, see Azure SQL Database Service\n Tiers (Editions)."}
{"_id": "q_1356", "text": "Deletes an Azure SQL Database.\n\n server_name:\n Name of the server where the database is located.\n name:\n Name of the database to delete."}
{"_id": "q_1357", "text": "List the SQL databases defined on the specified server name"}
{"_id": "q_1358", "text": "Close down the handler connection.\n\n If the handler has already closed,\n this operation will do nothing. An optional exception can be passed in to\n indicate that the handler was shutdown due to error.\n It is recommended to open a handler within a context manager as\n opposed to calling the method directly.\n\n .. note:: This operation is not thread-safe.\n\n :param exception: An optional exception if the handler is closing\n due to an error.\n :type exception: Exception\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START open_close_sender_directly]\n :end-before: [END open_close_sender_directly]\n :language: python\n :dedent: 4\n :caption: Explicitly open and close a Sender."}
{"_id": "q_1359", "text": "Close down the receiver connection.\n\n If the receiver has already closed, this operation will do nothing. An optional\n exception can be passed in to indicate that the handler was shutdown due to error.\n It is recommended to open a handler within a context manager as\n opposed to calling the method directly.\n The receiver will be implicitly closed on completion of the message iterator,\n however this method will need to be called explicitly if the message iterator is not run\n to completion.\n\n .. note:: This operation is not thread-safe.\n\n :param exception: An optional exception if the handler is closing\n due to an error.\n :type exception: Exception\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START open_close_receiver_directly]\n :end-before: [END open_close_receiver_directly]\n :language: python\n :dedent: 4\n :caption: Iterate then explicitly close a Receiver."}
{"_id": "q_1360", "text": "Get the session state.\n\n Returns None if no state has been set.\n\n :rtype: str\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START set_session_state]\n :end-before: [END set_session_state]\n :language: python\n :dedent: 4\n :caption: Getting and setting the state of a session."}
{"_id": "q_1361", "text": "Verifies that the challenge is a Bearer challenge and returns the key=value pairs."}
{"_id": "q_1362", "text": "Purges data in an Log Analytics workspace by a set of user-defined\n filters.\n\n :param resource_group_name: The name of the resource group to get. The\n name is case insensitive.\n :type resource_group_name: str\n :param workspace_name: Log Analytics workspace name\n :type workspace_name: str\n :param table: Table from which to purge data.\n :type table: str\n :param filters: The set of columns and filters (queries) to run over\n them to purge the resulting data.\n :type filters:\n list[~azure.mgmt.loganalytics.models.WorkspacePurgeBodyFilters]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns object or\n ClientRawResponse<object> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[object] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[object]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1363", "text": "Handle connection and service errors.\n\n Called internally when an event has failed to send so we\n can parse the error to determine whether we should attempt\n to retry sending the event again.\n Returns the action to take according to error type.\n\n :param error: The error received in the send attempt.\n :type error: Exception\n :rtype: ~uamqp.errors.ErrorAction"}
{"_id": "q_1364", "text": "Deletes an existing queue. This operation will also remove all\n associated state including messages in the queue.\n\n queue_name:\n Name of the queue to delete.\n fail_not_exist:\n Specify whether to throw an exception if the queue doesn't exist."}
{"_id": "q_1365", "text": "Retrieves an existing queue.\n\n queue_name:\n Name of the queue."}
{"_id": "q_1366", "text": "Creates a new topic. Once created, this topic resource manifest is\n immutable.\n\n topic_name:\n Name of the topic to create.\n topic:\n Topic object to create.\n fail_on_exist:\n Specify whether to throw an exception when the topic exists."}
{"_id": "q_1367", "text": "Creates a new rule. Once created, this rule's resource manifest is\n immutable.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription.\n rule_name:\n Name of the rule.\n fail_on_exist:\n Specify whether to throw an exception when the rule exists."}
{"_id": "q_1368", "text": "Retrieves the description for the specified rule.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription.\n rule_name:\n Name of the rule."}
{"_id": "q_1369", "text": "Retrieves the rules that exist under the specified subscription.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription."}
{"_id": "q_1370", "text": "Creates a new subscription. Once created, this subscription resource\n manifest is immutable.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription.\n fail_on_exist:\n Specify whether throw exception when subscription exists."}
{"_id": "q_1371", "text": "Gets an existing subscription.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription."}
{"_id": "q_1372", "text": "Enqueues a message into the specified topic. The limit to the number\n of messages which may be present in the topic is governed by the\n message size in MaxTopicSizeInBytes. If this message causes the topic\n to exceed its quota, a quota exceeded error is returned and the\n message will be rejected.\n\n topic_name:\n Name of the topic.\n message:\n Message object containing message body and properties."}
{"_id": "q_1373", "text": "Unlocks a message for processing by other receivers on a given\n queue. This operation deletes the lock object, causing the\n message to be unlocked. A message must have first been locked by a\n receiver before this operation is called.\n\n queue_name:\n Name of the queue.\n sequence_number:\n The sequence number of the message to be unlocked as returned in\n BrokerProperties['SequenceNumber'] by the Peek Message operation.\n lock_token:\n The ID of the lock as returned by the Peek Message operation in\n BrokerProperties['LockToken']"}
{"_id": "q_1374", "text": "Receive a message from a queue for processing.\n\n queue_name:\n Name of the queue.\n peek_lock:\n Optional. True to retrieve and lock the message. False to read and\n delete the message. Default is True (lock).\n timeout:\n Optional. The timeout parameter is expressed in seconds."}
{"_id": "q_1375", "text": "Receive a message from a subscription for processing.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription.\n peek_lock:\n Optional. True to retrieve and lock the message. False to read and\n delete the message. Default is True (lock).\n timeout:\n Optional. The timeout parameter is expressed in seconds."}
{"_id": "q_1376", "text": "Creates a new Event Hub.\n\n hub_name:\n Name of event hub.\n hub:\n Optional. Event hub properties. Instance of EventHub class.\n hub.message_retention_in_days:\n Number of days to retain the events for this Event Hub.\n hub.status:\n Status of the Event Hub (enabled or disabled).\n hub.user_metadata:\n User metadata.\n hub.partition_count:\n Number of shards on the Event Hub.\n fail_on_exist:\n Specify whether to throw an exception when the event hub exists."}
{"_id": "q_1377", "text": "Retrieves an existing event hub.\n\n hub_name:\n Name of the event hub."}
{"_id": "q_1378", "text": "Add additional headers for Service Bus."}
{"_id": "q_1379", "text": "Check if token expires or not."}
{"_id": "q_1380", "text": "Reset Service Principal Profile of a managed cluster.\n\n Update the service principal Profile for a managed cluster.\n\n :param resource_group_name: The name of the resource group.\n :type resource_group_name: str\n :param resource_name: The name of the managed cluster resource.\n :type resource_name: str\n :param client_id: The ID for the service principal.\n :type client_id: str\n :param secret: The secret password associated with the service\n principal in plain text.\n :type secret: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse<None> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1381", "text": "Deletes itself if find queue name or topic name and subscription\n name."}
{"_id": "q_1382", "text": "Unlocks itself if find queue name or topic name and subscription\n name."}
{"_id": "q_1383", "text": "Renew lock on itself if find queue name or topic name and subscription\n name."}
{"_id": "q_1384", "text": "add addtional headers to request for message request."}
{"_id": "q_1385", "text": "return the current message as expected by batch body format"}
{"_id": "q_1386", "text": "Gets the health of a Service Fabric cluster.\n\n Use EventsHealthStateFilter to filter the collection of health events\n reported on the cluster based on the health state.\n Similarly, use NodesHealthStateFilter and ApplicationsHealthStateFilter\n to filter the collection of nodes and applications returned based on\n their aggregated health state.\n\n :param nodes_health_state_filter: Allows filtering of the node health\n state objects returned in the result of cluster health query\n based on their health state. The possible values for this parameter\n include integer value of one of the\n following health states. Only nodes that match the filter are\n returned. All nodes are used to evaluate the aggregated health state.\n If not specified, all entries are returned.\n The state values are flag-based enumeration, so the value could be a\n combination of these values obtained using bitwise 'OR' operator.\n For example, if the provided value is 6 then health state of nodes\n with HealthState value of OK (2) and Warning (4) are returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type nodes_health_state_filter: int\n :param applications_health_state_filter: Allows filtering of the\n application health state objects returned in the result of cluster\n health\n query based on their health state.\n The possible values for this parameter include integer value obtained\n from members or bitwise operations\n on members of HealthStateFilter enumeration. Only applications that\n match the filter are returned.\n All applications are used to evaluate the aggregated health state. If\n not specified, all entries are returned.\n The state values are flag-based enumeration, so the value could be a\n combination of these values obtained using bitwise 'OR' operator.\n For example, if the provided value is 6 then health state of\n applications with HealthState value of OK (2) and Warning (4) are\n returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type applications_health_state_filter: int\n :param events_health_state_filter: Allows filtering the collection of\n HealthEvent objects returned based on health state.\n The possible values for this parameter include integer value of one of\n the following health states.\n Only events that match the filter are returned. All events are used to\n evaluate the aggregated health state.\n If not specified, all entries are returned. The state values are\n flag-based enumeration, so the value could be a combination of these\n values, obtained using the bitwise 'OR' operator. For example, If the\n provided value is 6 then all of the events with HealthState value of\n OK (2) and Warning (4) are returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type events_health_state_filter: int\n :param exclude_health_statistics: Indicates whether the health\n statistics should be returned as part of the query result. False by\n default.\n The statistics show the number of children entities in health state\n Ok, Warning, and Error.\n :type exclude_health_statistics: bool\n :param include_system_application_health_statistics: Indicates whether\n the health statistics should include the fabric:/System application\n health statistics. False by default.\n If IncludeSystemApplicationHealthStatistics is set to true, the health\n statistics include the entities that belong to the fabric:/System\n application.\n Otherwise, the query result includes health statistics only for user\n applications.\n The health statistics must be included in the query result for this\n parameter to be applied.\n :type include_system_application_health_statistics: bool\n :param timeout: The server timeout for performing the operation in\n seconds. This timeout specifies the time duration that the client is\n willing to wait for the requested operation to complete. The default\n value for this parameter is 60 seconds.\n :type timeout: long\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: ClusterHealth or ClientRawResponse if raw=true\n :rtype: ~azure.servicefabric.models.ClusterHealth or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`FabricErrorException<azure.servicefabric.models.FabricErrorException>`"}
{"_id": "q_1387", "text": "Gets the health of a Service Fabric cluster using the specified policy.\n\n Use EventsHealthStateFilter to filter the collection of health events\n reported on the cluster based on the health state.\n Similarly, use NodesHealthStateFilter and ApplicationsHealthStateFilter\n to filter the collection of nodes and applications returned based on\n their aggregated health state.\n Use ClusterHealthPolicies to override the health policies used to\n evaluate the health.\n\n :param nodes_health_state_filter: Allows filtering of the node health\n state objects returned in the result of cluster health query\n based on their health state. The possible values for this parameter\n include integer value of one of the\n following health states. Only nodes that match the filter are\n returned. All nodes are used to evaluate the aggregated health state.\n If not specified, all entries are returned.\n The state values are flag-based enumeration, so the value could be a\n combination of these values obtained using bitwise 'OR' operator.\n For example, if the provided value is 6 then health state of nodes\n with HealthState value of OK (2) and Warning (4) are returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type nodes_health_state_filter: int\n :param applications_health_state_filter: Allows filtering of the\n application health state objects returned in the result of cluster\n health\n query based on their health state.\n The possible values for this parameter include integer value obtained\n from members or bitwise operations\n on members of HealthStateFilter enumeration. Only applications that\n match the filter are returned.\n All applications are used to evaluate the aggregated health state. If\n not specified, all entries are returned.\n The state values are flag-based enumeration, so the value could be a\n combination of these values obtained using bitwise 'OR' operator.\n For example, if the provided value is 6 then health state of\n applications with HealthState value of OK (2) and Warning (4) are\n returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type applications_health_state_filter: int\n :param events_health_state_filter: Allows filtering the collection of\n HealthEvent objects returned based on health state.\n The possible values for this parameter include integer value of one of\n the following health states.\n Only events that match the filter are returned. All events are used to\n evaluate the aggregated health state.\n If not specified, all entries are returned. The state values are\n flag-based enumeration, so the value could be a combination of these\n values, obtained using the bitwise 'OR' operator. For example, If the\n provided value is 6 then all of the events with HealthState value of\n OK (2) and Warning (4) are returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type events_health_state_filter: int\n :param exclude_health_statistics: Indicates whether the health\n statistics should be returned as part of the query result. False by\n default.\n The statistics show the number of children entities in health state\n Ok, Warning, and Error.\n :type exclude_health_statistics: bool\n :param include_system_application_health_statistics: Indicates whether\n the health statistics should include the fabric:/System application\n health statistics. False by default.\n If IncludeSystemApplicationHealthStatistics is set to true, the health\n statistics include the entities that belong to the fabric:/System\n application.\n Otherwise, the query result includes health statistics only for user\n applications.\n The health statistics must be included in the query result for this\n parameter to be applied.\n :type include_system_application_health_statistics: bool\n :param timeout: The server timeout for performing the operation in\n seconds. This timeout specifies the time duration that the client is\n willing to wait for the requested operation to complete. The default\n value for this parameter is 60 seconds.\n :type timeout: long\n :param application_health_policy_map: Defines a map that contains\n specific application health policies for different applications.\n Each entry specifies as key the application name and as value an\n ApplicationHealthPolicy used to evaluate the application health.\n If an application is not specified in the map, the application health\n evaluation uses the ApplicationHealthPolicy found in its application\n manifest or the default application health policy (if no health policy\n is defined in the manifest).\n The map is empty by default.\n :type application_health_policy_map:\n list[~azure.servicefabric.models.ApplicationHealthPolicyMapItem]\n :param cluster_health_policy: Defines a health policy used to evaluate\n the health of the cluster or of a cluster node.\n :type cluster_health_policy:\n ~azure.servicefabric.models.ClusterHealthPolicy\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: ClusterHealth or ClientRawResponse if raw=true\n :rtype: ~azure.servicefabric.models.ClusterHealth or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`FabricErrorException<azure.servicefabric.models.FabricErrorException>`"}
{"_id": "q_1388", "text": "Removes or unregisters a Service Fabric application type from the\n cluster.\n\n This operation can only be performed if all application instances of\n the application type have been deleted. Once the application type is\n unregistered, no new application instances can be created for this\n particular application type.\n\n :param application_type_name: The name of the application type.\n :type application_type_name: str\n :param application_type_version: The version of the application type\n as defined in the application manifest.\n :type application_type_version: str\n :param timeout: The server timeout for performing the operation in\n seconds. This timeout specifies the time duration that the client is\n willing to wait for the requested operation to complete. The default\n value for this parameter is 60 seconds.\n :type timeout: long\n :param async_parameter: The flag indicating whether or not unprovision\n should occur asynchronously. When set to true, the unprovision\n operation returns when the request is accepted by the system, and the\n unprovision operation continues without any timeout limit. The default\n value is false. However, we recommend setting it to true for large\n application packages that were provisioned.\n :type async_parameter: bool\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`FabricErrorException<azure.servicefabric.models.FabricErrorException>`"}
{"_id": "q_1389", "text": "Submits a property batch.\n\n Submits a batch of property operations. Either all or none of the\n operations will be committed.\n\n :param name_id: The Service Fabric name, without the 'fabric:' URI\n scheme.\n :type name_id: str\n :param timeout: The server timeout for performing the operation in\n seconds. This timeout specifies the time duration that the client is\n willing to wait for the requested operation to complete. The default\n value for this parameter is 60 seconds.\n :type timeout: long\n :param operations: A list of the property batch operations to be\n executed.\n :type operations:\n list[~azure.servicefabric.models.PropertyBatchOperation]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: PropertyBatchInfo or ClientRawResponse if raw=true\n :rtype: ~azure.servicefabric.models.PropertyBatchInfo or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`FabricErrorException<azure.servicefabric.models.FabricErrorException>`"}
{"_id": "q_1390", "text": "Start capturing network packets for the site.\n\n Start capturing network packets for the site.\n\n :param resource_group_name: Name of the resource group to which the\n resource belongs.\n :type resource_group_name: str\n :param name: The name of the web app.\n :type name: str\n :param duration_in_seconds: The duration to keep capturing in seconds.\n :type duration_in_seconds: int\n :param max_frame_length: The maximum frame length in bytes (Optional).\n :type max_frame_length: int\n :param sas_url: The Blob URL to store capture file.\n :type sas_url: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns list or\n ClientRawResponse<list> if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[list[~azure.mgmt.web.models.NetworkTrace]]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[list[~azure.mgmt.web.models.NetworkTrace]]]\n :raises:\n :class:`DefaultErrorResponseException<azure.mgmt.web.models.DefaultErrorResponseException>`"}
{"_id": "q_1391", "text": "Get the difference in configuration settings between two web app slots.\n\n Get the difference in configuration settings between two web app slots.\n\n :param resource_group_name: Name of the resource group to which the\n resource belongs.\n :type resource_group_name: str\n :param name: Name of the app.\n :type name: str\n :param slot: Name of the source slot. If a slot is not specified, the\n production slot is used as the source slot.\n :type slot: str\n :param target_slot: Destination deployment slot during swap operation.\n :type target_slot: str\n :param preserve_vnet: <code>true</code> to preserve Virtual Network to\n the slot during swap; otherwise, <code>false</code>.\n :type preserve_vnet: bool\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: An iterator like instance of SlotDifference\n :rtype:\n ~azure.mgmt.web.models.SlotDifferencePaged[~azure.mgmt.web.models.SlotDifference]\n :raises:\n :class:`DefaultErrorResponseException<azure.mgmt.web.models.DefaultErrorResponseException>`"}
{"_id": "q_1392", "text": "Swaps two deployment slots of an app.\n\n Swaps two deployment slots of an app.\n\n :param resource_group_name: Name of the resource group to which the\n resource belongs.\n :type resource_group_name: str\n :param name: Name of the app.\n :type name: str\n :param slot: Name of the source slot. If a slot is not specified, the\n production slot is used as the source slot.\n :type slot: str\n :param target_slot: Destination deployment slot during swap operation.\n :type target_slot: str\n :param preserve_vnet: <code>true</code> to preserve Virtual Network to\n the slot during swap; otherwise, <code>false</code>.\n :type preserve_vnet: bool\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse<None> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1393", "text": "Execute OData query.\n\n Executes an OData query for events.\n\n :param app_id: ID of the application. This is Application ID from the\n API Access settings blade in the Azure portal.\n :type app_id: str\n :param event_type: The type of events to query; either a standard\n event type (`traces`, `customEvents`, `pageViews`, `requests`,\n `dependencies`, `exceptions`, `availabilityResults`) or `$all` to\n query across all event types. Possible values include: '$all',\n 'traces', 'customEvents', 'pageViews', 'browserTimings', 'requests',\n 'dependencies', 'exceptions', 'availabilityResults',\n 'performanceCounters', 'customMetrics'\n :type event_type: str or ~azure.applicationinsights.models.EventType\n :param timespan: Optional. The timespan over which to retrieve events.\n This is an ISO8601 time period value. This timespan is applied in\n addition to any that are specified in the Odata expression.\n :type timespan: str\n :param filter: An expression used to filter the returned events\n :type filter: str\n :param search: A free-text search expression to match for whether a\n particular event should be returned\n :type search: str\n :param orderby: A comma-separated list of properties with \\\\"asc\\\\"\n (the default) or \\\\"desc\\\\" to control the order of returned events\n :type orderby: str\n :param select: Limits the properties to just those requested on each\n returned event\n :type select: str\n :param skip: The number of items to skip over before returning events\n :type skip: int\n :param top: The number of events to return\n :type top: int\n :param format: Format for the returned events\n :type format: str\n :param count: Request a count of matching items included with the\n returned events\n :type count: bool\n :param apply: An expression used for aggregation over returned events\n :type apply: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: EventsResults or ClientRawResponse if raw=true\n :rtype: ~azure.applicationinsights.models.EventsResults or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`ErrorResponseException<azure.applicationinsights.models.ErrorResponseException>`"}
{"_id": "q_1394", "text": "Add a face to a large face list. The input face is specified as an\n image with a targetFace rectangle. It returns a persistedFaceId\n representing the added face, and persistedFaceId will not expire.\n\n :param large_face_list_id: Id referencing a particular large face\n list.\n :type large_face_list_id: str\n :param image: An image stream.\n :type image: Generator\n :param user_data: User-specified data about the face for any purpose.\n The maximum length is 1KB.\n :type user_data: str\n :param target_face: A face rectangle to specify the target face to be\n added to a person in the format of \"targetFace=left,top,width,height\".\n E.g. \"targetFace=10,10,100,100\". If there is more than one face in the\n image, targetFace is required to specify which face to add. No\n targetFace means there is only one face detected in the entire image.\n :type target_face: list[int]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param callback: When specified, will be called with each chunk of\n data that is streamed. The callback should take two arguments, the\n bytes of the current chunk of data and the response object. If the\n data is uploading, response will be None.\n :type callback: Callable[Bytes, response=None]\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: PersistedFace or ClientRawResponse if raw=true\n :rtype: ~azure.cognitiveservices.vision.face.models.PersistedFace or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`APIErrorException<azure.cognitiveservices.vision.face.models.APIErrorException>`"}
{"_id": "q_1395", "text": "Reset auth_attempted on redirects."}
{"_id": "q_1396", "text": "Publishes a batch of events to an Azure Event Grid topic.\n\n :param topic_hostname: The host name of the topic, e.g.\n topic1.westus2-1.eventgrid.azure.net\n :type topic_hostname: str\n :param events: An array of events to be published to Event Grid.\n :type events: list[~azure.eventgrid.models.EventGridEvent]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides<msrest:optionsforoperations>`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`HttpOperationError<msrest.exceptions.HttpOperationError>`"}
{"_id": "q_1397", "text": "Moves resources from one resource group to another resource group.\n\n The resources to move must be in the same source resource group. The\n target resource group may be in a different subscription. When moving\n resources, both the source group and the target group are locked for\n the duration of the operation. Write and delete operations are blocked\n on the groups until the move completes. .\n\n :param source_resource_group_name: The name of the resource group\n containing the resources to move.\n :type source_resource_group_name: str\n :param resources: The IDs of the resources.\n :type resources: list[str]\n :param target_resource_group: The target resource group.\n :type target_resource_group: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse<None> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1398", "text": "Create a queue entity.\n\n :param queue_name: The name of the new queue.\n :type queue_name: str\n :param lock_duration: The lock duration in seconds for each message in the queue.\n :type lock_duration: int\n :param max_size_in_megabytes: The max size to allow the queue to grow to.\n :type max_size_in_megabytes: int\n :param requires_duplicate_detection: Whether the queue will require every message within\n a specified time frame to have a unique ID. Non-unique messages will be discarded.\n Default value is False.\n :type requires_duplicate_detection: bool\n :param requires_session: Whether the queue will be sessionful, and therefore require all\n messages to have a Session ID and be received by a sessionful receiver.\n Default value is False.\n :type requires_session: bool\n :param default_message_time_to_live: The length of time a message will remain in the queue\n before it is either discarded or moved to the dead letter queue.\n :type default_message_time_to_live: ~datetime.timedelta\n :param dead_lettering_on_message_expiration: Whether to move expired messages to the\n dead letter queue. Default value is False.\n :type dead_lettering_on_message_expiration: bool\n :param duplicate_detection_history_time_window: The period within which all incoming messages\n must have a unique message ID.\n :type duplicate_detection_history_time_window: ~datetime.timedelta\n :param max_delivery_count: The maximum number of times a message will attempt to be delivered\n before it is moved to the dead letter queue.\n :type max_delivery_count: int\n :param enable_batched_operations:\n :type enable_batched_operations: bool\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.common.AzureConflictHttpError if a queue of the same name already exists."}
{"_id": "q_1399", "text": "Delete a queue entity.\n\n :param queue_name: The name of the queue to delete.\n :type queue_name: str\n :param fail_not_exist: Whether to raise an exception if the named queue is not\n found. If set to True, a ServiceBusResourceNotFound will be raised.\n Default value is False.\n :type fail_not_exist: bool\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the queue is not found\n and `fail_not_exist` is set to True."}
{"_id": "q_1400", "text": "Delete a topic entity.\n\n :param topic_name: The name of the topic to delete.\n :type topic_name: str\n :param fail_not_exist: Whether to raise an exception if the named topic is not\n found. If set to True, a ServiceBusResourceNotFound will be raised.\n Default value is False.\n :type fail_not_exist: bool\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the topic is not found\n and `fail_not_exist` is set to True."}
{"_id": "q_1401", "text": "Create a subscription entity.\n\n :param topic_name: The name of the topic under which to create the subscription.\n :type topic_name: str\n :param subscription_name: The name of the new subscription.\n :type subscription_name: str\n :param lock_duration: The lock duration in seconds for each message in the subscription.\n :type lock_duration: int\n :param requires_session: Whether the subscription will be sessionful, and therefore require all\n messages to have a Session ID and be received by a sessionful receiver.\n Default value is False.\n :type requires_session: bool\n :param default_message_time_to_live: The length of time a message will remain in the subscription\n before it is either discarded or moved to the dead letter queue.\n :type default_message_time_to_live: ~datetime.timedelta\n :param dead_lettering_on_message_expiration: Whether to move expired messages to the\n dead letter queue. Default value is False.\n :type dead_lettering_on_message_expiration: bool\n :param dead_lettering_on_filter_evaluation_exceptions: Whether to move messages that error on\n filtering into the dead letter queue. Default is False, and the messages will be discarded.\n :type dead_lettering_on_filter_evaluation_exceptions: bool\n :param max_delivery_count: The maximum number of times a message will attempt to be delivered\n before it is moved to the dead letter queue.\n :type max_delivery_count: int\n :param enable_batched_operations:\n :type enable_batched_operations: bool\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.common.AzureConflictHttpError if a subscription of the same name already exists."}
{"_id": "q_1402", "text": "Perform an operation to update the properties of the entity.\n\n :returns: The properties of the entity as a dictionary.\n :rtype: dict[str, Any]\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the entity does not exist.\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the endpoint cannot be reached.\n :raises: ~azure.common.AzureHTTPError if the credentials are invalid."}
{"_id": "q_1403", "text": "Whether the receiver's lock on a particular session has expired.\n\n :rtype: bool"}
{"_id": "q_1404", "text": "Creates an Azure subscription.\n\n :param billing_account_name: The name of the commerce root billing\n account.\n :type billing_account_name: str\n :param invoice_section_name: The name of the invoice section.\n :type invoice_section_name: str\n :param body: The subscription creation parameters.\n :type body:\n ~azure.mgmt.subscription.models.SubscriptionCreationParameters\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns\n SubscriptionCreationResult or\n ClientRawResponse<SubscriptionCreationResult> if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.subscription.models.SubscriptionCreationResult]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[~azure.mgmt.subscription.models.SubscriptionCreationResult]]\n :raises:\n :class:`ErrorResponseException<azure.mgmt.subscription.models.ErrorResponseException>`"}
{"_id": "q_1405", "text": "Export logs that show Api requests made by this subscription in the\n given time window to show throttling activities.\n\n :param parameters: Parameters supplied to the LogAnalytics\n getRequestRateByInterval Api.\n :type parameters:\n ~azure.mgmt.compute.v2018_04_01.models.RequestRateByIntervalInput\n :param location: The location upon which virtual-machine-sizes is\n queried.\n :type location: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns\n LogAnalyticsOperationResult or\n ClientRawResponse<LogAnalyticsOperationResult> if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.compute.v2018_04_01.models.LogAnalyticsOperationResult]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[~azure.mgmt.compute.v2018_04_01.models.LogAnalyticsOperationResult]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1406", "text": "Scan output for exceptions.\n\n If there is output from an add task collection call, add it to the results.\n\n :param results_queue: Queue containing results of attempted add_collection calls\n :type results_queue: collections.deque\n :return: list of TaskAddResults\n :rtype: list[~TaskAddResult]"}
{"_id": "q_1407", "text": "Main method for the worker to run.\n\n Pops a chunk of tasks off the collection of pending tasks and submits them to be added.\n\n :param collections.deque results_queue: Queue for the worker to output results to"}
{"_id": "q_1408", "text": "Will build the actual config for Jinja2, based on SDK config."}
{"_id": "q_1409", "text": "Starts an environment by starting all resources inside the environment.\n This operation can take a while to complete.\n\n :param user_name: The name of the user.\n :type user_name: str\n :param environment_id: The resourceId of the environment\n :type environment_id: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse<None> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1410", "text": "Create message from response.\n\n response:\n response from Service Bus cloud server.\n service_instance:\n the Service Bus client."}
{"_id": "q_1411", "text": "Converts entry element to rule object.\n\n The format of xml for rule:\n<entry xmlns='http://www.w3.org/2005/Atom'>\n<content type='application/xml'>\n<RuleDescription\n xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\"\n xmlns=\"http://schemas.microsoft.com/netservices/2010/10/servicebus/connect\">\n <Filter i:type=\"SqlFilterExpression\">\n <SqlExpression>MyProperty='XYZ'</SqlExpression>\n </Filter>\n <Action i:type=\"SqlFilterAction\">\n <SqlExpression>set MyProperty2 = 'ABC'</SqlExpression>\n </Action>\n</RuleDescription>\n</content>\n</entry>"}
{"_id": "q_1412", "text": "Converts entry element to queue object.\n\n The format of xml response for queue:\n<QueueDescription\n xmlns=\\\"http://schemas.microsoft.com/netservices/2010/10/servicebus/connect\\\">\n <MaxSizeInBytes>10000</MaxSizeInBytes>\n <DefaultMessageTimeToLive>PT5M</DefaultMessageTimeToLive>\n <LockDuration>PT2M</LockDuration>\n <RequiresGroupedReceives>False</RequiresGroupedReceives>\n <SupportsDuplicateDetection>False</SupportsDuplicateDetection>\n ...\n</QueueDescription>"}
{"_id": "q_1413", "text": "Converts entry element to topic\n\n The xml format for topic:\n<entry xmlns='http://www.w3.org/2005/Atom'>\n <content type='application/xml'>\n <TopicDescription\n xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\"\n xmlns=\"http://schemas.microsoft.com/netservices/2010/10/servicebus/connect\">\n <DefaultMessageTimeToLive>P10675199DT2H48M5.4775807S</DefaultMessageTimeToLive>\n <MaxSizeInMegabytes>1024</MaxSizeInMegabytes>\n <RequiresDuplicateDetection>false</RequiresDuplicateDetection>\n <DuplicateDetectionHistoryTimeWindow>P7D</DuplicateDetectionHistoryTimeWindow>\n <DeadLetteringOnFilterEvaluationExceptions>true</DeadLetteringOnFilterEvaluationExceptions>\n </TopicDescription>\n </content>\n</entry>"}
{"_id": "q_1414", "text": "Converts entry element to subscription\n\n The xml format for subscription:\n<entry xmlns='http://www.w3.org/2005/Atom'>\n <content type='application/xml'>\n <SubscriptionDescription\n xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\"\n xmlns=\"http://schemas.microsoft.com/netservices/2010/10/servicebus/connect\">\n <LockDuration>PT5M</LockDuration>\n <RequiresSession>false</RequiresSession>\n <DefaultMessageTimeToLive>P10675199DT2H48M5.4775807S</DefaultMessageTimeToLive>\n <DeadLetteringOnMessageExpiration>false</DeadLetteringOnMessageExpiration>\n <DeadLetteringOnFilterEvaluationExceptions>true</DeadLetteringOnFilterEvaluationExceptions>\n </SubscriptionDescription>\n </content>\n</entry>"}
{"_id": "q_1415", "text": "Creates a new certificate inside the specified account.\n\n :param resource_group_name: The name of the resource group that\n contains the Batch account.\n :type resource_group_name: str\n :param account_name: The name of the Batch account.\n :type account_name: str\n :param certificate_name: The identifier for the certificate. This must\n be made up of algorithm and thumbprint separated by a dash, and must\n match the certificate data in the request. For example SHA1-a3d1c5.\n :type certificate_name: str\n :param parameters: Additional parameters for certificate creation.\n :type parameters:\n ~azure.mgmt.batch.models.CertificateCreateOrUpdateParameters\n :param if_match: The entity state (ETag) version of the certificate to\n update. A value of \"*\" can be used to apply the operation only if the\n certificate already exists. If omitted, this operation will always be\n applied.\n :type if_match: str\n :param if_none_match: Set to '*' to allow a new certificate to be\n created, but to prevent updating an existing certificate. Other values\n will be ignored.\n :type if_none_match: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :return: An instance of AzureOperationPoller that returns Certificate\n or ClientRawResponse if raw=true\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.batch.models.Certificate]\n or ~msrest.pipeline.ClientRawResponse\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1416", "text": "Deletes the specified certificate.\n\n :param resource_group_name: The name of the resource group that\n contains the Batch account.\n :type resource_group_name: str\n :param account_name: The name of the Batch account.\n :type account_name: str\n :param certificate_name: The identifier for the certificate. This must\n be made up of algorithm and thumbprint separated by a dash, and must\n match the certificate data in the request. For example SHA1-a3d1c5.\n :type certificate_name: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :return: An instance of AzureOperationPoller that returns None or\n ClientRawResponse if raw=true\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrest.pipeline.ClientRawResponse\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1417", "text": "Return a SDK client initialized with a JSON auth dict.\n\n The easiest way to obtain this content is to call the following CLI commands:\n\n .. code:: bash\n\n az ad sp create-for-rbac --sdk-auth\n\n This method will fill automatically the following client parameters:\n - credentials\n - subscription_id\n - base_url\n - tenant_id\n\n Parameters provided in kwargs will override parameters and be passed directly to the client.\n\n :Example:\n\n .. code:: python\n\n from azure.common.client_factory import get_client_from_auth_file\n from azure.mgmt.compute import ComputeManagementClient\n config_dict = {\n \"clientId\": \"ad735158-65ca-11e7-ba4d-ecb1d756380e\",\n \"clientSecret\": \"b70bb224-65ca-11e7-810c-ecb1d756380e\",\n \"subscriptionId\": \"bfc42d3a-65ca-11e7-95cf-ecb1d756380e\",\n \"tenantId\": \"c81da1d8-65ca-11e7-b1d1-ecb1d756380e\",\n \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\",\n \"resourceManagerEndpointUrl\": \"https://management.azure.com/\",\n \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\",\n \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\",\n \"galleryEndpointUrl\": \"https://gallery.azure.com/\",\n \"managementEndpointUrl\": \"https://management.core.windows.net/\"\n }\n client = get_client_from_json_dict(ComputeManagementClient, config_dict)\n\n .. versionadded:: 1.1.7\n\n :param client_class: A SDK client class\n :param dict config_dict: A config dict.\n :return: An instantiated client"}
{"_id": "q_1418", "text": "resp_body is the XML we received\n resp_type is a string, such as Containers,\n return_type is the type we're constructing, such as ContainerEnumResults\n item_type is the type object of the item to be created, such as Container\n\n This function then returns a ContainerEnumResults object with the\n containers member populated with the results."}
{"_id": "q_1419", "text": "get properties from element tree element"}
{"_id": "q_1420", "text": "Get a client for a queue entity.\n\n :param queue_name: The name of the queue.\n :type queue_name: str\n :rtype: ~azure.servicebus.servicebus_client.QueueClient\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the queue is not found.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START get_queue_client]\n :end-before: [END get_queue_client]\n :language: python\n :dedent: 8\n :caption: Get the specific queue client from Service Bus client"}
{"_id": "q_1421", "text": "Get clients for all queue entities in the namespace.\n\n :rtype: list[~azure.servicebus.servicebus_client.QueueClient]\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START list_queues]\n :end-before: [END list_queues]\n :language: python\n :dedent: 4\n :caption: List the queues from Service Bus client"}
{"_id": "q_1422", "text": "Get a client for a topic entity.\n\n :param topic_name: The name of the topic.\n :type topic_name: str\n :rtype: ~azure.servicebus.servicebus_client.TopicClient\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the topic is not found.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START get_topic_client]\n :end-before: [END get_topic_client]\n :language: python\n :dedent: 8\n :caption: Get the specific topic client from Service Bus client"}
{"_id": "q_1423", "text": "Get a client for all topic entities in the namespace.\n\n :rtype: list[~azure.servicebus.servicebus_client.TopicClient]\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START list_topics]\n :end-before: [END list_topics]\n :language: python\n :dedent: 4\n :caption: List the topics from Service Bus client"}
{"_id": "q_1424", "text": "Receive messages by sequence number that have been previously deferred.\n\n When receiving deferred messages from a partitioned entity, all of the supplied\n sequence numbers must be messages from the same partition.\n\n :param sequence_numbers: A list of the sequence numbers of messages that have been\n deferred.\n :type sequence_numbers: list[int]\n :param mode: The mode with which messages will be retrieved from the entity. The two options\n are PeekLock and ReceiveAndDelete. Messages received with PeekLock must be settled within a given\n lock period before they will be removed from the queue. Messages received with ReceiveAndDelete\n will be immediately removed from the queue, and cannot be subsequently rejected or re-received if\n the client fails to process the message. The default mode is PeekLock.\n :type mode: ~azure.servicebus.common.constants.ReceiveSettleMode\n :rtype: list[~azure.servicebus.common.message.Message]\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START receive_deferred_messages_service_bus]\n :end-before: [END receive_deferred_messages_service_bus]\n :language: python\n :dedent: 8\n :caption: Get the messages which were deferred using their sequence numbers"}
{"_id": "q_1425", "text": "Settle messages that have been previously deferred.\n\n :param settlement: How the messages are to be settled. This must be a string\n of one of the following values: 'completed', 'suspended', 'abandoned'.\n :type settlement: str\n :param messages: A list of deferred messages to be settled.\n :type messages: list[~azure.servicebus.common.message.DeferredMessage]\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START settle_deferred_messages_service_bus]\n :end-before: [END settle_deferred_messages_service_bus]\n :language: python\n :dedent: 8\n :caption: Settle deferred messages."}
{"_id": "q_1426", "text": "Delete a website.\n\n webspace_name:\n The name of the webspace.\n website_name:\n The name of the website.\n delete_empty_server_farm:\n If the site being deleted is the last web site in a server farm,\n you can delete the server farm by setting this to True.\n delete_metrics:\n To also delete the metrics for the site that you are deleting, you\n can set this to True."}
{"_id": "q_1427", "text": "Update a web site.\n\n webspace_name:\n The name of the webspace.\n website_name:\n The name of the website.\n state:\n The wanted state ('Running' or 'Stopped' accepted)"}
{"_id": "q_1428", "text": "Restart a web site.\n\n webspace_name:\n The name of the webspace.\n website_name:\n The name of the website."}
{"_id": "q_1429", "text": "Updates the policies for the specified container registry.\n\n :param resource_group_name: The name of the resource group to which\n the container registry belongs.\n :type resource_group_name: str\n :param registry_name: The name of the container registry.\n :type registry_name: str\n :param quarantine_policy: An object that represents quarantine policy\n for a container registry.\n :type quarantine_policy:\n ~azure.mgmt.containerregistry.v2018_02_01_preview.models.QuarantinePolicy\n :param trust_policy: An object that represents content trust policy\n for a container registry.\n :type trust_policy:\n ~azure.mgmt.containerregistry.v2018_02_01_preview.models.TrustPolicy\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns RegistryPolicies or\n ClientRawResponse<RegistryPolicies> if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.containerregistry.v2018_02_01_preview.models.RegistryPolicies]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[~azure.mgmt.containerregistry.v2018_02_01_preview.models.RegistryPolicies]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1430", "text": "Completes the restore operation on a managed database.\n\n :param location_name: The name of the region where the resource is\n located.\n :type location_name: str\n :param operation_id: Management operation id that this request tries\n to complete.\n :type operation_id: str\n :param last_backup_name: The last backup name to apply\n :type last_backup_name: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse<None> if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError<msrestazure.azure_exceptions.CloudError>`"}
{"_id": "q_1431", "text": "Cancel one or more messages that have previsouly been scheduled and are still pending.\n\n :param sequence_numbers: The seqeuence numbers of the scheduled messages.\n :type sequence_numbers: int\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START cancel_schedule_messages]\n :end-before: [END cancel_schedule_messages]\n :language: python\n :dedent: 4\n :caption: Schedule messages."}
{"_id": "q_1432", "text": "Reconnect the handler.\n\n If the handler was disconnected from the service with\n a retryable error - attempt to reconnect.\n This method will be called automatically for most retryable errors.\n Also attempts to re-queue any messages that were pending before the reconnect."}
{"_id": "q_1433", "text": "Returns the width of the string it would be when displayed."}
{"_id": "q_1434", "text": "Drops Characters by unicode not by bytes."}
{"_id": "q_1435", "text": "Clears out the previous line and prints a new one."}
{"_id": "q_1436", "text": "Creates a status line with appropriate size."}
{"_id": "q_1437", "text": "Segments are yielded when they are available\n\n Segments appear on a time line, for dynamic content they are only available at a certain time\n and sometimes for a limited time. For static content they are all available at the same time.\n\n :param kwargs: extra args to pass to the segment template\n :return: yields Segments"}
{"_id": "q_1438", "text": "Puts a value into a queue but aborts if this thread is closed."}
{"_id": "q_1439", "text": "Wrapper around ElementTree.fromstring with some extras.\n\n Provides these extra features:\n - Handles incorrectly encoded XML\n - Allows stripping namespace information\n - Wraps errors in custom exception with a snippet of the data in the message"}
{"_id": "q_1440", "text": "Search for a key in a nested dict, or list of nested dicts, and return the values.\n\n :param data: dict/list to search\n :param key: key to find\n :return: matches for key"}
{"_id": "q_1441", "text": "Spawn the process defined in `cmd`\n\n parameters is converted to options the short and long option prefixes\n if a list is given as the value, the parameter is repeated with each\n value\n\n If timeout is set the spawn will block until the process returns or\n the timeout expires.\n\n :param parameters: optional parameters\n :param arguments: positional arguments\n :param stderr: where to redirect stderr to\n :param timeout: timeout for short lived process\n :param long_option_prefix: option prefix, default -\n :param short_option_prefix: long option prefix, default --\n :return: spawned process"}
{"_id": "q_1442", "text": "Brute force regex based HTML tag parser. This is a rough-and-ready searcher to find HTML tags when\n standards compliance is not required. Will find tags that are commented out, or inside script tag etc.\n\n :param html: HTML page\n :param tag: tag name to find\n :return: generator with Tags"}
{"_id": "q_1443", "text": "Attempt to parse a DASH manifest file and return its streams\n\n :param session: Streamlink session instance\n :param url_or_manifest: URL of the manifest file or an XML manifest string\n :return: a dict of name -> DASHStream instances"}
{"_id": "q_1444", "text": "Determine which Unicode encoding the JSON text sample is encoded with\n\n RFC4627 (http://www.ietf.org/rfc/rfc4627.txt) suggests that the encoding of JSON text can be determined\n by checking the pattern of NULL bytes in first 4 octets of the text.\n :param sample: a sample of at least 4 bytes of the JSON text\n :return: the most likely encoding of the JSON text"}
{"_id": "q_1445", "text": "Parses JSON from a response."}
{"_id": "q_1446", "text": "Parses XML from a response."}
{"_id": "q_1447", "text": "Parses a semi-colon delimited list of query parameters.\n\n Example: foo=bar;baz=qux"}
{"_id": "q_1448", "text": "Return the message for this LogRecord.\n\n Return the message for this LogRecord after merging any user-supplied\n arguments with the message."}
{"_id": "q_1449", "text": "Attempt a login to LiveEdu.tv"}
{"_id": "q_1450", "text": "Loads a plugin from the same directory as the calling plugin.\n\n The path used is extracted from the last call in module scope,\n therefore this must be called only from module level in the\n originating plugin or the correct plugin path will not be found."}
{"_id": "q_1451", "text": "Update or remove keys from a query string in a URL\n\n :param url: URL to update\n :param qsd: dict of keys to update, a None value leaves it unchanged\n :param remove: list of keys to remove, or \"*\" to remove all\n note: updated keys are never removed, even if unchanged\n :return: updated URL"}
{"_id": "q_1452", "text": "Find all the arguments required by name\n\n :param name: name of the argument the find the dependencies\n\n :return: list of dependant arguments"}
{"_id": "q_1453", "text": "Decides where to write the stream.\n\n Depending on arguments it can be one of these:\n - The stdout pipe\n - A subprocess' stdin pipe\n - A named pipe that the subprocess reads from\n - A regular file"}
{"_id": "q_1454", "text": "Creates a HTTP server listening on a given host and port.\n\n If host is empty, listen on all available interfaces, and if port is 0,\n listen on a random high port."}
{"_id": "q_1455", "text": "Repeatedly accept HTTP connections on a server.\n\n Forever if the serving externally, or while a player is running if it is not\n empty."}
{"_id": "q_1456", "text": "Continuously output the stream over HTTP."}
{"_id": "q_1457", "text": "Prepares a filename to be passed to the player."}
{"_id": "q_1458", "text": "Reads data from stream and then writes it to the output."}
{"_id": "q_1459", "text": "Decides what to do with the selected stream.\n\n Depending on arguments it can be one of these:\n - Output internal command-line\n - Output JSON represenation\n - Continuously output the stream over HTTP\n - Output stream data to selected output"}
{"_id": "q_1460", "text": "Fetches streams using correct parameters."}
{"_id": "q_1461", "text": "Returns the real stream name of a synonym."}
{"_id": "q_1462", "text": "Formats a dict of streams.\n\n Filters out synonyms and displays them next to\n the stream they point to.\n\n Streams are sorted according to their quality\n (based on plugin.stream_weight)."}
{"_id": "q_1463", "text": "The URL handler.\n\n Attempts to resolve the URL to a plugin and then attempts\n to fetch a list of available streams.\n\n Proceeds to handle stream if user specified a valid one,\n otherwise output list of valid streams."}
{"_id": "q_1464", "text": "Opens a web browser to allow the user to grant Streamlink\n access to their Twitch account."}
{"_id": "q_1465", "text": "Console setup."}
{"_id": "q_1466", "text": "Sets the global HTTP settings, such as proxy and headers."}
{"_id": "q_1467", "text": "Loads any additional plugins."}
{"_id": "q_1468", "text": "Sets Streamlink options."}
{"_id": "q_1469", "text": "Fallback if no stream_id was found before"}
{"_id": "q_1470", "text": "Returns current value of specified option.\n\n :param key: key of the option"}
{"_id": "q_1471", "text": "Returns current value of plugin specific option.\n\n :param plugin: name of the plugin\n :param key: key of the option"}
{"_id": "q_1472", "text": "Attempts to find a plugin that can use this URL.\n\n The default protocol (http) will be prefixed to the URL if\n not specified.\n\n Raises :exc:`NoPluginError` on failure.\n\n :param url: a URL to match against loaded plugins\n :param follow_redirect: follow redirects"}
{"_id": "q_1473", "text": "Checks if the string value starts with another string."}
{"_id": "q_1474", "text": "Checks if the string value contains another string."}
{"_id": "q_1475", "text": "Filters out unwanted items using the specified function.\n\n Supports both dicts and sequences, key/value pairs are\n expanded when applied to a dict."}
{"_id": "q_1476", "text": "Apply function to each value inside the sequence or dict.\n\n Supports both dicts and sequences, key/value pairs are\n expanded when applied to a dict."}
{"_id": "q_1477", "text": "Parses an URL and validates its attributes."}
{"_id": "q_1478", "text": "Find a list of XML elements via xpath."}
{"_id": "q_1479", "text": "Attempts to parse a M3U8 playlist from a string of data.\n\n If specified, *base_uri* is the base URI that relative URIs will\n be joined together with, otherwise relative URIs will be as is.\n\n If specified, *parser* can be a M3U8Parser subclass to be used\n to parse the data."}
{"_id": "q_1480", "text": "Logs in to Steam"}
{"_id": "q_1481", "text": "Returns the stream_id contained in the HTML."}
{"_id": "q_1482", "text": "Creates a key-function mapping.\n\n The return value from the function should be either\n - A tuple containing a name and stream\n - A iterator of tuples containing a name and stream\n\n Any extra arguments will be passed to the function."}
{"_id": "q_1483", "text": "Makes a call against the api.\n\n :param entrypoint: API method to call.\n :param params: parameters to include in the request data.\n :param schema: schema to use to validate the data"}
{"_id": "q_1484", "text": "Starts a session against Crunchyroll's server.\n Is recommended that you call this method before making any other calls\n to make sure you have a valid session against the server."}
{"_id": "q_1485", "text": "Creates a new CrunchyrollAPI object, initiates it's session and\n tries to authenticate it either by using saved credentials or the\n user's username and password."}
{"_id": "q_1486", "text": "Log 'msg % args' at level 'level' only if condition is fulfilled."}
{"_id": "q_1487", "text": "Creates a distributed session.\n\n It calls `MonitoredTrainingSession` to create a :class:`MonitoredSession` for distributed training.\n\n Parameters\n ----------\n task_spec : :class:`TaskSpecDef`.\n The task spec definition from create_task_spec_def()\n checkpoint_dir : str.\n Optional path to a directory where to restore variables.\n scaffold : ``Scaffold``\n A `Scaffold` used for gathering or building supportive ops.\n If not specified, a default one is created. It's used to finalize the graph.\n hooks : list of ``SessionRunHook`` objects.\n Optional\n chief_only_hooks : list of ``SessionRunHook`` objects.\n Activate these hooks if `is_chief==True`, ignore otherwise.\n save_checkpoint_secs : int\n The frequency, in seconds, that a checkpoint is saved\n using a default checkpoint saver. If `save_checkpoint_secs` is set to\n `None`, then the default checkpoint saver isn't used.\n save_summaries_steps : int\n The frequency, in number of global steps, that the\n summaries are written to disk using a default summary saver. If both\n `save_summaries_steps` and `save_summaries_secs` are set to `None`, then\n the default summary saver isn't used. Default 100.\n save_summaries_secs : int\n The frequency, in secs, that the summaries are written\n to disk using a default summary saver. If both `save_summaries_steps` and\n `save_summaries_secs` are set to `None`, then the default summary saver\n isn't used. 
Default not enabled.\n config : ``tf.ConfigProto``\n an instance of `tf.ConfigProto` proto used to configure the session.\n It's the `config` argument of constructor of `tf.Session`.\n stop_grace_period_secs : int\n Number of seconds given to threads to stop after\n `close()` has been called.\n log_step_count_steps : int\n The frequency, in number of global steps, that the\n global step/sec is logged.\n\n Examples\n --------\n A simple example for distributed training where all the workers use the same dataset:\n\n >>> task_spec = TaskSpec()\n >>> with tf.device(task_spec.device_fn()):\n >>> tensors = create_graph()\n >>> with tl.DistributedSession(task_spec=task_spec,\n ... checkpoint_dir='/tmp/ckpt') as session:\n >>> while not session.should_stop():\n >>> session.run(tensors)\n\n An example where the dataset is shared among the workers\n (see https://www.tensorflow.org/programmers_guide/datasets):\n\n >>> task_spec = TaskSpec()\n >>> # dataset is a :class:`tf.data.Dataset` with the raw data\n >>> dataset = create_dataset()\n >>> if task_spec is not None:\n >>> dataset = dataset.shard(task_spec.num_workers, task_spec.shard_index)\n >>> # shuffle or apply a map function to the new sharded dataset, for example:\n >>> dataset = dataset.shuffle(buffer_size=10000)\n >>> dataset = dataset.batch(batch_size)\n >>> dataset = dataset.repeat(num_epochs)\n >>> # create the iterator for the dataset and the input tensor\n >>> iterator = dataset.make_one_shot_iterator()\n >>> next_element = iterator.get_next()\n >>> with tf.device(task_spec.device_fn()):\n >>> # next_element is the input for the graph\n >>> tensors = create_graph(next_element)\n >>> with tl.DistributedSession(task_spec=task_spec,\n ... checkpoint_dir='/tmp/ckpt') as session:\n >>> while not session.should_stop():\n >>> session.run(tensors)\n\n References\n ----------\n - `MonitoredTrainingSession <https://www.tensorflow.org/api_docs/python/tf/train/MonitoredTrainingSession>`__"}
{"_id": "q_1488", "text": "A helper function that shows how to train and validate a model at the same time.\n\n Parameters\n ----------\n validate_step_size : int\n Validate the training network every N steps."}
{"_id": "q_1489", "text": "A generic function to load mnist-like dataset.\n\n Parameters:\n ----------\n shape : tuple\n The shape of digit images.\n path : str\n The path that the data is downloaded to.\n name : str\n The dataset name you want to use(the default is 'mnist').\n url : str\n The url of dataset(the default is 'http://yann.lecun.com/exdb/mnist/')."}
{"_id": "q_1490", "text": "Load Matt Mahoney's dataset.\n\n Download a text file from Matt Mahoney's website\n if not present, and make sure it's the right size.\n Extract the first file enclosed in a zip file as a list of words.\n This dataset can be used for Word Embedding.\n\n Parameters\n ----------\n path : str\n The path that the data is downloaded to, defaults is ``data/mm_test8/``.\n\n Returns\n --------\n list of str\n The raw text data e.g. [.... 'their', 'families', 'who', 'were', 'expelled', 'from', 'jerusalem', ...]\n\n Examples\n --------\n >>> words = tl.files.load_matt_mahoney_text8_dataset()\n >>> print('Data size', len(words))"}
{"_id": "q_1491", "text": "Load IMDB dataset.\n\n Parameters\n ----------\n path : str\n The path that the data is downloaded to, defaults is ``data/imdb/``.\n nb_words : int\n Number of words to get.\n skip_top : int\n Top most frequent words to ignore (they will appear as oov_char value in the sequence data).\n maxlen : int\n Maximum sequence length. Any longer sequence will be truncated.\n seed : int\n Seed for reproducible data shuffling.\n start_char : int\n The start of a sequence will be marked with this character. Set to 1 because 0 is usually the padding character.\n oov_char : int\n Words that were cut out because of the num_words or skip_top limit will be replaced with this character.\n index_from : int\n Index actual words with this index and higher.\n\n Examples\n --------\n >>> X_train, y_train, X_test, y_test = tl.files.load_imdb_dataset(\n ... nb_words=20000, test_split=0.2)\n >>> print('X_train.shape', X_train.shape)\n (20000,) [[1, 62, 74, ... 1033, 507, 27],[1, 60, 33, ... 13, 1053, 7]..]\n >>> print('y_train.shape', y_train.shape)\n (20000,) [1 0 0 ..., 1 0 1]\n\n References\n -----------\n - `Modified from keras. <https://github.com/fchollet/keras/blob/master/keras/datasets/imdb.py>`__"}
{"_id": "q_1492", "text": "Load Nietzsche dataset.\n\n Parameters\n ----------\n path : str\n The path that the data is downloaded to, defaults is ``data/nietzsche/``.\n\n Returns\n --------\n str\n The content.\n\n Examples\n --------\n >>> see tutorial_generate_text.py\n >>> words = tl.files.load_nietzsche_dataset()\n >>> words = basic_clean_str(words)\n >>> words = words.split()"}
{"_id": "q_1493", "text": "Load WMT'15 English-to-French translation dataset.\n\n It will download the data from the WMT'15 Website (10^9-French-English corpus), and the 2013 news test from the same site as development set.\n Returns the directories of training data and test data.\n\n Parameters\n ----------\n path : str\n The path that the data is downloaded to, defaults is ``data/wmt_en_fr/``.\n\n References\n ----------\n - Code modified from /tensorflow/models/rnn/translation/data_utils.py\n\n Notes\n -----\n Usually, it will take a long time to download this dataset."}
{"_id": "q_1494", "text": "Download file from Google Drive.\n\n See ``tl.files.load_celebA_dataset`` for example.\n\n Parameters\n --------------\n ID : str\n The driver ID.\n destination : str\n The destination for save file."}
{"_id": "q_1495", "text": "Load CelebA dataset\n\n Return a list of image path.\n\n Parameters\n -----------\n path : str\n The path that the data is downloaded to, defaults is ``data/celebA/``."}
{"_id": "q_1496", "text": "Assign the given parameters to the TensorLayer network.\n\n Parameters\n ----------\n sess : Session\n TensorFlow Session.\n params : list of array\n A list of parameters (array) in order.\n network : :class:`Layer`\n The network to be assigned.\n\n Returns\n --------\n list of operations\n A list of tf ops in order that assign params. Support sess.run(ops) manually.\n\n Examples\n --------\n - See ``tl.files.save_npz``\n\n References\n ----------\n - `Assign value to a TensorFlow variable <http://stackoverflow.com/questions/34220532/how-to-assign-value-to-a-tensorflow-variable>`__"}
{"_id": "q_1497", "text": "Load model from npz and assign to a network.\n\n Parameters\n -------------\n sess : Session\n TensorFlow Session.\n name : str\n The name of the `.npz` file.\n network : :class:`Layer`\n The network to be assigned.\n\n Returns\n --------\n False or network\n Returns False, if the model is not exist.\n\n Examples\n --------\n - See ``tl.files.save_npz``"}
{"_id": "q_1498", "text": "Input parameters and the file name, save parameters as a dictionary into .npz file.\n\n Use ``tl.files.load_and_assign_npz_dict()`` to restore.\n\n Parameters\n ----------\n save_list : list of parameters\n A list of parameters (tensor) to be saved.\n name : str\n The name of the `.npz` file.\n sess : Session\n TensorFlow Session."}
{"_id": "q_1499", "text": "Load parameters from `ckpt` file.\n\n Parameters\n ------------\n sess : Session\n TensorFlow Session.\n mode_name : str\n The name of the model, default is ``model.ckpt``.\n save_dir : str\n The path / file directory to the `ckpt`, default is ``checkpoint``.\n var_list : list of tensor\n The parameters / variables (tensor) to be saved. If empty, save all global variables (default).\n is_latest : boolean\n Whether to load the latest `ckpt`, if False, load the `ckpt` with the name of ```mode_name``.\n printable : boolean\n Whether to print all parameters information.\n\n Examples\n ----------\n - Save all global parameters.\n\n >>> tl.files.save_ckpt(sess=sess, mode_name='model.ckpt', save_dir='model', printable=True)\n\n - Save specific parameters.\n\n >>> tl.files.save_ckpt(sess=sess, mode_name='model.ckpt', var_list=net.all_params, save_dir='model', printable=True)\n\n - Load latest ckpt.\n\n >>> tl.files.load_ckpt(sess=sess, var_list=net.all_params, save_dir='model', printable=True)\n\n - Load specific ckpt.\n\n >>> tl.files.load_ckpt(sess=sess, mode_name='model.ckpt', var_list=net.all_params, save_dir='model', is_latest=False, printable=True)"}
{"_id": "q_1500", "text": "Load `.npy` file.\n\n Parameters\n ------------\n path : str\n Path to the file (optional).\n name : str\n File name.\n\n Examples\n ---------\n - see tl.files.save_any_to_npy()"}
{"_id": "q_1501", "text": "Checks if file exists in working_directory otherwise tries to dowload the file,\n and optionally also tries to extract the file if format is \".zip\" or \".tar\"\n\n Parameters\n -----------\n filename : str\n The name of the (to be) dowloaded file.\n working_directory : str\n A folder path to search for the file in and dowload the file to\n url : str\n The URL to download the file from\n extract : boolean\n If True, tries to uncompress the dowloaded file is \".tar.gz/.tar.bz2\" or \".zip\" file, default is False.\n expected_bytes : int or None\n If set tries to verify that the downloaded file is of the specified size, otherwise raises an Exception, defaults is None which corresponds to no check being performed.\n\n Returns\n ----------\n str\n File path of the dowloaded (uncompressed) file.\n\n Examples\n --------\n >>> down_file = tl.files.maybe_download_and_extract(filename='train-images-idx3-ubyte.gz',\n ... working_directory='data/',\n ... url_source='http://yann.lecun.com/exdb/mnist/')\n >>> tl.files.maybe_download_and_extract(filename='ADEChallengeData2016.zip',\n ... working_directory='data/',\n ... url_source='http://sceneparsing.csail.mit.edu/data/',\n ... extract=True)"}
{"_id": "q_1502", "text": "Process a batch of data by given function by threading.\n\n Usually be used for data augmentation.\n\n Parameters\n -----------\n data : numpy.array or others\n The data to be processed.\n thread_count : int\n The number of threads to use.\n fn : function\n The function for data processing.\n more args : the args for `fn`\n Ssee Examples below.\n\n Examples\n --------\n Process images.\n\n >>> images, _, _, _ = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3))\n >>> images = tl.prepro.threading_data(images[0:32], tl.prepro.zoom, zoom_range=[0.5, 1])\n\n Customized image preprocessing function.\n\n >>> def distort_img(x):\n >>> x = tl.prepro.flip_axis(x, axis=0, is_random=True)\n >>> x = tl.prepro.flip_axis(x, axis=1, is_random=True)\n >>> x = tl.prepro.crop(x, 100, 100, is_random=True)\n >>> return x\n >>> images = tl.prepro.threading_data(images, distort_img)\n\n Process images and masks together (Usually be used for image segmentation).\n\n >>> X, Y --> [batch_size, row, col, 1]\n >>> data = tl.prepro.threading_data([_ for _ in zip(X, Y)], tl.prepro.zoom_multi, zoom_range=[0.5, 1], is_random=True)\n data --> [batch_size, 2, row, col, 1]\n >>> X_, Y_ = data.transpose((1,0,2,3,4))\n X_, Y_ --> [batch_size, row, col, 1]\n >>> tl.vis.save_image(X_, 'images.png')\n >>> tl.vis.save_image(Y_, 'masks.png')\n\n Process images and masks together by using ``thread_count``.\n\n >>> X, Y --> [batch_size, row, col, 1]\n >>> data = tl.prepro.threading_data(X, tl.prepro.zoom_multi, 8, zoom_range=[0.5, 1], is_random=True)\n data --> [batch_size, 2, row, col, 1]\n >>> X_, Y_ = data.transpose((1,0,2,3,4))\n X_, Y_ --> [batch_size, row, col, 1]\n >>> tl.vis.save_image(X_, 'after.png')\n >>> tl.vis.save_image(Y_, 'before.png')\n\n Customized function for processing images and masks together.\n\n >>> def distort_img(data):\n >>> x, y = data\n >>> x, y = tl.prepro.flip_axis_multi([x, y], axis=0, is_random=True)\n >>> x, y = tl.prepro.flip_axis_multi([x, 
y], axis=1, is_random=True)\n >>> x, y = tl.prepro.crop_multi([x, y], 100, 100, is_random=True)\n >>> return x, y\n\n >>> X, Y --> [batch_size, row, col, channel]\n >>> data = tl.prepro.threading_data([_ for _ in zip(X, Y)], distort_img)\n >>> X_, Y_ = data.transpose((1,0,2,3,4))\n\n Returns\n -------\n list or numpyarray\n The processed results.\n\n References\n ----------\n - `python queue <https://pymotw.com/2/Queue/index.html#module-Queue>`__\n - `run with limited queue <http://effbot.org/librarybook/queue.htm>`__"}
{"_id": "q_1503", "text": "Projective transform by given coordinates, usually 4 coordinates.\n\n see `scikit-image <http://scikit-image.org/docs/dev/auto_examples/applications/plot_geometric.html>`__.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n src : list or numpy\n The original coordinates, usually 4 coordinates of (width, height).\n dst : list or numpy\n The coordinates after transformation, the number of coordinates is the same with src.\n map_args : dictionary or None\n Keyword arguments passed to inverse map.\n output_shape : tuple of 2 int\n Shape of the output image generated. By default the shape of the input image is preserved. Note that, even for multi-band images, only rows and columns need to be specified.\n order : int\n The order of interpolation. The order has to be in the range 0-5:\n - 0 Nearest-neighbor\n - 1 Bi-linear (default)\n - 2 Bi-quadratic\n - 3 Bi-cubic\n - 4 Bi-quartic\n - 5 Bi-quintic\n mode : str\n One of `constant` (default), `edge`, `symmetric`, `reflect` or `wrap`.\n Points outside the boundaries of the input are filled according to the given mode. Modes match the behaviour of numpy.pad.\n cval : float\n Used in conjunction with mode `constant`, the value outside the image boundaries.\n clip : boolean\n Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range.\n preserve_range : boolean\n Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n Examples\n --------\n Assume X is an image from CIFAR-10, i.e. 
shape == (32, 32, 3)\n\n >>> src = [[0,0],[0,32],[32,0],[32,32]] # [w, h]\n >>> dst = [[10,10],[0,32],[32,0],[32,32]]\n >>> x = tl.prepro.projective_transform_by_points(X, src, dst)\n\n References\n -----------\n - `scikit-image : geometric transformations <http://scikit-image.org/docs/dev/auto_examples/applications/plot_geometric.html>`__\n - `scikit-image : examples <http://scikit-image.org/docs/dev/auto_examples/index.html>`__"}
{"_id": "q_1504", "text": "Rotate an image randomly or non-randomly.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n rg : int or float\n Degree to rotate, usually 0 ~ 180.\n is_random : boolean\n If True, randomly rotate. Default is False\n row_index col_index and channel_index : int\n Index of row, col and channel, default (0, 1, 2), for theano (1, 2, 0).\n fill_mode : str\n Method to fill missing pixel, default `nearest`, more options `constant`, `reflect` or `wrap`, see `scipy ndimage affine_transform <https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.interpolation.affine_transform.html>`__\n cval : float\n Value used for points outside the boundaries of the input if mode=`constant`. Default is 0.0\n order : int\n The order of interpolation. The order has to be in the range 0-5. See ``tl.prepro.affine_transform`` and `scipy ndimage affine_transform <https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.interpolation.affine_transform.html>`__\n\n Returns\n -------\n numpy.array\n A processed image.\n\n Examples\n ---------\n >>> x --> [row, col, 1]\n >>> x = tl.prepro.rotation(x, rg=40, is_random=False)\n >>> tl.vis.save_image(x, 'im.png')"}
{"_id": "q_1505", "text": "Randomly or centrally crop an image.\n\n Parameters\n ----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n wrg : int\n Size of width.\n hrg : int\n Size of height.\n is_random : boolean,\n If True, randomly crop, else central crop. Default is False.\n row_index: int\n index of row.\n col_index: int\n index of column.\n\n Returns\n -------\n numpy.array\n A processed image."}
{"_id": "q_1506", "text": "Flip the axises of multiple images together, such as flip left and right, up and down, randomly or non-randomly,\n\n Parameters\n -----------\n x : list of numpy.array\n List of images with dimension of [n_images, row, col, channel] (default).\n others : args\n See ``tl.prepro.flip_axis``.\n\n Returns\n -------\n numpy.array\n A list of processed images."}
{"_id": "q_1507", "text": "Shift an image randomly or non-randomly.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n wrg : float\n Percentage of shift in axis x, usually -0.25 ~ 0.25.\n hrg : float\n Percentage of shift in axis y, usually -0.25 ~ 0.25.\n is_random : boolean\n If True, randomly shift. Default is False.\n row_index col_index and channel_index : int\n Index of row, col and channel, default (0, 1, 2), for theano (1, 2, 0).\n fill_mode : str\n Method to fill missing pixel, default `nearest`, more options `constant`, `reflect` or `wrap`, see `scipy ndimage affine_transform <https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.interpolation.affine_transform.html>`__\n cval : float\n Value used for points outside the boundaries of the input if mode='constant'. Default is 0.0.\n order : int\n The order of interpolation. The order has to be in the range 0-5. See ``tl.prepro.affine_transform`` and `scipy ndimage affine_transform <https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.interpolation.affine_transform.html>`__\n\n Returns\n -------\n numpy.array\n A processed image."}
{"_id": "q_1508", "text": "Change the brightness of a single image, randomly or non-randomly.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n gamma : float\n Non negative real number. Default value is 1.\n - Small than 1 means brighter.\n - If `is_random` is True, gamma in a range of (1-gamma, 1+gamma).\n gain : float\n The constant multiplier. Default value is 1.\n is_random : boolean\n If True, randomly change brightness. Default is False.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n References\n -----------\n - `skimage.exposure.adjust_gamma <http://scikit-image.org/docs/dev/api/skimage.exposure.html>`__\n - `chinese blog <http://www.cnblogs.com/denny402/p/5124402.html>`__"}
{"_id": "q_1509", "text": "Perform illumination augmentation for a single image, randomly or non-randomly.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n gamma : float\n Change brightness (the same with ``tl.prepro.brightness``)\n - if is_random=False, one float number, small than one means brighter, greater than one means darker.\n - if is_random=True, tuple of two float numbers, (min, max).\n contrast : float\n Change contrast.\n - if is_random=False, one float number, small than one means blur.\n - if is_random=True, tuple of two float numbers, (min, max).\n saturation : float\n Change saturation.\n - if is_random=False, one float number, small than one means unsaturation.\n - if is_random=True, tuple of two float numbers, (min, max).\n is_random : boolean\n If True, randomly change illumination. Default is False.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n Examples\n ---------\n Random\n\n >>> x = tl.prepro.illumination(x, gamma=(0.5, 5.0), contrast=(0.3, 1.0), saturation=(0.7, 1.0), is_random=True)\n\n Non-random\n\n >>> x = tl.prepro.illumination(x, 0.5, 0.6, 0.8, is_random=False)"}
{"_id": "q_1510", "text": "Adjust hue of an RGB image.\n\n This is a convenience method that converts an RGB image to float representation, converts it to HSV, add an offset to the hue channel, converts back to RGB and then back to the original data type.\n For TF, see `tf.image.adjust_hue <https://www.tensorflow.org/api_docs/python/tf/image/adjust_hue>`__.and `tf.image.random_hue <https://www.tensorflow.org/api_docs/python/tf/image/random_hue>`__.\n\n Parameters\n -----------\n im : numpy.array\n An image with values between 0 and 255.\n hout : float\n The scale value for adjusting hue.\n - If is_offset is False, set all hue values to this value. 0 is red; 0.33 is green; 0.66 is blue.\n - If is_offset is True, add this value as the offset to the hue channel.\n is_offset : boolean\n Whether `hout` is added on HSV as offset or not. Default is True.\n is_clip : boolean\n If HSV value smaller than 0, set to 0. Default is True.\n is_random : boolean\n If True, randomly change hue. Default is False.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n Examples\n ---------\n Random, add a random value between -0.2 and 0.2 as the offset to every hue values.\n\n >>> im_hue = tl.prepro.adjust_hue(image, hout=0.2, is_offset=True, is_random=False)\n\n Non-random, make all hue to green.\n\n >>> im_green = tl.prepro.adjust_hue(image, hout=0.66, is_offset=False, is_random=False)\n\n References\n -----------\n - `tf.image.random_hue <https://www.tensorflow.org/api_docs/python/tf/image/random_hue>`__.\n - `tf.image.adjust_hue <https://www.tensorflow.org/api_docs/python/tf/image/adjust_hue>`__.\n - `StackOverflow: Changing image hue with python PIL <https://stackoverflow.com/questions/7274221/changing-image-hue-with-python-pil>`__."}
{"_id": "q_1511", "text": "Resize an image by given output size and method.\n\n Warning, this function will rescale the value to [0, 255].\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n size : list of 2 int or None\n For height and width.\n interp : str\n Interpolation method for re-sizing (`nearest`, `lanczos`, `bilinear`, `bicubic` (default) or `cubic`).\n mode : str\n The PIL image mode (`P`, `L`, etc.) to convert image before resizing.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n References\n ------------\n - `scipy.misc.imresize <https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imresize.html>`__"}
{"_id": "q_1512", "text": "Normalize every pixels by the same given mean and std, which are usually\n compute from all examples.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n mean : float\n Value for subtraction.\n std : float\n Value for division.\n epsilon : float\n A small position value for dividing standard deviation.\n\n Returns\n -------\n numpy.array\n A processed image."}
{"_id": "q_1513", "text": "Apply ZCA whitening on an image by given principal components matrix.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n principal_components : matrix\n Matrix from ``get_zca_whitening_principal_components_img``.\n\n Returns\n -------\n numpy.array\n A processed image."}
{"_id": "q_1514", "text": "Randomly set some pixels to zero by a given keeping probability.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] or [row, col].\n keep : float\n The keeping probability (0, 1), the lower more values will be set to zero.\n\n Returns\n -------\n numpy.array\n A processed image."}
{"_id": "q_1515", "text": "Inputs a list of points, return a 2D image.\n\n Parameters\n --------------\n list_points : list of 2 int\n [[x, y], [x, y]..] for point coordinates.\n size : tuple of 2 int\n (w, h) for output size.\n val : float or int\n For the contour value.\n\n Returns\n -------\n numpy.array\n An image."}
{"_id": "q_1516", "text": "Parse darknet annotation format into two lists for class and bounding box.\n\n Input list of [[class, x, y, w, h], ...], return two list of [class ...] and [[x, y, w, h], ...].\n\n Parameters\n ------------\n annotations : list of list\n A list of class and bounding boxes of images e.g. [[class, x, y, w, h], ...]\n\n Returns\n -------\n list of int\n List of class labels.\n\n list of list of 4 numbers\n List of bounding box."}
{"_id": "q_1517", "text": "Resize an image, and compute the new bounding box coordinates.\n\n Parameters\n -------------\n im : numpy.array\n An image with dimension of [row, col, channel] (default).\n coords : list of list of 4 int/float or None\n Coordinates [[x, y, w, h], [x, y, w, h], ...]\n size interp and mode : args\n See ``tl.prepro.imresize``.\n is_rescale : boolean\n Set to True, if the input coordinates are rescaled to [0, 1], then return the original coordinates. Default is False.\n\n Returns\n -------\n numpy.array\n A processed image\n list of list of 4 numbers\n A list of new bounding boxes.\n\n Examples\n --------\n >>> im = np.zeros([80, 100, 3]) # as an image with shape width=100, height=80\n >>> _, coords = obj_box_imresize(im, coords=[[20, 40, 30, 30], [10, 20, 20, 20]], size=[160, 200], is_rescale=False)\n >>> print(coords)\n [[40, 80, 60, 60], [20, 40, 40, 40]]\n >>> _, coords = obj_box_imresize(im, coords=[[20, 40, 30, 30]], size=[40, 100], is_rescale=False)\n >>> print(coords)\n [[20, 20, 30, 15]]\n >>> _, coords = obj_box_imresize(im, coords=[[20, 40, 30, 30]], size=[60, 150], is_rescale=False)\n >>> print(coords)\n [[30, 30, 45, 22]]\n >>> im2, coords = obj_box_imresize(im, coords=[[0.2, 0.4, 0.3, 0.3]], size=[160, 200], is_rescale=True)\n >>> print(coords, im2.shape)\n [[0.2, 0.4, 0.3, 0.3]] (160, 200, 3)"}
{"_id": "q_1518", "text": "Return mask for sequences.\n\n Parameters\n -----------\n sequences : list of list of int\n All sequences where each row is a sequence.\n pad_val : int\n The pad value.\n\n Returns\n ----------\n list of list of int\n The mask.\n\n Examples\n ---------\n >>> sentences_ids = [[4, 0, 5, 3, 0, 0],\n ... [5, 3, 9, 4, 9, 0]]\n >>> mask = sequences_get_mask(sentences_ids, pad_val=0)\n [[1 1 1 1 0 0]\n [1 1 1 1 1 0]]"}
{"_id": "q_1519", "text": "Flip an image and corresponding keypoints.\n\n Parameters\n -----------\n image : 3 channel image\n The given image for augmentation.\n annos : list of list of floats\n The keypoints annotation of people.\n mask : single channel image or None\n The mask if available.\n prob : float, 0 to 1\n The probability to flip the image, if 1, always flip the image.\n flip_list : tuple of int\n Denotes how the keypoints number be changed after flipping which is required for pose estimation task.\n The left and right body should be maintained rather than switch.\n (Default COCO format).\n Set to an empty tuple if you don't need to maintain left and right information.\n\n Returns\n ----------\n preprocessed image, annos, mask"}
{"_id": "q_1520", "text": "Take 1D float array of rewards and compute discounted rewards for an\n episode. When encount a non-zero value, consider as the end a of an episode.\n\n Parameters\n ----------\n rewards : list\n List of rewards\n gamma : float\n Discounted factor\n mode : int\n Mode for computing the discount rewards.\n - If mode == 0, reset the discount process when encount a non-zero reward (Ping-pong game).\n - If mode == 1, would not reset the discount process.\n\n Returns\n --------\n list of float\n The discounted rewards.\n\n Examples\n ----------\n >>> rewards = np.asarray([0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1])\n >>> gamma = 0.9\n >>> discount_rewards = tl.rein.discount_episode_rewards(rewards, gamma)\n >>> print(discount_rewards)\n [ 0.72899997 0.81 0.89999998 1. 0.72899997 0.81\n 0.89999998 1. 0.72899997 0.81 0.89999998 1. ]\n >>> discount_rewards = tl.rein.discount_episode_rewards(rewards, gamma, mode=1)\n >>> print(discount_rewards)\n [ 1.52110755 1.69011939 1.87791049 2.08656716 1.20729685 1.34144104\n 1.49048996 1.65610003 0.72899997 0.81 0.89999998 1. ]"}
{"_id": "q_1521", "text": "Calculate the loss for Policy Gradient Network.\n\n Parameters\n ----------\n logits : tensor\n The network outputs without softmax. This function implements softmax inside.\n actions : tensor or placeholder\n The agent actions.\n rewards : tensor or placeholder\n The rewards.\n\n Returns\n --------\n Tensor\n The TensorFlow loss function.\n\n Examples\n ----------\n >>> states_batch_pl = tf.placeholder(tf.float32, shape=[None, D])\n >>> network = InputLayer(states_batch_pl, name='input')\n >>> network = DenseLayer(network, n_units=H, act=tf.nn.relu, name='relu1')\n >>> network = DenseLayer(network, n_units=3, name='out')\n >>> probs = network.outputs\n >>> sampling_prob = tf.nn.softmax(probs)\n >>> actions_batch_pl = tf.placeholder(tf.int32, shape=[None])\n >>> discount_rewards_batch_pl = tf.placeholder(tf.float32, shape=[None])\n >>> loss = tl.rein.cross_entropy_reward_loss(probs, actions_batch_pl, discount_rewards_batch_pl)\n >>> train_op = tf.train.RMSPropOptimizer(learning_rate, decay_rate).minimize(loss)"}
{"_id": "q_1522", "text": "Log weight.\n\n Parameters\n -----------\n probs : tensor\n If it is a network output, usually we should scale it to [0, 1] via softmax.\n weights : tensor\n The weights.\n\n Returns\n --------\n Tensor\n The Tensor after appling the log weighted expression."}
{"_id": "q_1523", "text": "Softmax cross-entropy operation, returns the TensorFlow expression of cross-entropy for two distributions,\n it implements softmax internally. See ``tf.nn.sparse_softmax_cross_entropy_with_logits``.\n\n Parameters\n ----------\n output : Tensor\n A batch of distribution with shape: [batch_size, num of classes].\n target : Tensor\n A batch of index with shape: [batch_size, ].\n name : string\n Name of this loss.\n\n Examples\n --------\n >>> ce = tl.cost.cross_entropy(y_logits, y_target_logits, 'my_loss')\n\n References\n -----------\n - About cross-entropy: `<https://en.wikipedia.org/wiki/Cross_entropy>`__.\n - The code is borrowed from: `<https://en.wikipedia.org/wiki/Cross_entropy>`__."}
{"_id": "q_1524", "text": "Sigmoid cross-entropy operation, see ``tf.nn.sigmoid_cross_entropy_with_logits``.\n\n Parameters\n ----------\n output : Tensor\n A batch of distribution with shape: [batch_size, num of classes].\n target : Tensor\n A batch of index with shape: [batch_size, ].\n name : string\n Name of this loss."}
{"_id": "q_1525", "text": "Binary cross entropy operation.\n\n Parameters\n ----------\n output : Tensor\n Tensor with type of `float32` or `float64`.\n target : Tensor\n The target distribution, format the same with `output`.\n epsilon : float\n A small value to avoid output to be zero.\n name : str\n An optional name to attach to this function.\n\n References\n -----------\n - `ericjang-DRAW <https://github.com/ericjang/draw/blob/master/draw.py#L73>`__"}
{"_id": "q_1526", "text": "Return the TensorFlow expression of normalized mean-square-error of two distributions.\n\n Parameters\n ----------\n output : Tensor\n 2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].\n target : Tensor\n The target distribution, format the same with `output`.\n name : str\n An optional name to attach to this function."}
{"_id": "q_1527", "text": "Returns the expression of cross-entropy of two sequences, implement\n softmax internally. Normally be used for Dynamic RNN with Synced sequence input and output.\n\n Parameters\n -----------\n logits : Tensor\n 2D tensor with shape of [batch_size * ?, n_classes], `?` means dynamic IDs for each example.\n - Can be get from `DynamicRNNLayer` by setting ``return_seq_2d`` to `True`.\n target_seqs : Tensor\n int of tensor, like word ID. [batch_size, ?], `?` means dynamic IDs for each example.\n input_mask : Tensor\n The mask to compute loss, it has the same size with `target_seqs`, normally 0 or 1.\n return_details : boolean\n Whether to return detailed losses.\n - If False (default), only returns the loss.\n - If True, returns the loss, losses, weights and targets (see source code).\n\n Examples\n --------\n >>> batch_size = 64\n >>> vocab_size = 10000\n >>> embedding_size = 256\n >>> input_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name=\"input\")\n >>> target_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name=\"target\")\n >>> input_mask = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name=\"mask\")\n >>> net = tl.layers.EmbeddingInputlayer(\n ... inputs = input_seqs,\n ... vocabulary_size = vocab_size,\n ... embedding_size = embedding_size,\n ... name = 'seq_embedding')\n >>> net = tl.layers.DynamicRNNLayer(net,\n ... cell_fn = tf.contrib.rnn.BasicLSTMCell,\n ... n_hidden = embedding_size,\n ... dropout = (0.7 if is_train else None),\n ... sequence_length = tl.layers.retrieve_seq_length_op2(input_seqs),\n ... return_seq_2d = True,\n ... name = 'dynamicrnn')\n >>> print(net.outputs)\n (?, 256)\n >>> net = tl.layers.DenseLayer(net, n_units=vocab_size, name=\"output\")\n >>> print(net.outputs)\n (?, 10000)\n >>> loss = tl.cost.cross_entropy_seq_with_mask(net.outputs, target_seqs, input_mask)"}
{"_id": "q_1528", "text": "Max-norm regularization returns a function that can be used to apply max-norm regularization to weights.\n\n More about max-norm, see `wiki-max norm <https://en.wikipedia.org/wiki/Matrix_norm#Max_norm>`_.\n The implementation follows `TensorFlow contrib <https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/regularizers.py>`__.\n\n Parameters\n ----------\n scale : float\n A scalar multiplier `Tensor`. 0.0 disables the regularizer.\n\n Returns\n ---------\n A function with signature `mn(weights, name=None)` that apply Lo regularization.\n\n Raises\n --------\n ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float."}
{"_id": "q_1529", "text": "Ramp activation function.\n\n Parameters\n ----------\n x : Tensor\n input.\n v_min : float\n cap input to v_min as a lower bound.\n v_max : float\n cap input to v_max as a upper bound.\n name : str\n The function name (optional).\n\n Returns\n -------\n Tensor\n A ``Tensor`` in the same type as ``x``."}
{"_id": "q_1530", "text": "Return the softmax outputs of images, every pixels have multiple label, the sum of a pixel is 1.\n\n Usually be used for image segmentation.\n\n Parameters\n ----------\n x : Tensor\n input.\n - For 2d image, 4D tensor (batch_size, height, weight, channel), where channel >= 2.\n - For 3d image, 5D tensor (batch_size, depth, height, weight, channel), where channel >= 2.\n name : str\n function name (optional)\n\n Returns\n -------\n Tensor\n A ``Tensor`` in the same type as ``x``.\n\n Examples\n --------\n >>> outputs = pixel_wise_softmax(network.outputs)\n >>> dice_loss = 1 - dice_coe(outputs, y_, epsilon=1e-5)\n\n References\n ----------\n - `tf.reverse <https://www.tensorflow.org/versions/master/api_docs/python/array_ops.html#reverse>`__"}
{"_id": "q_1531", "text": "Tensorflow version of np.repeat for 1D"}
{"_id": "q_1532", "text": "Batch version of tf_map_coordinates\n\n Only supports 2D feature maps\n\n Parameters\n ----------\n inputs : ``tf.Tensor``\n shape = (b*c, h, w)\n coords : ``tf.Tensor``\n shape = (b*c, h, w, n, 2)\n\n Returns\n -------\n ``tf.Tensor``\n A Tensor with the shape as (b*c, h, w, n)"}
{"_id": "q_1533", "text": "Batch map offsets into input\n\n Parameters\n ------------\n inputs : ``tf.Tensor``\n shape = (b, h, w, c)\n offsets: ``tf.Tensor``\n shape = (b, h, w, 2*n)\n grid_offset: `tf.Tensor``\n Offset grids shape = (h, w, n, 2)\n\n Returns\n -------\n ``tf.Tensor``\n A Tensor with the shape as (b, h, w, c)"}
{"_id": "q_1534", "text": "Generate a generator that input a group of example in numpy.array and\n their labels, return the examples and labels by the given batch size.\n\n Parameters\n ----------\n inputs : numpy.array\n The input features, every row is a example.\n targets : numpy.array\n The labels of inputs, every row is a example.\n batch_size : int\n The batch size.\n allow_dynamic_batch_size: boolean\n Allow the use of the last data batch in case the number of examples is not a multiple of batch_size, this may result in unexpected behaviour if other functions expect a fixed-sized batch-size.\n shuffle : boolean\n Indicating whether to use a shuffling queue, shuffle the dataset before return.\n\n Examples\n --------\n >>> X = np.asarray([['a','a'], ['b','b'], ['c','c'], ['d','d'], ['e','e'], ['f','f']])\n >>> y = np.asarray([0,1,2,3,4,5])\n >>> for batch in tl.iterate.minibatches(inputs=X, targets=y, batch_size=2, shuffle=False):\n >>> print(batch)\n (array([['a', 'a'], ['b', 'b']], dtype='<U1'), array([0, 1]))\n (array([['c', 'c'], ['d', 'd']], dtype='<U1'), array([2, 3]))\n (array([['e', 'e'], ['f', 'f']], dtype='<U1'), array([4, 5]))\n\n Notes\n -----\n If you have two inputs and one label and want to shuffle them together, e.g. X1 (1000, 100), X2 (1000, 80) and Y (1000, 1), you can stack them together (`np.hstack((X1, X2))`)\n into (1000, 180) and feed to ``inputs``. After getting a batch, you can split it back into X1 and X2."}
{"_id": "q_1535", "text": "Save model architecture and parameters into database, timestamp will be added automatically.\n\n Parameters\n ----------\n network : TensorLayer layer\n TensorLayer layer instance.\n model_name : str\n The name/key of model.\n kwargs : other events\n Other events, such as name, accuracy, loss, step number and etc (optinal).\n\n Examples\n ---------\n Save model architecture and parameters into database.\n >>> db.save_model(net, accuracy=0.8, loss=2.3, name='second_model')\n\n Load one model with parameters from database (run this in other script)\n >>> net = db.find_top_model(sess=sess, accuracy=0.8, loss=2.3)\n\n Find and load the latest model.\n >>> net = db.find_top_model(sess=sess, sort=[(\"time\", pymongo.DESCENDING)])\n >>> net = db.find_top_model(sess=sess, sort=[(\"time\", -1)])\n\n Find and load the oldest model.\n >>> net = db.find_top_model(sess=sess, sort=[(\"time\", pymongo.ASCENDING)])\n >>> net = db.find_top_model(sess=sess, sort=[(\"time\", 1)])\n\n Get model information\n >>> net._accuracy\n ... 0.8\n\n Returns\n ---------\n boolean : True for success, False for fail."}
{"_id": "q_1536", "text": "Saves one dataset into database, timestamp will be added automatically.\n\n Parameters\n ----------\n dataset : any type\n The dataset you want to store.\n dataset_name : str\n The name of dataset.\n kwargs : other events\n Other events, such as description, author and etc (optinal).\n\n Examples\n ----------\n Save dataset\n >>> db.save_dataset([X_train, y_train, X_test, y_test], 'mnist', description='this is a tutorial')\n\n Get dataset\n >>> dataset = db.find_top_dataset('mnist')\n\n Returns\n ---------\n boolean : Return True if save success, otherwise, return False."}
{"_id": "q_1537", "text": "Finds and returns a dataset from the database which matches the requirement.\n\n Parameters\n ----------\n dataset_name : str\n The name of dataset.\n sort : List of tuple\n PyMongo sort comment, search \"PyMongo find one sorting\" and `collection level operations <http://api.mongodb.com/python/current/api/pymongo/collection.html>`__ for more details.\n kwargs : other events\n Other events, such as description, author and etc (optinal).\n\n Examples\n ---------\n Save dataset\n >>> db.save_dataset([X_train, y_train, X_test, y_test], 'mnist', description='this is a tutorial')\n\n Get dataset\n >>> dataset = db.find_top_dataset('mnist')\n >>> datasets = db.find_datasets('mnist')\n\n Returns\n --------\n dataset : the dataset or False\n Return False if nothing found."}
{"_id": "q_1538", "text": "Finds and returns all datasets from the database which matches the requirement.\n In some case, the data in a dataset can be stored separately for better management.\n\n Parameters\n ----------\n dataset_name : str\n The name/key of dataset.\n kwargs : other events\n Other events, such as description, author and etc (optional).\n\n Returns\n --------\n params : the parameters, return False if nothing found."}
{"_id": "q_1539", "text": "Delete datasets.\n\n Parameters\n -----------\n kwargs : logging information\n Find items to delete, leave it empty to delete all log."}
{"_id": "q_1540", "text": "Saves the validation log, timestamp will be added automatically.\n\n Parameters\n -----------\n kwargs : logging information\n Events, such as accuracy, loss, step number and etc.\n\n Examples\n ---------\n >>> db.save_validation_log(accuracy=0.33, loss=0.98)"}
{"_id": "q_1541", "text": "Deletes training log.\n\n Parameters\n -----------\n kwargs : logging information\n Find items to delete, leave it empty to delete all log.\n\n Examples\n ---------\n Save training log\n >>> db.save_training_log(accuracy=0.33)\n >>> db.save_training_log(accuracy=0.44)\n\n Delete logs that match the requirement\n >>> db.delete_training_log(accuracy=0.33)\n\n Delete all logs\n >>> db.delete_training_log()"}
{"_id": "q_1542", "text": "Uploads a task to the database, timestamp will be added automatically.\n\n Parameters\n -----------\n task_name : str\n The task name.\n script : str\n File name of the python script.\n hyper_parameters : dictionary\n The hyper parameters pass into the script.\n saved_result_keys : list of str\n The keys of the task results to keep in the database when the task finishes.\n kwargs : other parameters\n Users customized parameters such as description, version number.\n\n Examples\n -----------\n Uploads a task\n >>> db.create_task(task_name='mnist', script='example/tutorial_mnist_simple.py', description='simple tutorial')\n\n Finds and runs the latest task\n >>> db.run_top_task(sess=sess, sort=[(\"time\", pymongo.DESCENDING)])\n >>> db.run_top_task(sess=sess, sort=[(\"time\", -1)])\n\n Finds and runs the oldest task\n >>> db.run_top_task(sess=sess, sort=[(\"time\", pymongo.ASCENDING)])\n >>> db.run_top_task(sess=sess, sort=[(\"time\", 1)])"}
{"_id": "q_1543", "text": "Finds and runs a pending task that in the first of the sorting list.\n\n Parameters\n -----------\n task_name : str\n The task name.\n sort : List of tuple\n PyMongo sort comment, search \"PyMongo find one sorting\" and `collection level operations <http://api.mongodb.com/python/current/api/pymongo/collection.html>`__ for more details.\n kwargs : other parameters\n Users customized parameters such as description, version number.\n\n Examples\n ---------\n Monitors the database and pull tasks to run\n >>> while True:\n >>> print(\"waiting task from distributor\")\n >>> db.run_top_task(task_name='mnist', sort=[(\"time\", -1)])\n >>> time.sleep(1)\n\n Returns\n --------\n boolean : True for success, False for fail."}
{"_id": "q_1544", "text": "Delete tasks.\n\n Parameters\n -----------\n kwargs : logging information\n Find items to delete, leave it empty to delete all log.\n\n Examples\n ---------\n >>> db.delete_tasks()"}
{"_id": "q_1545", "text": "Finds and runs a pending task.\n\n Parameters\n -----------\n task_name : str\n The task name.\n kwargs : other parameters\n Users customized parameters such as description, version number.\n\n Examples\n ---------\n Wait until all tasks finish in user's local console\n\n >>> while not db.check_unfinished_task():\n >>> time.sleep(1)\n >>> print(\"all tasks finished\")\n >>> sess = tf.InteractiveSession()\n >>> net = db.find_top_model(sess=sess, sort=[(\"test_accuracy\", -1)])\n >>> print(\"the best accuracy {} is from model {}\".format(net._test_accuracy, net._name))\n\n Returns\n --------\n boolean : True for success, False for fail."}
{"_id": "q_1546", "text": "Augment unigram features with hashed n-gram features."}
{"_id": "q_1547", "text": "Load IMDb data and augment with hashed n-gram features."}
{"_id": "q_1548", "text": "Read one image.\n\n Parameters\n -----------\n image : str\n The image file name.\n path : str\n The image folder path.\n\n Returns\n -------\n numpy.array\n The image."}
{"_id": "q_1549", "text": "Returns all images in list by given path and name of each image file.\n\n Parameters\n -------------\n img_list : list of str\n The image file names.\n path : str\n The image folder path.\n n_threads : int\n The number of threads to read image.\n printable : boolean\n Whether to print information when reading images.\n\n Returns\n -------\n list of numpy.array\n The images."}
{"_id": "q_1550", "text": "Save a image.\n\n Parameters\n -----------\n image : numpy array\n [w, h, c]\n image_path : str\n path"}
{"_id": "q_1551", "text": "Save multiple images into one single image.\n\n Parameters\n -----------\n images : numpy array\n (batch, w, h, c)\n size : list of 2 ints\n row and column number.\n number of images should be equal or less than size[0] * size[1]\n image_path : str\n save path\n\n Examples\n ---------\n >>> import numpy as np\n >>> import tensorlayer as tl\n >>> images = np.random.rand(64, 100, 100, 3)\n >>> tl.visualize.save_images(images, [8, 8], 'temp.png')"}
{"_id": "q_1552", "text": "Draw bboxes and class labels on image. Return or save the image with bboxes, example in the docs of ``tl.prepro``.\n\n Parameters\n -----------\n image : numpy.array\n The RGB image [height, width, channel].\n classes : list of int\n A list of class ID (int).\n coords : list of int\n A list of list for coordinates.\n - Should be [x, y, x2, y2] (up-left and botton-right format)\n - If [x_center, y_center, w, h] (set is_center to True).\n scores : list of float\n A list of score (float). (Optional)\n classes_list : list of str\n for converting ID to string on image.\n is_center : boolean\n Whether the coordinates is [x_center, y_center, w, h]\n - If coordinates are [x_center, y_center, w, h], set it to True for converting it to [x, y, x2, y2] (up-left and botton-right) internally.\n - If coordinates are [x1, x2, y1, y2], set it to False.\n is_rescale : boolean\n Whether to rescale the coordinates from pixel-unit format to ratio format.\n - If True, the input coordinates are the portion of width and high, this API will scale the coordinates to pixel unit internally.\n - If False, feed the coordinates with pixel unit format.\n save_name : None or str\n The name of image file (i.e. image.png), if None, not to save image.\n\n Returns\n -------\n numpy.array\n The saved image.\n\n References\n -----------\n - OpenCV rectangle and putText.\n - `scikit-image <http://scikit-image.org/docs/dev/api/skimage.draw.html#skimage.draw.rectangle>`__."}
{"_id": "q_1553", "text": "Display a group of RGB or Greyscale CNN masks.\n\n Parameters\n ----------\n CNN : numpy.array\n The image. e.g: 64 5x5 RGB images can be (5, 5, 3, 64).\n second : int\n The display second(s) for the image(s), if saveable is False.\n saveable : boolean\n Save or plot the figure.\n name : str\n A name to save the image, if saveable is True.\n fig_idx : int\n The matplotlib figure index.\n\n Examples\n --------\n >>> tl.visualize.CNN2d(network.all_params[0].eval(), second=10, saveable=True, name='cnn1_mnist', fig_idx=2012)"}
{"_id": "q_1554", "text": "Visualize the embeddings by using t-SNE.\n\n Parameters\n ----------\n embeddings : numpy.array\n The embedding matrix.\n reverse_dictionary : dictionary\n id_to_word, mapping id to unique word.\n plot_only : int\n The number of examples to plot, choice the most common words.\n second : int\n The display second(s) for the image(s), if saveable is False.\n saveable : boolean\n Save or plot the figure.\n name : str\n A name to save the image, if saveable is True.\n fig_idx : int\n matplotlib figure index.\n\n Examples\n --------\n >>> see 'tutorial_word2vec_basic.py'\n >>> final_embeddings = normalized_embeddings.eval()\n >>> tl.visualize.tsne_embedding(final_embeddings, labels, reverse_dictionary,\n ... plot_only=500, second=5, saveable=False, name='tsne')"}
{"_id": "q_1555", "text": "Visualize every columns of the weight matrix to a group of Greyscale img.\n\n Parameters\n ----------\n W : numpy.array\n The weight matrix\n second : int\n The display second(s) for the image(s), if saveable is False.\n saveable : boolean\n Save or plot the figure.\n shape : a list with 2 int or None\n The shape of feature image, MNIST is [28, 80].\n name : a string\n A name to save the image, if saveable is True.\n fig_idx : int\n matplotlib figure index.\n\n Examples\n --------\n >>> tl.visualize.draw_weights(network.all_params[0].eval(), second=10, saveable=True, name='weight_of_1st_layer', fig_idx=2012)"}
{"_id": "q_1556", "text": "Save data into TFRecord."}
{"_id": "q_1557", "text": "Return tensor to read from TFRecord."}
{"_id": "q_1558", "text": "Print all info of parameters in the network"}
{"_id": "q_1559", "text": "Print all info of layers in the network."}
{"_id": "q_1560", "text": "Return the parameters in a list of array."}
{"_id": "q_1561", "text": "Get all arguments of current layer for saving the graph."}
{"_id": "q_1562", "text": "Prefetches string values from disk into an input queue.\n\n In training the capacity of the queue is important because a larger queue\n means better mixing of training examples between shards. The minimum number of\n values kept in the queue is values_per_shard * input_queue_capacity_factor,\n where input_queue_memory factor should be chosen to trade-off better mixing\n with memory usage.\n\n Args:\n reader: Instance of tf.ReaderBase.\n file_pattern: Comma-separated list of file patterns (e.g.\n /tmp/train_data-?????-of-00100).\n is_training: Boolean; whether prefetching for training or eval.\n batch_size: Model batch size used to determine queue capacity.\n values_per_shard: Approximate number of values per shard.\n input_queue_capacity_factor: Minimum number of values to keep in the queue\n in multiples of values_per_shard. See comments above.\n num_reader_threads: Number of reader threads to fill the queue.\n shard_queue_name: Name for the shards filename queue.\n value_queue_name: Name for the values input queue.\n\n Returns:\n A Queue containing prefetched string values."}
{"_id": "q_1563", "text": "Batches input images and captions.\n\n This function splits the caption into an input sequence and a target sequence,\n where the target sequence is the input sequence right-shifted by 1. Input and\n target sequences are batched and padded up to the maximum length of sequences\n in the batch. A mask is created to distinguish real words from padding words.\n\n Example:\n Actual captions in the batch ('-' denotes padded character):\n [\n [ 1 2 5 4 5 ],\n [ 1 2 3 4 - ],\n [ 1 2 3 - - ],\n ]\n\n input_seqs:\n [\n [ 1 2 3 4 ],\n [ 1 2 3 - ],\n [ 1 2 - - ],\n ]\n\n target_seqs:\n [\n [ 2 3 4 5 ],\n [ 2 3 4 - ],\n [ 2 3 - - ],\n ]\n\n mask:\n [\n [ 1 1 1 1 ],\n [ 1 1 1 0 ],\n [ 1 1 0 0 ],\n ]\n\n Args:\n images_and_captions: A list of pairs [image, caption], where image is a\n Tensor of shape [height, width, channels] and caption is a 1-D Tensor of\n any length. Each pair will be processed and added to the queue in a\n separate thread.\n batch_size: Batch size.\n queue_capacity: Queue capacity.\n add_summaries: If true, add caption length summaries.\n\n Returns:\n images: A Tensor of shape [batch_size, height, width, channels].\n input_seqs: An int32 Tensor of shape [batch_size, padded_length].\n target_seqs: An int32 Tensor of shape [batch_size, padded_length].\n mask: An int32 0/1 Tensor of shape [batch_size, padded_length]."}
{"_id": "q_1564", "text": "Data Format aware version of tf.nn.batch_normalization."}
{"_id": "q_1565", "text": "Reshapes a high-dimensional input into a flat vector.\n\n [batch_size, mask_row, mask_col, n_mask] ---> [batch_size, mask_row x mask_col x n_mask]\n\n Parameters\n ----------\n variable : TensorFlow variable or tensor\n The variable or tensor to be flattened.\n name : str\n A unique layer name.\n\n Returns\n -------\n Tensor\n Flattened tensor\n\n Examples\n --------\n >>> import tensorflow as tf\n >>> import tensorlayer as tl\n >>> x = tf.placeholder(tf.float32, [None, 128, 128, 3])\n >>> # Convolution Layer with 32 filters and a kernel size of 5\n >>> network = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)\n >>> # Max Pooling (down-sampling) with strides of 2 and kernel size of 2\n >>> network = tf.layers.max_pooling2d(network, 2, 2)\n >>> print(network.get_shape()[:].as_list())\n >>> [None, 62, 62, 32]\n >>> network = tl.layers.flatten_reshape(network)\n >>> print(network.get_shape()[:].as_list()[1:])\n >>> [None, 123008]"}
{"_id": "q_1566", "text": "Get a list of layers' output in a network by a given name scope.\n\n Parameters\n -----------\n net : :class:`Layer`\n The last layer of the network.\n name : str\n Get the layers' output that contain this name.\n verbose : boolean\n If True, print information of all the layers' output\n\n Returns\n --------\n list of Tensor\n A list of layers' output (TensorFlow tensor)\n\n Examples\n ---------\n >>> import tensorlayer as tl\n >>> layers = tl.layers.get_layers_with_name(net, \"CNN\", True)"}
{"_id": "q_1567", "text": "Returns the initialized RNN state.\n The inputs are `LSTMStateTuple` or `State` of `RNNCells`, and an optional `feed_dict`.\n\n Parameters\n ----------\n state : RNN state.\n The TensorFlow's RNN state.\n feed_dict : dictionary\n Initial RNN state; if None, returns zero state.\n\n Returns\n -------\n RNN state\n The TensorFlow's RNN state."}
{"_id": "q_1568", "text": "Remove the repeated items in a list, and return the processed list.\n You may need it to create merged layers like Concat, Elementwise, etc.\n\n Parameters\n ----------\n x : list\n Input\n\n Returns\n -------\n list\n The list after removing its repeated items\n\n Examples\n -------\n >>> l = [2, 3, 4, 2, 3]\n >>> l = list_remove_repeat(l)\n [2, 3, 4]"}
{"_id": "q_1569", "text": "Ternary operation using a threshold computed from the weights."}
{"_id": "q_1570", "text": "Adds a deprecation notice to a docstring."}
{"_id": "q_1571", "text": "Creates a tensor with all elements set to `alpha_value`.\n This operation returns a tensor of type `dtype` with shape `shape` and all\n elements set to alpha.\n\n Parameters\n ----------\n shape: A list of integers, a tuple of integers, or a 1-D `Tensor` of type `int32`.\n The shape of the desired tensor\n alpha_value: `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`\n The value used to fill the resulting `Tensor`.\n name: str\n A name for the operation (optional).\n\n Returns\n -------\n A `Tensor` with all elements set to alpha.\n\n Examples\n --------\n >>> tl.alphas([2, 3], tf.int32) # [[alpha, alpha, alpha], [alpha, alpha, alpha]]"}
{"_id": "q_1572", "text": "Return the prediction results of a given non-time-series network.\n\n Parameters\n ----------\n sess : Session\n TensorFlow Session.\n network : TensorLayer layer\n The network.\n X : numpy.array\n The inputs.\n x : placeholder\n For inputs.\n y_op : placeholder\n The argmax expression of softmax outputs.\n batch_size : int or None\n The batch size for prediction; when the dataset is large, we should use minibatches for prediction;\n if the dataset is small, we can set it to None.\n\n Examples\n --------\n See `tutorial_mnist_simple.py <https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_mnist_simple.py>`_\n\n >>> y = network.outputs\n >>> y_op = tf.argmax(tf.nn.softmax(y), 1)\n >>> print(tl.utils.predict(sess, network, X_test, x, y_op))"}
{"_id": "q_1573", "text": "Input the predicted results, target results and\n the number of classes; return the confusion matrix, F1-score of each class,\n accuracy and macro F1-score.\n\n Parameters\n ----------\n y_test : list\n The target results\n y_predict : list\n The predicted results\n n_classes : int\n The number of classes\n\n Examples\n --------\n >>> c_mat, f1, acc, f1_macro = tl.utils.evaluation(y_test, y_predict, n_classes)"}
{"_id": "q_1574", "text": "Return a list of random integers for the given range and quantity.\n\n Parameters\n -----------\n min_v : number\n The minimum value.\n max_v : number\n The maximum value.\n number : int\n Number of values.\n seed : int or None\n The seed for the random generator.\n\n Examples\n ---------\n >>> r = get_random_int(min_v=0, max_v=10, number=5)\n [10, 2, 3, 3, 7]"}
{"_id": "q_1575", "text": "Clears all the placeholder variables of keep prob,\n including keeping probabilities of all dropout, denoising, dropconnect etc.\n\n Parameters\n ----------\n printable : boolean\n If True, print all deleted variables."}
{"_id": "q_1576", "text": "Sample an index from a probability array.\n\n Parameters\n ----------\n a : list of float\n List of probabilities.\n temperature : float or None\n The higher the more uniform. When a = [0.1, 0.2, 0.7],\n - temperature = 0.7, the distribution will be sharpened [0.05048273, 0.13588945, 0.81362782]\n - temperature = 1.0, the distribution will be the same [0.1, 0.2, 0.7]\n - temperature = 1.5, the distribution will be flattened [0.16008435, 0.25411807, 0.58579758]\n - If None, it will be ``np.argmax(a)``\n\n Notes\n ------\n - No matter what the temperature and input list are, the sum of all probabilities will be one. Even if input list = [1, 100, 200], the sum of all probabilities will still be one.\n - For a large vocabulary size, choose a higher temperature or ``tl.nlp.sample_top`` to avoid error."}
{"_id": "q_1577", "text": "Sample from ``top_k`` probabilities.\n\n Parameters\n ----------\n a : list of float\n List of probabilities.\n top_k : int\n Number of candidates to be considered."}
{"_id": "q_1578", "text": "Creates the vocabulary of word to word_id.\n\n See ``tutorial_tfrecord3.py``.\n\n The vocabulary is saved to disk in a text file of word counts. The id of each\n word in the file is its corresponding 0-based line number.\n\n Parameters\n ------------\n sentences : list of list of str\n All sentences for creating the vocabulary.\n word_counts_output_file : str\n The file name.\n min_word_count : int\n Minimum number of occurrences for a word.\n\n Returns\n --------\n :class:`SimpleVocabulary`\n The simple vocabulary object, see :class:`Vocabulary` for more.\n\n Examples\n --------\n Pre-process sentences\n\n >>> captions = [\"one two , three\", \"four five five\"]\n >>> processed_capts = []\n >>> for c in captions:\n >>> c = tl.nlp.process_sentence(c, start_word=\"<S>\", end_word=\"</S>\")\n >>> processed_capts.append(c)\n >>> print(processed_capts)\n ...[['<S>', 'one', 'two', ',', 'three', '</S>'], ['<S>', 'four', 'five', 'five', '</S>']]\n\n Create vocabulary\n\n >>> tl.nlp.create_vocab(processed_capts, word_counts_output_file='vocab.txt', min_word_count=1)\n Creating vocabulary.\n Total words: 8\n Words in vocabulary: 8\n Wrote vocabulary file: vocab.txt\n\n Get vocabulary object\n\n >>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word=\"<S>\", end_word=\"</S>\", unk_word=\"<UNK>\")\n INFO:tensorflow:Initializing vocabulary from file: vocab.txt\n [TL] Vocabulary from vocab.txt : <S> </S> <UNK>\n vocabulary with 10 words (includes start_word, end_word, unk_word)\n start_id: 2\n end_id: 3\n unk_id: 9\n pad_id: 0"}
{"_id": "q_1579", "text": "Reads through an analogy question file and returns its id format.\n\n Parameters\n ----------\n eval_file : str\n The file name.\n word2id : dictionary\n a dictionary that maps word to ID.\n\n Returns\n --------\n numpy.array\n A ``[n_examples, 4]`` numpy array containing the analogy question's word IDs.\n\n Examples\n ---------\n The file should be in this format\n\n >>> : capital-common-countries\n >>> Athens Greece Baghdad Iraq\n >>> Athens Greece Bangkok Thailand\n >>> Athens Greece Beijing China\n >>> Athens Greece Berlin Germany\n >>> Athens Greece Bern Switzerland\n >>> Athens Greece Cairo Egypt\n >>> Athens Greece Canberra Australia\n >>> Athens Greece Hanoi Vietnam\n >>> Athens Greece Havana Cuba\n\n Get the tokenized analogy question data\n\n >>> words = tl.files.load_matt_mahoney_text8_dataset()\n >>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)\n >>> analogy_questions = tl.nlp.read_analogies_file(eval_file='questions-words.txt', word2id=dictionary)\n >>> print(analogy_questions)\n [[ 3068 1248 7161 1581]\n [ 3068 1248 28683 5642]\n [ 3068 1248 3878 486]\n ...,\n [ 1216 4309 19982 25506]\n [ 1216 4309 3194 8650]\n [ 1216 4309 140 312]]"}
{"_id": "q_1580", "text": "Given a dictionary that maps a word to an integer id,\n returns a reverse dictionary that maps an id to a word.\n\n Parameters\n ----------\n word_to_id : dictionary\n that maps word to ID.\n\n Returns\n --------\n dictionary\n A dictionary that maps IDs to words."}
{"_id": "q_1581", "text": "Build the words dictionary and replace rare words with 'UNK' token.\n The most common word has the smallest integer id.\n\n Parameters\n ----------\n words : list of str or byte\n The context in list format. You may need to do preprocessing on the words, such as lower case, remove marks etc.\n vocabulary_size : int\n The maximum vocabulary size, limiting the vocabulary size. Then the script replaces rare words with 'UNK' token.\n printable : boolean\n Whether to print the read vocabulary size of the given words.\n unk_key : str\n Represent the unknown words.\n\n Returns\n --------\n data : list of int\n The context in a list of ID.\n count : list of tuple and list\n Pair words and IDs.\n - count[0] is a list : the number of rare words\n - count[1:] are tuples : the number of occurrence of each word\n - e.g. [['UNK', 418391], (b'the', 1061396), (b'of', 593677), (b'and', 416629), (b'one', 411764)]\n dictionary : dictionary\n It is `word_to_id` that maps word to ID.\n reverse_dictionary : a dictionary\n It is `id_to_word` that maps ID to word.\n\n Examples\n --------\n >>> words = tl.files.load_matt_mahoney_text8_dataset()\n >>> vocabulary_size = 50000\n >>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size)\n\n References\n -----------------\n - `tensorflow/examples/tutorials/word2vec/word2vec_basic.py <https://github.com/tensorflow/tensorflow/blob/r0.7/tensorflow/examples/tutorials/word2vec/word2vec_basic.py>`__"}
{"_id": "q_1582", "text": "Convert a string to a list of integers representing token-ids.\n\n For example, a sentence \"I have a dog\" may become tokenized into\n [\"I\", \"have\", \"a\", \"dog\"] and with vocabulary {\"I\": 1, \"have\": 2,\n \"a\": 4, \"dog\": 7} this function will return [1, 2, 4, 7].\n\n Parameters\n -----------\n sentence : tensorflow.python.platform.gfile.GFile Object\n The sentence in bytes format to convert to token-ids, see ``basic_tokenizer()`` and ``data_to_token_ids()``.\n vocabulary : dictionary\n Mapping tokens to integers.\n tokenizer : function\n A function to use to tokenize each sentence. If None, ``basic_tokenizer`` will be used.\n normalize_digits : boolean\n If true, all digits are replaced by 0.\n\n Returns\n --------\n list of int\n The token-ids for the sentence."}
{"_id": "q_1583", "text": "Calculate the BLEU score for hypotheses and references\n using the MOSES multi-bleu.perl script.\n\n Parameters\n ------------\n hypotheses : numpy.array.string\n A numpy array of strings where each string is a single example.\n references : numpy.array.string\n A numpy array of strings where each string is a single example.\n lowercase : boolean\n If True, pass the \"-lc\" flag to the multi-bleu script\n\n Examples\n ---------\n >>> hypotheses = [\"a bird is flying on the sky\"]\n >>> references = [\"two birds are flying on the sky\", \"a bird is on the top of the tree\", \"an airplane is on the sky\",]\n >>> score = tl.nlp.moses_multi_bleu(hypotheses, references)\n\n Returns\n --------\n float\n The BLEU score\n\n References\n ----------\n - `Google/seq2seq/metric/bleu <https://github.com/google/seq2seq>`__"}
{"_id": "q_1584", "text": "Returns the integer id of a word string."}
{"_id": "q_1585", "text": "Returns the integer word id of a word string."}
{"_id": "q_1586", "text": "Returns the word string of an integer word id."}
{"_id": "q_1587", "text": "Enable the diagnostic feature for debugging unexpected concurrency in\n acquiring ConnectionWrapper instances.\n\n NOTE: This MUST be done early in your application's execution, BEFORE any\n accesses to ConnectionFactory or connection policies from your application\n (including imports and sub-imports of your app).\n\n Parameters:\n ----------------------------------------------------------------\n maxConcurrency: A non-negative integer that represents the maximum expected\n number of outstanding connections. When this value is\n exceeded, useful information will be logged and, depending\n on the value of the raiseException arg,\n ConcurrencyExceededError may be raised.\n raiseException: If true, ConcurrencyExceededError will be raised when\n maxConcurrency is exceeded."}
{"_id": "q_1588", "text": "Check for concurrency violation and add self to\n _clsOutstandingInstances.\n\n ASSUMPTION: Called from constructor BEFORE _clsNumOutstanding is\n incremented"}
{"_id": "q_1589", "text": "Close the policy instance and its database connection pool."}
{"_id": "q_1590", "text": "Get a connection from the pool.\n\n Parameters:\n ----------------------------------------------------------------\n retval: A ConnectionWrapper instance. NOTE: Caller\n is responsible for calling the ConnectionWrapper\n instance's release() method or use it in a context manager\n expression (with ... as:) to release resources."}
{"_id": "q_1591", "text": "Create a Connection instance.\n\n Parameters:\n ----------------------------------------------------------------\n retval: A ConnectionWrapper instance. NOTE: Caller\n is responsible for calling the ConnectionWrapper\n instance's release() method or use it in a context manager\n expression (with ... as:) to release resources."}
{"_id": "q_1592", "text": "Release database connection and cursor; passed as a callback to\n ConnectionWrapper"}
{"_id": "q_1593", "text": "Reclassifies given state."}
{"_id": "q_1594", "text": "Removes the given records from the classifier.\n\n parameters\n ------------\n recordsToDelete - list of records to delete from the classifier"}
{"_id": "q_1595", "text": "Removes any stored records within the range from start to\n end. Noninclusive of end.\n\n parameters\n ------------\n start - integer representing the ROWID of the start of the deletion range,\n end - integer representing the ROWID of the end of the deletion range,\n if None, it will default to end."}
{"_id": "q_1596", "text": "Since the KNN Classifier stores categories as numbers, we must store each\n label as a number. This method converts from a label to a unique number.\n Each label is assigned a unique bit so multiple labels may be assigned to\n a single record."}
{"_id": "q_1597", "text": "Converts a category number into a list of labels"}
{"_id": "q_1598", "text": "Returns a state's anomaly vector, converting it from sparse to dense"}
{"_id": "q_1599", "text": "Get the labels on classified points within range start to end. Not inclusive\n of end.\n\n :returns: (dict) with format:\n\n ::\n\n {\n 'isProcessing': boolean,\n 'recordLabels': list of results\n }\n\n ``isProcessing`` - currently always false as recalculation blocks; used if\n reprocessing of records is still being performed;\n\n Each item in ``recordLabels`` is of format:\n \n ::\n \n {\n 'ROWID': id of the row,\n 'labels': list of strings\n }"}
{"_id": "q_1600", "text": "Remove labels from each record with record ROWID in range from\n ``start`` to ``end``, noninclusive of end. Removes all records if \n ``labelFilter`` is None, otherwise only removes the labels equal to \n ``labelFilter``.\n\n This will recalculate all points from end to the last record stored in the\n internal cache of this classifier.\n \n :param start: (int) start index \n :param end: (int) end index (noninclusive)\n :param labelFilter: (string) label filter"}
{"_id": "q_1601", "text": "Returns True if the record matches any of the provided filters"}
{"_id": "q_1602", "text": "Removes the set of columns who have never been active from the set of\n active columns selected in the inhibition round. Such columns cannot\n represent a learned pattern and are therefore meaningless if only inference\n is required. This should not be done when using a random, unlearned SP\n since you would end up with no active columns.\n\n :param activeArray: An array whose size is equal to the number of columns.\n Any columns marked as active with an activeDutyCycle of 0 have\n never been activated before and therefore are not active due to\n learning. Any of these (unlearned) columns will be disabled (set to 0)."}
{"_id": "q_1603", "text": "Updates the minimum duty cycles defining normal activity for a column. A\n column with activity duty cycle below this minimum threshold is boosted."}
{"_id": "q_1604", "text": "Updates the minimum duty cycles in a global fashion. Sets the minimum duty\n cycles for the overlap of all columns to be a percent of the maximum in the\n region, specified by minPctOverlapDutyCycle. Functionally it is equivalent\n to _updateMinDutyCyclesLocal, but this function exploits the globality of\n the computation to perform it in a straightforward and efficient manner."}
{"_id": "q_1605", "text": "Updates the minimum duty cycles. The minimum duty cycles are determined\n locally. Each column's minimum duty cycles are set to be a percent of the\n maximum duty cycles in the column's neighborhood. Unlike\n _updateMinDutyCyclesGlobal, here the values can be quite different for\n different columns."}
{"_id": "q_1606", "text": "Updates the duty cycles for each column. The OVERLAP duty cycle is a moving\n average of the number of inputs which overlapped with each column. The\n ACTIVITY duty cycle is a moving average of the frequency of activation for\n each column.\n\n Parameters:\n ----------------------------\n :param overlaps:\n An array containing the overlap score for each column.\n The overlap score for a column is defined as the number\n of synapses in a \"connected state\" (connected synapses)\n that are connected to input bits which are turned on.\n :param activeColumns:\n An array containing the indices of the active columns,\n the sparse set of columns which survived inhibition"}
{"_id": "q_1607", "text": "The average number of columns per input, taking into account the topology\n of the inputs and columns. This value is used to calculate the inhibition\n radius. This function supports an arbitrary number of dimensions. If the\n number of column dimensions does not match the number of input dimensions,\n we treat the missing, or phantom dimensions as 'ones'."}
{"_id": "q_1608", "text": "The range of connected synapses for a column. This is used to\n calculate the inhibition radius. This variation of the function only\n supports a 1 dimensional column topology.\n\n Parameters:\n ----------------------------\n :param columnIndex: The index identifying a column in the permanence,\n potential and connectivity matrices"}
{"_id": "q_1609", "text": "The range of connectedSynapses per column, averaged for each dimension.\n This value is used to calculate the inhibition radius. This variation of\n the function only supports a 2 dimensional column topology.\n\n Parameters:\n ----------------------------\n :param columnIndex: The index identifying a column in the permanence,\n potential and connectivity matrices"}
{"_id": "q_1610", "text": "This method ensures that each column has enough connections to input bits\n to allow it to become active. Since a column must have at least\n 'self._stimulusThreshold' overlaps in order to be considered during the\n inhibition phase, columns without such minimal number of connections, even\n if all the input bits they are connected to turn on, have no chance of\n obtaining the minimum threshold. For such columns, the permanence values\n are increased until the minimum number of connections are formed.\n\n\n Parameters:\n ----------------------------\n :param perm: An array of permanence values for a column. The array is\n \"dense\", i.e. it contains an entry for each input bit, even\n if the permanence value is 0.\n :param mask: the indices of the columns whose permanences need to be\n raised."}
{"_id": "q_1611", "text": "Returns a randomly generated permanence value for a synapse that is to be\n initialized in a non-connected state."}
{"_id": "q_1612", "text": "Initializes the permanences of a column. The method\n returns a 1-D array the size of the input, where each entry in the\n array represents the initial permanence value between the input bit\n at the particular index in the array, and the column represented by\n the 'index' parameter.\n\n Parameters:\n ----------------------------\n :param potential: A numpy array specifying the potential pool of the column.\n Permanence values will only be generated for input bits\n corresponding to indices for which the mask value is 1.\n :param connectedPct: A value between 0 or 1 governing the chance, for each\n permanence, that the initial permanence value will\n be a value that is considered connected."}
{"_id": "q_1613", "text": "Update boost factors when global inhibition is used"}
{"_id": "q_1614", "text": "Performs inhibition. This method calculates the necessary values needed to\n actually perform inhibition and then delegates the task of picking the\n active columns to helper functions.\n\n Parameters:\n ----------------------------\n :param overlaps: an array containing the overlap score for each column.\n The overlap score for a column is defined as the number\n of synapses in a \"connected state\" (connected synapses)\n that are connected to input bits which are turned on."}
{"_id": "q_1615", "text": "Gets a neighborhood of columns.\n\n Simply calls topology.neighborhood or topology.wrappingNeighborhood\n\n A subclass can insert different topology behavior by overriding this method.\n\n :param centerColumn (int)\n The center of the neighborhood.\n\n @returns (1D numpy array of integers)\n The columns in the neighborhood."}
{"_id": "q_1616", "text": "Gets a neighborhood of inputs.\n\n Simply calls topology.wrappingNeighborhood or topology.neighborhood.\n\n A subclass can insert different topology behavior by overriding this method.\n\n :param centerInput (int)\n The center of the neighborhood.\n\n @returns (1D numpy array of integers)\n The inputs in the neighborhood."}
{"_id": "q_1617", "text": "Factory function that creates typed Array or ArrayRef objects\n\n dtype - the data type of the array (as a string).\n Supported types are: Byte, Int16, UInt16, Int32, UInt32, Int64, UInt64, Real32, Real64\n\n size - the size of the array. Must be a positive integer."}
{"_id": "q_1618", "text": "Get parameter value"}
{"_id": "q_1619", "text": "Set parameter value"}
{"_id": "q_1620", "text": "Get the collection of regions in a network\n\n This is a tricky one. The collection of regions returned\n from the internal network is a collection of internal regions.\n The desired collection is a collection of net.Region objects\n that also point to this network (net.network) and not to\n the internal network. To achieve that, a CollectionWrapper\n class is used with a custom makeRegion() function (see below)\n as a value wrapper. The CollectionWrapper class wraps each value in the\n original collection with the result of the valueWrapper."}
{"_id": "q_1621", "text": "Write state to proto object.\n\n :param proto: SDRClassifierRegionProto capnproto object"}
{"_id": "q_1622", "text": "Read state from proto object.\n\n :param proto: SDRClassifierRegionProto capnproto object"}
{"_id": "q_1623", "text": "Runs the OPF Model\n\n Parameters:\n -------------------------------------------------------------------------\n retval: (completionReason, completionMsg)\n where completionReason is one of the ClientJobsDAO.CMPL_REASON_XXX\n equates."}
{"_id": "q_1624", "text": "Run final activities after a model has run. These include recording and\n logging the final score"}
{"_id": "q_1625", "text": "Create a checkpoint from the current model, and store it in a dir named\n after checkpoint GUID, and finally store the GUID in the Models DB"}
{"_id": "q_1626", "text": "Delete the stored checkpoint for the specified modelID. This function is\n called if the current model is now the best model, making the old model's\n checkpoint obsolete\n\n Parameters:\n -----------------------------------------------------------------------\n modelID: The modelID for the checkpoint to delete. This is NOT the\n unique checkpointID"}
{"_id": "q_1627", "text": "Writes the results of one iteration of a model. The results are written to\n this ModelRunner's in-memory cache unless this model is the \"best model\" for\n the job. If this model is the \"best model\", the predictions are written out\n to a permanent store via a prediction output stream instance\n\n\n Parameters:\n -----------------------------------------------------------------------\n result: A opf_utils.ModelResult object, which contains the input and\n output for this iteration"}
{"_id": "q_1628", "text": "Deletes the output cache associated with the given modelID. This actually\n clears up the resources associated with the cache, rather than deleting all\n the records in the cache\n\n Parameters:\n -----------------------------------------------------------------------\n modelID: The id of the model whose output cache is being deleted"}
{"_id": "q_1629", "text": "Creates and returns a PeriodicActivityMgr instance initialized with\n our periodic activities\n\n Parameters:\n -------------------------------------------------------------------------\n retval: a PeriodicActivityMgr instance"}
{"_id": "q_1630", "text": "Check if the cancelation flag has been set for this model\n in the Model DB"}
{"_id": "q_1631", "text": "Save the current metric value and see if the model's performance has\n 'leveled off.' We do this by looking at some number of previous number of\n recordings"}
{"_id": "q_1632", "text": "Set our state to that obtained from the engWorkerState field of the\n job record.\n\n\n Parameters:\n ---------------------------------------------------------------------\n stateJSON: JSON encoded state from job record"}
{"_id": "q_1633", "text": "Return the list of all swarms in the given sprint.\n\n Parameters:\n ---------------------------------------------------------------------\n retval: list of active swarm Ids in the given sprint"}
{"_id": "q_1634", "text": "Return the list of all completing swarms.\n\n Parameters:\n ---------------------------------------------------------------------\n retval: list of active swarm Ids"}
{"_id": "q_1635", "text": "Return True if the given sprint has completed."}
{"_id": "q_1636", "text": "Convert the information of the node spec to a plain dict of basic types\n\n The description and singleNodeOnly attributes are placed directly in\n the result dicts. The inputs, outputs, parameters and commands dicts\n contain Spec item objects (InputSpec, OutputSpec, etc). Each such object\n is also converted to a plain dict using the internal items2dict() function\n (see below)."}
{"_id": "q_1637", "text": "Create the encoder instance for our test and return it."}
{"_id": "q_1638", "text": "Validates control dictionary for the experiment context"}
{"_id": "q_1639", "text": "Extract all items from the 'allKeys' list whose key matches one of the regular\n expressions passed in 'reportKeys'.\n\n Parameters:\n ----------------------------------------------------------------------------\n reportKeyREs: List of regular expressions\n allReportKeys: List of all keys\n\n retval: list of keys from allReportKeys that match the regular expressions\n in 'reportKeyREs'\n If an invalid regular expression was included in 'reportKeys',\n then BadKeyError() is raised"}
{"_id": "q_1640", "text": "Get a specific item by name out of the results dict.\n\n The format of itemName is a string of dictionary keys separated by colons,\n each key being one level deeper into the results dict. For example,\n 'key1:key2' would fetch results['key1']['key2'].\n\n If itemName is not found in results, then None is returned"}
{"_id": "q_1641", "text": "Perform standard handling of an exception that occurs while running\n a model.\n\n Parameters:\n -------------------------------------------------------------------------\n jobID: ID for this hypersearch job in the jobs table\n modelID: model ID\n jobsDAO: ClientJobsDAO instance\n experimentDir: directory containing the experiment\n logger: the logger to use\n e: the exception that occurred\n retval: (completionReason, completionMsg)"}
{"_id": "q_1642", "text": "This creates an experiment directory with a base.py description file\n created from 'baseDescription' and a description.py generated from the\n given params dict and then runs the experiment.\n\n Parameters:\n -------------------------------------------------------------------------\n modelID: ID for this model in the models table\n jobID: ID for this hypersearch job in the jobs table\n baseDescription: Contents of a description.py with the base experiment\n description\n params: Dictionary of specific parameters to override within\n the baseDescriptionFile.\n predictedField: Name of the input field for which this model is being\n optimized\n reportKeys: Which metrics of the experiment to store into the\n results dict of the model's database entry\n optimizeKey: Which metric we are optimizing for\n jobsDAO Jobs data access object - the interface to the\n jobs database which has the model's table.\n modelCheckpointGUID: A persistent, globally-unique identifier for\n constructing the model checkpoint key\n logLevel: override logging level to this value, if not None\n\n retval: (completionReason, completionMsg)"}
{"_id": "q_1643", "text": "Recursively copies a dict and returns the result.\n\n Args:\n d: The dict to copy.\n f: A function to apply to values when copying that takes the value and the\n list of keys from the root of the dict to the value and returns a value\n for the new dict.\n discardNoneKeys: If True, discard key-value pairs when f returns None for\n the value.\n deepCopy: If True, all values in returned dict are true copies (not the\n same object).\n Returns:\n A new dict with keys and values from d replaced with the result of f."}
{"_id": "q_1644", "text": "Recursively applies f to the values in dict d.\n\n Args:\n d: The dict to recurse over.\n f: A function to apply to values in d that takes the value and a list of\n keys from the root of the dict to the value."}
{"_id": "q_1645", "text": "Return a clipped version of obj suitable for printing. This\n is useful when generating log messages by printing data structures, but you\n don't want the message to be too long.\n\n If passed in a dict, list, or namedtuple, each element of the structure's\n string representation will be limited to 'maxElementSize' characters. This\n will return a new object where the string representation of each element\n has been truncated to fit within maxElementSize."}
{"_id": "q_1646", "text": "Loads a json value from a file and converts it to the corresponding python\n object.\n\n inputFilePath:\n Path of the json file;\n\n Returns:\n python value that represents the loaded json value"}
{"_id": "q_1647", "text": "Recursively updates the values in original with the values from updates."}
{"_id": "q_1648", "text": "Compares two python dictionaries at the top level and reports differences,\n if any, to stdout\n\n da: first dictionary\n db: second dictionary\n\n Returns: The same value as returned by dictDiff() for the given args"}
{"_id": "q_1649", "text": "Given model params, figure out the correct resolution for the\n RandomDistributed encoder. Modifies params in place."}
{"_id": "q_1650", "text": "Remove labels from each record with record ROWID in range from\n start to end, noninclusive of end. Removes all records if labelFilter is\n None, otherwise only removes the labels equal to labelFilter.\n\n This will recalculate all points from end to the last record stored in the\n internal cache of this classifier."}
{"_id": "q_1651", "text": "This method will remove the given records from the classifier.\n\n parameters\n ------------\n recordsToDelete - list of records to delete from the classifier"}
{"_id": "q_1652", "text": "Construct a _HTMClassificationRecord based on the current state of the\n htm_prediction_model of this classifier.\n\n ***This will look into the internals of the model and may depend on the\n SP, TM, and KNNClassifier***"}
{"_id": "q_1653", "text": "Sets the autoDetectWaitRecords."}
{"_id": "q_1654", "text": "Run one iteration, profiling it if requested.\n\n :param inputs: (dict) mapping region input names to numpy.array values\n :param outputs: (dict) mapping region output names to numpy.arrays that \n should be populated with output values by this method"}
{"_id": "q_1655", "text": "Run one iteration of SPRegion's compute"}
{"_id": "q_1656", "text": "Figure out whether reset, sequenceId,\n both or neither are present in the data.\n Compute once instead of every time.\n\n Taken from filesource.py"}
{"_id": "q_1657", "text": "Get the default arguments from the function and assign as instance vars.\n\n Return a list of 3-tuples with (name, description, defaultValue) for each\n argument to the function.\n\n Assigns all arguments to the function as instance variables of TMRegion.\n If the argument was not provided, uses the default value.\n\n Pops any values from kwargs that go to the function."}
{"_id": "q_1658", "text": "Run one iteration of TMRegion's compute"}
{"_id": "q_1659", "text": "Perform an internal optimization step that speeds up inference if we know\n learning will not be performed anymore. This call may, for example, remove\n all potential inputs to each column."}
{"_id": "q_1660", "text": "Computes the raw anomaly score.\n\n The raw anomaly score is the fraction of active columns not predicted.\n\n :param activeColumns: array of active column indices\n :param prevPredictedColumns: array of columns indices predicted in prev step\n :returns: anomaly score 0..1 (float)"}
{"_id": "q_1661", "text": "Compute the anomaly score as the percent of active columns not predicted.\n\n :param activeColumns: array of active column indices\n :param predictedColumns: array of columns indices predicted in this step\n (used for anomaly in step T+1)\n :param inputValue: (optional) value of current input to encoders\n (eg \"cat\" for category encoder)\n (used in anomaly-likelihood)\n :param timestamp: (optional) date timestamp when the sample occurred\n (used in anomaly-likelihood)\n :returns: the computed anomaly score; float 0..1"}
{"_id": "q_1662", "text": "Adds an image to the plot's figure.\n\n @param data a 2D array. See matplotlib.Axes.imshow documentation.\n @param position A 3-digit number. The first two digits define a 2D grid\n where subplots may be added. The final digit specifies the nth grid\n location for the added subplot\n @param xlabel text to be displayed on the x-axis\n @param ylabel text to be displayed on the y-axis\n @param cmap color map used in the rendering\n @param aspect how aspect ratio is handled during resize\n @param interpolation interpolation method"}
{"_id": "q_1663", "text": "Adds a subplot to the plot's figure at specified position.\n\n @param position A 3-digit number. The first two digits define a 2D grid\n where subplots may be added. The final digit specifies the nth grid\n location for the added subplot\n @param xlabel text to be displayed on the x-axis\n @param ylabel text to be displayed on the y-axis\n @returns (matplotlib.Axes) Axes instance"}
{"_id": "q_1664", "text": "Get version from local file."}
{"_id": "q_1665", "text": "Make an attempt to determine if a pre-release version of nupic.bindings is\n installed already.\n\n @return: boolean"}
{"_id": "q_1666", "text": "Read the requirements.txt file and parse into requirements for setup's\n install_requirements option."}
{"_id": "q_1667", "text": "Generates the string representation of a MetricSpec object, and returns\n the metric key associated with the metric.\n\n\n Parameters:\n -----------------------------------------------------------------------\n inferenceElement:\n An InferenceElement value that indicates which part of the inference this\n metric is computed on\n\n metric:\n The type of the metric being computed (e.g. aae, avg_error)\n\n params:\n A dictionary of parameters for the metric. The keys are the parameter names\n and the values should be the parameter values (e.g. window=200)\n\n field:\n The name of the field for which this metric is being computed\n\n returnLabel:\n If True, returns the label of the MetricSpec that was generated"}
{"_id": "q_1668", "text": "Generates a file by applying token replacements to the given template\n file\n\n templateFileName:\n A list of template file names; these files are assumed to be in\n the same directory as the running experiment_generator.py script.\n ExpGenerator will perform the substitution and concatenate\n the files in the order they are specified\n\n outputFilePath: Absolute path of the output file\n\n replacementDict:\n A dictionary of token/replacement pairs"}
{"_id": "q_1669", "text": "Returns the experiment description schema. This implementation loads it in\n from file experimentDescriptionSchema.json.\n\n Parameters:\n --------------------------------------------------------------------------\n Returns: returns a dict representing the experiment description schema."}
{"_id": "q_1670", "text": "Generates the non-default metrics specified by the expGenerator params"}
{"_id": "q_1671", "text": "Generates the token substitutions related to the predicted field\n and the supplemental arguments for prediction"}
{"_id": "q_1672", "text": "Parses, validates, and executes command-line options;\n\n On success: Performs requested operation and exits program normally\n\n On Error: Dumps exception/error info in JSON format to stdout and exits the\n program with non-zero status."}
{"_id": "q_1673", "text": "Parses a textual datetime format and return a Python datetime object.\n\n The supported format is: ``yyyy-mm-dd h:m:s.ms``\n\n The time component is optional.\n\n - hours are 00..23 (no AM/PM)\n - minutes are 00..59\n - seconds are 00..59\n - micro-seconds are 000000..999999\n\n :param s: (string) input time text\n :return: (datetime.datetime)"}
{"_id": "q_1674", "text": "Translate an index into coordinates, using the given coordinate system.\n\n Similar to ``numpy.unravel_index``.\n\n :param index: (int) The index of the point. The coordinates are expressed as a \n single index by using the dimensions as a mixed radix definition. For \n example, in dimensions 42x10, the point [1, 4] is index \n 1*420 + 4*10 = 460.\n\n :param dimensions: (list of ints) The coordinate system.\n\n :returns: (list) of coordinates of length ``len(dimensions)``."}
{"_id": "q_1675", "text": "Translate coordinates into an index, using the given coordinate system.\n\n Similar to ``numpy.ravel_multi_index``.\n\n :param coordinates: (list of ints) A list of coordinates of length \n ``dimensions.size()``.\n\n :param dimensions: (list of ints) The coordinate system.\n\n :returns: (int) The index of the point. The coordinates are expressed as a \n single index by using the dimensions as a mixed radix definition. \n For example, in dimensions 42x10, the point [1, 4] is index \n 1*420 + 4*10 = 460."}
{"_id": "q_1676", "text": "Get the points in the neighborhood of a point.\n\n A point's neighborhood is the n-dimensional hypercube with sides ranging\n [center - radius, center + radius], inclusive. For example, if there are two\n dimensions and the radius is 3, the neighborhood is 6x6. Neighborhoods are\n truncated when they are near an edge.\n\n This is designed to be fast. In C++ it's fastest to iterate through neighbors\n one by one, calculating them on-demand rather than creating a list of them.\n But in Python it's faster to build up the whole list in batch via a few calls\n to C code rather than calculating them on-demand with lots of calls to Python\n code.\n\n :param centerIndex: (int) The index of the point. The coordinates are \n expressed as a single index by using the dimensions as a mixed radix \n definition. For example, in dimensions 42x10, the point [1, 4] is index \n 1*420 + 4*10 = 460.\n\n :param radius: (int) The radius of this neighborhood about the \n ``centerIndex``.\n\n :param dimensions: (indexable sequence) The dimensions of the world outside \n this neighborhood.\n\n :returns: (numpy array) The points in the neighborhood, including \n ``centerIndex``."}
{"_id": "q_1677", "text": "Returns coordinates around given coordinate, within given radius.\n Includes given coordinate.\n\n @param coordinate (numpy.array) N-dimensional integer coordinate\n @param radius (int) Radius around `coordinate`\n\n @return (numpy.array) List of coordinates"}
{"_id": "q_1678", "text": "Returns the top W coordinates by order.\n\n @param coordinates (numpy.array) A 2D numpy array, where each element\n is a coordinate\n @param w (int) Number of top coordinates to return\n @return (numpy.array) A subset of `coordinates`, containing only the\n top ones by order"}
{"_id": "q_1679", "text": "Hash a coordinate to a 64 bit integer."}
{"_id": "q_1680", "text": "Maps the coordinate to a bit in the SDR.\n\n @param coordinate (numpy.array) Coordinate\n @param n (int) The number of available bits in the SDR\n @return (int) The index to a bit in the SDR"}
{"_id": "q_1681", "text": "Function for running binary search on a sorted list.\n\n :param arr: (list) a sorted list of integers to search\n :param val: (int) a integer to search for in the sorted array\n :returns: (int) the index of the element if it is found and -1 otherwise."}
{"_id": "q_1682", "text": "Adds a new segment on a cell.\n\n :param cell: (int) Cell index\n :returns: (int) New segment index"}
{"_id": "q_1683", "text": "Destroys a segment.\n\n :param segment: (:class:`Segment`) representing the segment to be destroyed."}
{"_id": "q_1684", "text": "Creates a new synapse on a segment.\n\n :param segment: (:class:`Segment`) Segment object for synapse to be synapsed \n to.\n :param presynapticCell: (int) Source cell index.\n :param permanence: (float) Initial permanence of synapse.\n :returns: (:class:`Synapse`) created synapse"}
{"_id": "q_1685", "text": "Destroys a synapse.\n\n :param synapse: (:class:`Synapse`) synapse to destroy"}
{"_id": "q_1686", "text": "Compute each segment's number of active synapses for a given input.\n In the returned lists, a segment's active synapse count is stored at index\n ``segment.flatIdx``.\n\n :param activePresynapticCells: (iter) Active cells.\n :param connectedPermanence: (float) Permanence threshold for a synapse to be \n considered connected\n\n :returns: (tuple) (``numActiveConnectedSynapsesForSegment`` [list],\n ``numActivePotentialSynapsesForSegment`` [list])"}
{"_id": "q_1687", "text": "Returns the number of segments.\n\n :param cell: (int) Optional parameter to get the number of segments on a \n cell.\n :returns: (int) Number of segments on all cells if cell is not specified, or \n on the specified cell"}
{"_id": "q_1688", "text": "Reads deserialized data from proto object\n\n :param proto: (DynamicStructBuilder) Proto object\n\n :returns: (:class:`Connections`) instance"}
{"_id": "q_1689", "text": "Retrieve the requested property as a string. If property does not exist,\n then KeyError will be raised.\n\n :param prop: (string) name of the property\n :raises: KeyError\n :returns: (string) property value"}
{"_id": "q_1690", "text": "Retrieve the requested property and return it as a bool. If property\n does not exist, then KeyError will be raised. If the property value is\n neither 0 nor 1, then ValueError will be raised\n\n :param prop: (string) name of the property\n :raises: KeyError, ValueError\n :returns: (bool) property value"}
{"_id": "q_1691", "text": "Set the value of the given configuration property.\n\n :param prop: (string) name of the property\n :param value: (object) value to set"}
{"_id": "q_1692", "text": "Return a dict containing all of the configuration properties\n\n :returns: (dict) containing all configuration properties."}
{"_id": "q_1693", "text": "Parse the given XML file and store all properties it describes.\n\n :param filename: (string) name of XML file to parse (no path)\n :param path: (string) path of the XML file. If None, then use the standard\n configuration search path."}
{"_id": "q_1694", "text": "Return the list of paths to search for configuration files.\n\n :returns: (list) of paths"}
{"_id": "q_1695", "text": "Generate a list of random sparse distributed vectors. This is used to generate\n training vectors to the spatial or temporal learner and to compare the predicted\n output against.\n\n It generates a list of 'numVectors' elements, each element has length 'length'\n and has a total of 'activity' bits on.\n\n Parameters:\n -----------------------------------------------\n numVectors: the number of vectors to generate\n length: the length of each row\n activity: the number of ones to put into each row."}
{"_id": "q_1696", "text": "Generate a set of simple sequences. The elements of the sequences will be\n integers from 0 to 'nCoinc'-1. The length of each sequence will be\n randomly chosen from the 'seqLength' list.\n\n Parameters:\n -----------------------------------------------\n nCoinc: the number of elements available to use in the sequences\n seqLength: a list of possible sequence lengths. The length of each\n sequence will be randomly chosen from here.\n nSeq: The number of sequences to generate\n\n retval: a list of sequences. Each sequence is itself a list\n containing the coincidence indices for that sequence."}
{"_id": "q_1697", "text": "Generate a set of hub sequences. These are sequences which contain a hub\n element in the middle. The elements of the sequences will be integers\n from 0 to 'nCoinc'-1. The hub elements will only appear in the middle of\n each sequence. The length of each sequence will be randomly chosen from the\n 'seqLength' list.\n\n Parameters:\n -----------------------------------------------\n nCoinc: the number of elements available to use in the sequences\n hubs: which of the elements will be used as hubs.\n seqLength: a list of possible sequence lengths. The length of each\n sequence will be randomly chosen from here.\n nSeq: The number of sequences to generate\n\n retval: a list of sequences. Each sequence is itself a list\n containing the coincidence indices for that sequence."}
{"_id": "q_1698", "text": "Generate a non overlapping coincidence matrix. This is used to generate random\n inputs to the temporal learner and to compare the predicted output against.\n\n It generates a matrix of nCoinc rows, each row has length 'length' and has\n a total of 'activity' bits on.\n\n Parameters:\n -----------------------------------------------\n nCoinc: the number of rows to generate\n length: the length of each row\n activity: the number of ones to put into each row."}
{"_id": "q_1699", "text": "Function that compares two spatial pooler instances. Compares the\n static variables between the two poolers to make sure that they are equivalent.\n\n Parameters\n -----------------------------------------\n SP1 first spatial pooler to be compared\n\n SP2 second spatial pooler to be compared\n\n To establish equality, this function does the following:\n\n 1. Compares the connected synapse matrices for each coincidence\n\n 2. Compares the potential synapse matrices for each coincidence\n\n 3. Compares the permanence matrices for each coincidence\n\n 4. Compares the firing boosts between the two poolers.\n\n 5. Compares the duty cycles before and after inhibition for both poolers"}
{"_id": "q_1700", "text": "Accumulate a list of values 'values' into the frequency counts 'freqCounts',\n and return the updated frequency counts\n\n For example, if values contained the following: [1,1,3,5,1,3,5], and the initial\n freqCounts was None, then the return value would be:\n [0,3,0,2,0,2]\n which corresponds to how many of each value we saw in the input, i.e. there\n were 0 0's, 3 1's, 0 2's, 2 3's, 0 4's, and 2 5's.\n\n If freqCounts is not None, the values will be added to the existing counts and\n the length of the frequency counts will be automatically extended as necessary\n\n Parameters:\n -----------------------------------------------\n values: The values to accumulate into the frequency counts\n freqCounts: Accumulated frequency counts so far, or None"}
{"_id": "q_1701", "text": "Helper function used by averageOnTimePerTimestep. 'durations' is a vector\n which must be the same len as vector. For each \"on\" in vector, it fills in\n the corresponding element of duration with the duration of that \"on\" signal\n up until that time\n\n Parameters:\n -----------------------------------------------\n vector: vector of output values over time\n durations: vector same length as 'vector', initialized to 0's.\n This is filled in with the durations of each \"on\" signal.\n\n Example:\n vector: 11100000001100000000011111100000\n durations: 12300000001200000000012345600000"}
{"_id": "q_1702", "text": "Computes the average on-time of the outputs that are on at each time step, and\n then averages this over all time steps.\n\n This metric is resilient to the number of outputs that are on at each time\n step. That is, if time step 0 has many more outputs on than time step 100, it\n won't skew the results. This is particularly useful when measuring the\n average on-time of things like the temporal memory output where you might\n have many columns bursting at the start of a sequence - you don't want those\n start of sequence bursts to over-influence the calculated average on-time.\n\n Parameters:\n -----------------------------------------------\n vectors: the vectors for which the onTime is calculated. Row 0\n contains the outputs from time step 0, row 1 from time step\n 1, etc.\n numSamples: the number of elements for which on-time is calculated.\n If not specified, then all elements are looked at.\n\n Returns (scalar average on-time over all time steps,\n list containing frequency counts of each encountered on-time)"}
{"_id": "q_1703", "text": "Returns the average on-time, averaged over all on-time runs.\n\n Parameters:\n -----------------------------------------------\n vectors: the vectors for which the onTime is calculated. Row 0\n contains the outputs from time step 0, row 1 from time step\n 1, etc.\n numSamples: the number of elements for which on-time is calculated.\n If not specified, then all elements are looked at.\n\n Returns: (scalar average on-time of all outputs,\n list containing frequency counts of each encountered on-time)"}
{"_id": "q_1704", "text": "This is usually used to display a histogram of the on-times encountered\n in a particular output.\n\n The freqCounts is a vector containing the frequency counts of each on-time\n (starting at an on-time of 0 and going to an on-time = len(freqCounts)-1)\n\n The freqCounts are typically generated from the averageOnTimePerTimestep\n or averageOnTime methods of this module.\n\n Parameters:\n -----------------------------------------------\n freqCounts: The frequency counts to plot\n title: Title of the plot"}
{"_id": "q_1705", "text": "Returns the percent of the outputs that remain completely stable over\n N time steps.\n\n Parameters:\n -----------------------------------------------\n vectors: the vectors for which the stability is calculated\n numSamples: the number of time steps where stability is counted\n\n For each window of numSamples, count how many outputs are active during\n the entire window."}
{"_id": "q_1706", "text": "Compares the actual input with the predicted input and returns results\n\n Parameters:\n -----------------------------------------------\n input: The actual input\n prediction: the predicted input\n verbosity: If > 0, print debugging messages\n sparse: If true, they are in sparse form (list of\n active indices)\n\n retval (foundInInput, totalActiveInInput, missingFromInput,\n totalActiveInPrediction)\n foundInInput: The number of predicted active elements that were\n found in the actual input\n totalActiveInInput: The total number of active elements in the input.\n missingFromInput: The number of predicted active elements that were not\n found in the actual input\n totalActiveInPrediction: The total number of active elements in the prediction"}
{"_id": "q_1707", "text": "Generates centre offsets and spread offsets for block-mode based training\n regimes - star, cross, block.\n\n Parameters:\n -----------------------------------------------\n spaceShape: The (height, width) of the 2-D space to explore. This\n sets the number of center-points.\n spreadShape: The shape (height, width) of the area around each center-point\n to explore.\n stepSize: The step size. How big each step is, in pixels. This controls\n *both* the spacing of the center-points within the block and the\n points we explore around each center-point\n retval: (centreOffsets, spreadOffsets)"}
{"_id": "q_1708", "text": "Make a two-dimensional clone map mapping columns to clone master.\n\n This makes a map that is (numColumnsHigh, numColumnsWide) big that can\n be used to figure out which clone master to use for each column. Here are\n a few sample calls\n\n >>> makeCloneMap(columnsShape=(10, 6), outputCloningWidth=4)\n (array([[ 0, 1, 2, 3, 0, 1],\n [ 4, 5, 6, 7, 4, 5],\n [ 8, 9, 10, 11, 8, 9],\n [12, 13, 14, 15, 12, 13],\n [ 0, 1, 2, 3, 0, 1],\n [ 4, 5, 6, 7, 4, 5],\n [ 8, 9, 10, 11, 8, 9],\n [12, 13, 14, 15, 12, 13],\n [ 0, 1, 2, 3, 0, 1],\n [ 4, 5, 6, 7, 4, 5]], dtype=uint32), 16)\n\n >>> makeCloneMap(columnsShape=(7, 8), outputCloningWidth=3)\n (array([[0, 1, 2, 0, 1, 2, 0, 1],\n [3, 4, 5, 3, 4, 5, 3, 4],\n [6, 7, 8, 6, 7, 8, 6, 7],\n [0, 1, 2, 0, 1, 2, 0, 1],\n [3, 4, 5, 3, 4, 5, 3, 4],\n [6, 7, 8, 6, 7, 8, 6, 7],\n [0, 1, 2, 0, 1, 2, 0, 1]], dtype=uint32), 9)\n\n >>> makeCloneMap(columnsShape=(7, 11), outputCloningWidth=5)\n (array([[ 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0],\n [ 5, 6, 7, 8, 9, 5, 6, 7, 8, 9, 5],\n [10, 11, 12, 13, 14, 10, 11, 12, 13, 14, 10],\n [15, 16, 17, 18, 19, 15, 16, 17, 18, 19, 15],\n [20, 21, 22, 23, 24, 20, 21, 22, 23, 24, 20],\n [ 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0],\n [ 5, 6, 7, 8, 9, 5, 6, 7, 8, 9, 5]], dtype=uint32), 25)\n\n >>> makeCloneMap(columnsShape=(7, 8), outputCloningWidth=3, outputCloningHeight=4)\n (array([[ 0, 1, 2, 0, 1, 2, 0, 1],\n [ 3, 4, 5, 3, 4, 5, 3, 4],\n [ 6, 7, 8, 6, 7, 8, 6, 7],\n [ 9, 10, 11, 9, 10, 11, 9, 10],\n [ 0, 1, 2, 0, 1, 2, 0, 1],\n [ 3, 4, 5, 3, 4, 5, 3, 4],\n [ 6, 7, 8, 6, 7, 8, 6, 7]], dtype=uint32), 12)\n\n The basic idea with this map is that, if you imagine things stretching off\n to infinity, every instance of a given clone master is seeing the exact\n same thing in all directions. That includes:\n - All neighbors must be the same\n - The \"meaning\" of the input to each of the instances of the same clone\n master must be the same. If input is pixels and we have translation\n invariance--this is easy. At higher levels where input is the output\n of lower levels, this can be much harder.\n - The \"meaning\" of the inputs to neighbors of a clone master must be the\n same for each instance of the same clone master.\n\n\n The best way to think of this might be in terms of 'inputCloningWidth' and\n 'outputCloningWidth'.\n - The 'outputCloningWidth' is the number of columns you'd have to move\n horizontally (or vertically) before you get back to the same the same\n clone that you started with. MUST BE INTEGRAL!\n - The 'inputCloningWidth' is the 'outputCloningWidth' of the node below us.\n If we're getting input from an sensor where every element just represents\n a shift of every other element, this is 1.\n At a conceptual level, it means that if two different inputs are shown\n to the node and the only difference between them is that one is shifted\n horizontally (or vertically) by this many pixels, it means we are looking\n at the exact same real world input, but shifted by some number of pixels\n (doesn't have to be 1). MUST BE INTEGRAL!\n\n At level 1, I think you could have this:\n * inputCloningWidth = 1\n * sqrt(coincToInputRatio^2) = 2.5\n * outputCloningWidth = 5\n ...in this case, you'd end up with 25 masters.\n\n\n Let's think about this case:\n input: - - - 0 1 2 3 4 5 - - - - -\n columns: 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4\n\n ...in other words, input 0 is fed to both column 0 and column 1. Input 1\n is fed to columns 2, 3, and 4, etc. Hopefully, you can see that you'll\n get the exact same output (except shifted) with:\n input: - - - - - 0 1 2 3 4 5 - - -\n columns: 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4\n\n ...in other words, we've shifted the input 2 spaces and the output shifted\n 5 spaces.\n\n\n *** The outputCloningWidth MUST ALWAYS be an integral multiple of the ***\n *** inputCloningWidth in order for all of our rules to apply. ***\n *** NOTE: inputCloningWidth isn't passed here, so it's the caller's ***\n *** responsibility to ensure that this is true. ***\n\n *** The outputCloningWidth MUST ALWAYS be an integral multiple of ***\n *** sqrt(coincToInputRatio^2), too. ***\n\n @param columnsShape The shape (height, width) of the columns.\n @param outputCloningWidth See docstring above.\n @param outputCloningHeight If non-negative, can be used to make\n rectangular (instead of square) cloning fields.\n @return cloneMap An array (numColumnsHigh, numColumnsWide) that\n contains the clone index to use for each\n column.\n @return numDistinctClones The number of distinct clones in the map. This\n is just outputCloningWidth*outputCloningHeight."}
{"_id": "q_1709", "text": "Pretty print a numpy matrix using the given format string for each\n value. Return the string representation\n\n Parameters:\n ------------------------------------------------------------\n array: The numpy array to print. This can be either a 1D vector or 2D matrix\n format: The format string to use for each value\n includeIndices: If true, include [row,col] label for each value\n includeZeros: Can only be set to False if includeIndices is on.\n If True, include 0 values in the print-out\n If False, exclude 0 values from the print-out."}
{"_id": "q_1710", "text": "Generates a random sample from the discrete probability distribution\n and returns its value and the log of the probability of sampling that value."}
{"_id": "q_1711", "text": "Link sensor region to other region so that it can pass it data."}
{"_id": "q_1712", "text": "Get prediction results for all prediction steps."}
{"_id": "q_1713", "text": "Loads all the parameters for this dummy model. For any parameters\n specified as lists, read the appropriate value for this model using the model\n index"}
{"_id": "q_1714", "text": "Protected function that can be overridden by subclasses. Its main purpose\n is to allow the OPFDummyModelRunner to override this with deterministic\n values\n\n Returns: All the metrics being computed for this model"}
{"_id": "q_1715", "text": "Returns a description of the dataset"}
{"_id": "q_1716", "text": "Returns the sdr for jth value at column i"}
{"_id": "q_1717", "text": "Returns the nth encoding with the predictedField zeroed out"}
{"_id": "q_1718", "text": "Returns the cumulative n for all the fields in the dataset"}
{"_id": "q_1719", "text": "Returns the cumulative w for all the fields in the dataset"}
{"_id": "q_1720", "text": "Returns the nth encoding"}
{"_id": "q_1721", "text": "Deletes all the values in the dataset"}
{"_id": "q_1722", "text": "Value is encoded as a sdr using the encoding parameters of the Field"}
{"_id": "q_1723", "text": "Set up the dataTypes and initialize encoders"}
{"_id": "q_1724", "text": "Initialize the encoders"}
{"_id": "q_1725", "text": "Loads the experiment description file from the path.\n\n :param path: (string) The path to a directory containing a description.py file\n or the file itself.\n :returns: (config, control)"}
{"_id": "q_1726", "text": "Loads the experiment description python script from the given experiment\n directory.\n\n :param experimentDir: (string) experiment directory path\n\n :returns: module of the loaded experiment description scripts"}
{"_id": "q_1727", "text": "Return the model ID of the model with the best result so far and\n its score on the optimize metric. If swarm is None, then it returns\n the global best, otherwise it returns the best for the given swarm\n for all generations up to and including genIdx.\n\n Parameters:\n ---------------------------------------------------------------------\n swarmId: A string representation of the sorted list of encoders in this\n swarm. For example '__address_encoder.__gym_encoder'\n genIdx: consider the best in all generations up to and including this\n generation if not None.\n retval: (modelID, result)"}
{"_id": "q_1728", "text": "Return a list of particleStates for all particles we know about in\n the given swarm, their model Ids, and metric results.\n\n Parameters:\n ---------------------------------------------------------------------\n swarmId: A string representation of the sorted list of encoders in this\n swarm. For example '__address_encoder.__gym_encoder'\n\n genIdx: If not None, only return particles at this specific generation\n index.\n\n completed: If not None, only return particles of the given state (either\n completed if 'completed' is True, or running if 'completed'\n is false\n\n matured: If not None, only return particles of the given state (either\n matured if 'matured' is True, or not matured if 'matured'\n is false. Note that any model which has completed is also\n considered matured.\n\n lastDescendent: If True, only return particles that are the last descendent,\n that is, the highest generation index for a given particle Id\n\n retval: (particleStates, modelIds, errScores, completed, matured)\n particleStates: list of particleStates\n modelIds: list of modelIds\n errScores: list of errScores, numpy.inf is plugged in\n if we don't have a result yet\n completed: list of completed booleans\n matured: list of matured booleans"}
{"_id": "q_1729", "text": "Return a list of particleStates for all particles in the given\n swarm generation that have been orphaned.\n\n Parameters:\n ---------------------------------------------------------------------\n swarmId: A string representation of the sorted list of encoders in this\n swarm. For example '__address_encoder.__gym_encoder'\n\n genIdx: If not None, only return particles at this specific generation\n index.\n\n retval: (particleStates, modelIds, errScores, completed, matured)\n particleStates: list of particleStates\n modelIds: list of modelIds\n errScores: list of errScores, numpy.inf is plugged in\n if we don't have a result yet\n completed: list of completed booleans\n matured: list of matured booleans"}
{"_id": "q_1730", "text": "Return a dict of the errors obtained on models that were run with\n each value from a PermuteChoice variable.\n\n For example, if a PermuteChoice variable has the following choices:\n ['a', 'b', 'c']\n\n The dict will have 3 elements. The keys are the stringified choiceVars,\n and each value is tuple containing (choiceVar, errors) where choiceVar is\n the original form of the choiceVar (before stringification) and errors is\n the list of errors received from models that used the specific choice:\n retval:\n ['a':('a', [0.1, 0.2, 0.3]), 'b':('b', [0.5, 0.1, 0.6]), 'c':('c', [])]\n\n\n Parameters:\n ---------------------------------------------------------------------\n swarmId: swarm Id of the swarm to retrieve info from\n maxGenIdx: max generation index to consider from other models, ignored\n if None\n varName: which variable to retrieve\n\n retval: list of the errors obtained from each choice."}
{"_id": "q_1731", "text": "Generate stream definition based on"}
{"_id": "q_1732", "text": "Test if it's OK to exit this worker. This is only called when we run\n out of prospective new models to evaluate. This method sees if all models\n have matured yet. If not, it will sleep for a bit and return False. This\n will indicate to the hypersearch worker that we should keep running, and\n check again later. This gives this worker a chance to pick up and adopt any\n model which may become orphaned by another worker before it matures.\n\n If all models have matured, this method will send a STOP message to all\n matured, running models (presummably, there will be just one - the model\n which thinks it's the best) before returning True."}
{"_id": "q_1733", "text": "Record or update the results for a model. This is called by the\n HSW whenever it gets results info for another model, or updated results\n on a model that is still running.\n\n The first time this is called for a given modelID, the modelParams will\n contain the params dict for that model and the modelParamsHash will contain\n the hash of the params. Subsequent updates of the same modelID will\n have params and paramsHash values of None (in order to save overhead).\n\n The Hypersearch object should save these results into it's own working\n memory into some table, which it then uses to determine what kind of\n new models to create next time createModels() is called.\n\n Parameters:\n ----------------------------------------------------------------------\n modelID: ID of this model in models table\n modelParams: params dict for this model, or None if this is just an update\n of a model that it already previously reported on.\n\n See the comments for the createModels() method for a\n description of this dict.\n\n modelParamsHash: hash of the modelParams dict, generated by the worker\n that put it into the model database.\n results: tuple containing (allMetrics, optimizeMetric). Each is a\n dict containing metricName:result pairs. .\n May be none if we have no results yet.\n completed: True if the model has completed evaluation, False if it\n is still running (and these are online results)\n completionReason: One of the ClientJobsDAO.CMPL_REASON_XXX equates\n matured: True if this model has matured. In most cases, once a\n model matures, it will complete as well. The only time a\n model matures and does not complete is if it's currently\n the best model and we choose to keep it running to generate\n predictions.\n numRecords: Number of records that have been processed so far by this\n model."}
{"_id": "q_1734", "text": "Run the given model.\n\n This runs the model described by 'modelParams'. Periodically, it updates\n the results seen on the model to the model database using the databaseAO\n (database Access Object) methods.\n\n Parameters:\n -------------------------------------------------------------------------\n modelID: ID of this model in models table\n\n jobID: ID for this hypersearch job in the jobs table\n\n modelParams: parameters of this specific model\n modelParams is a dictionary containing the name/value\n pairs of each variable we are permuting over. Note that\n variables within an encoder spec have their name\n structure as:\n <encoderName>.<encodrVarName>\n\n modelParamsHash: hash of modelParamValues\n\n jobsDAO jobs data access object - the interface to the jobs\n database where model information is stored\n\n modelCheckpointGUID: A persistent, globally-unique identifier for\n constructing the model checkpoint key"}
{"_id": "q_1735", "text": "Return true if the engine services are running"}
{"_id": "q_1736", "text": "Starts a swarm, given a path to a permutations.py script.\n\n This function is meant to be used with a CLI wrapper that passes command line\n arguments in through the options parameter.\n\n @param permutationsFilePath {string} Path to permutations.py.\n @param options {dict} CLI options.\n @param outputLabel {string} Label for output.\n @param permWorkDir {string} Location of working directory.\n\n @returns {object} Model parameters."}
{"_id": "q_1737", "text": "Back up a file\n\n Parameters:\n ----------------------------------------------------------------------\n retval: Filepath of the back-up"}
{"_id": "q_1738", "text": "Launch worker processes to execute the given command line\n\n Parameters:\n -----------------------------------------------\n cmdLine: The command line for each worker\n numWorkers: number of workers to launch"}
{"_id": "q_1739", "text": "Starts HyperSearch as a worker or runs it inline for the \"dryRun\" action\n\n Parameters:\n ----------------------------------------------------------------------\n retval: the new _HyperSearchJob instance representing the\n HyperSearch job"}
{"_id": "q_1740", "text": "Instantiates a _HyperSearchJob instance from info saved in file\n\n Parameters:\n ----------------------------------------------------------------------\n permWorkDir: Directory path for saved jobID file\n outputLabel: Label string for incorporating into file name for saved jobID\n retval: _HyperSearchJob instance; raises exception if not found"}
{"_id": "q_1741", "text": "Loads a saved jobID from file\n\n Parameters:\n ----------------------------------------------------------------------\n permWorkDir: Directory path for saved jobID file\n outputLabel: Label string for incorporating into file name for saved jobID\n retval: HyperSearch jobID; raises exception if not found."}
{"_id": "q_1742", "text": "Emit model info to csv file\n\n Parameters:\n ----------------------------------------------------------------------\n modelInfo: _NupicModelInfo instance\n retval: nothing"}
{"_id": "q_1743", "text": "Queuries DB for model IDs of all currently instantiated models\n associated with this HyperSearch job.\n\n See also: _iterModels()\n\n Parameters:\n ----------------------------------------------------------------------\n retval: A sequence of Nupic modelIDs"}
{"_id": "q_1744", "text": "Retrives the optimization key name and optimization function.\n\n Parameters:\n ---------------------------------------------------------\n searchJobParams:\n Parameter for passing as the searchParams arg to\n Hypersearch constructor.\n retval: (optimizationMetricKey, maximize)\n optimizationMetricKey: which report key to optimize for\n maximize: True if we should try and maximize the optimizeKey\n metric. False if we should minimize it."}
{"_id": "q_1745", "text": "Retrives a dictionary of metrics that combines all report and\n optimization metrics\n\n Parameters:\n ----------------------------------------------------------------------\n retval: a dictionary of optimization metrics that were collected\n for the model; an empty dictionary if there aren't any."}
{"_id": "q_1746", "text": "Returns the periodic checks to see if the model should\n continue running.\n\n Parameters:\n -----------------------------------------------------------------------\n terminationFunc: The function that will be called in the model main loop\n as a wrapper around this function. Must have a parameter\n called 'index'\n\n Returns: A list of PeriodicActivityRequest objects."}
{"_id": "q_1747", "text": "Iterates through stream to calculate total records after aggregation.\n This will alter the bookmark state."}
{"_id": "q_1748", "text": "Return a pattern for a number.\n\n @param number (int) Number of pattern\n\n @return (set) Indices of on bits"}
{"_id": "q_1749", "text": "Add noise to pattern.\n\n @param bits (set) Indices of on bits\n @param amount (float) Probability of switching an on bit with a random bit\n\n @return (set) Indices of on bits in noisy pattern"}
{"_id": "q_1750", "text": "Return the set of pattern numbers that match a bit.\n\n @param bit (int) Index of bit\n\n @return (set) Indices of numbers"}
{"_id": "q_1751", "text": "Return a map from number to matching on bits,\n for all numbers that match a set of bits.\n\n @param bits (set) Indices of bits\n\n @return (dict) Mapping from number => on bits."}
{"_id": "q_1752", "text": "Pretty print a pattern.\n\n @param bits (set) Indices of on bits\n @param verbosity (int) Verbosity level\n\n @return (string) Pretty-printed text"}
{"_id": "q_1753", "text": "Generates set of random patterns."}
{"_id": "q_1754", "text": "Generates set of consecutive patterns."}
{"_id": "q_1755", "text": "Calculate error signal\n\n :param bucketIdxList: list of encoder buckets\n\n :return: dict containing error. The key is the number of steps\n The value is a numpy array of error at the output layer"}
{"_id": "q_1756", "text": "Sort a potentially big file\n\n filename - the input file (standard File format)\n key - a list of field names to sort by\n outputFile - the name of the output file\n fields - a list of fields that should be included (all fields if None)\n watermark - when available memory goes bellow the watermark create a new chunk\n\n sort() works by reading as records from the file into memory\n and calling _sortChunk() on each chunk. In the process it gets\n rid of unneeded fields if any. Once all the chunks have been sorted and\n written to chunk files it calls _merge() to merge all the chunks into a\n single sorted file.\n\n Note, that sort() gets a key that contains field names, which it converts\n into field indices for _sortChunk() becuase _sortChunk() doesn't need to know\n the field name.\n\n sort() figures out by itself how many chunk files to use by reading records\n from the file until the low watermark value of availabel memory is hit and\n then it sorts the current records, generates a chunk file, clears the sorted\n records and starts on a new chunk.\n\n The key field names are turned into indices"}
{"_id": "q_1757", "text": "Sort in memory chunk of records\n\n records - a list of records read from the original dataset\n key - a list of indices to sort the records by\n chunkIndex - the index of the current chunk\n\n The records contain only the fields requested by the user.\n\n _sortChunk() will write the sorted records to a standard File\n named \"chunk_<chunk index>.csv\" (chunk_0.csv, chunk_1.csv,...)."}
{"_id": "q_1758", "text": "Feeds input record through TM, performing inference and learning.\n Updates member variables with new state.\n\n @param activeColumns (set) Indices of active columns in `t`"}
{"_id": "q_1759", "text": "Print a message to the console.\n\n Prints only if level <= self.consolePrinterVerbosity\n Printing with level 0 is equivalent to using a print statement,\n and should normally be avoided.\n\n :param level: (int) indicating the urgency of the message with\n lower values meaning more urgent (messages at level 0 are the most\n urgent and are always printed)\n\n :param message: (string) possibly with format specifiers\n\n :param args: specifies the values for any format specifiers in message\n\n :param kw: newline is the only keyword argument. True (default) if a newline\n should be printed"}
{"_id": "q_1760", "text": "Returns radius for given speed.\n\n Tries to get the encodings of consecutive readings to be\n adjacent with some overlap.\n\n :param: speed (float) Speed (in meters per second)\n :returns: (int) Radius for given speed"}
{"_id": "q_1761", "text": "Write serialized object to file.\n\n :param f: output file\n :param packed: If true, will pack contents."}
{"_id": "q_1762", "text": "Decorator for functions that require anomaly models."}
{"_id": "q_1763", "text": "Remove labels from the anomaly classifier within this model. Removes all\n records if ``labelFilter==None``, otherwise only removes the labels equal to\n ``labelFilter``.\n\n :param start: (int) index to start removing labels\n :param end: (int) index to end removing labels\n :param labelFilter: (string) If specified, only removes records that match"}
{"_id": "q_1764", "text": "Add labels from the anomaly classifier within this model.\n\n :param start: (int) index to start label\n :param end: (int) index to end label\n :param labelName: (string) name of label"}
{"_id": "q_1765", "text": "Compute Anomaly score, if required"}
{"_id": "q_1766", "text": "Returns reference to the network's Classifier region"}
{"_id": "q_1767", "text": "Attaches an 'AnomalyClassifier' region to the network. Will remove current\n 'AnomalyClassifier' region if it exists.\n\n Parameters\n -----------\n network - network to add the AnomalyClassifier region\n params - parameters to pass to the region\n spEnable - True if network has an SP region\n tmEnable - True if network has a TM region; Currently requires True"}
{"_id": "q_1768", "text": "Tell the writer which metrics should be written\n\n Parameters:\n -----------------------------------------------------------------------\n metricsNames: A list of metric lables to be written"}
{"_id": "q_1769", "text": "Get field metadate information for inferences that are of dict type"}
{"_id": "q_1770", "text": "Creates the inference output directory for the given experiment\n\n experimentDir: experiment directory path that contains description.py\n\n Returns: path of the inference output directory"}
{"_id": "q_1771", "text": "A decorator that maintains the attribute lock state of an object\n\n It coperates with the LockAttributesMetaclass (see bellow) that replaces\n the __setattr__ method with a custom one that checks the _canAddAttributes\n counter and allows setting new attributes only if _canAddAttributes > 0.\n\n New attributes can be set only from methods decorated\n with this decorator (should be only __init__ and __setstate__ normally)\n\n The decorator is reentrant (e.g. if from inside a decorated function another\n decorated function is invoked). Before invoking the target function it\n increments the counter (or sets it to 1). After invoking the target function\n it decrements the counter and if it's 0 it removed the counter."}
{"_id": "q_1772", "text": "Creates a neighboring record for each record in the inputs and adds\n new records at the end of the inputs list"}
{"_id": "q_1773", "text": "Modifies up to maxChanges number of bits in the inputVal"}
{"_id": "q_1774", "text": "Returns a random selection from the inputSpace with randomly modified\n up to maxChanges number of bits."}
{"_id": "q_1775", "text": "Creates and returns a new Network with a sensor region reading data from\n 'dataSource'. There are two hierarchical levels, each with one SP and one TM.\n @param dataSource - A RecordStream containing the input data\n @returns a Network ready to run"}
{"_id": "q_1776", "text": "Runs specified Network writing the ensuing anomaly\n scores to writer.\n\n @param network: The Network instance to be run\n @param writer: A csv.writer used to write to output file."}
{"_id": "q_1777", "text": "Removes trailing whitespace on each line."}
{"_id": "q_1778", "text": "Gets the current metric values\n\n :returns: (dict) where each key is the metric-name, and the values are\n it scalar value. Same as the output of \n :meth:`~nupic.frameworks.opf.prediction_metrics_manager.MetricsManager.update`"}
{"_id": "q_1779", "text": "Gets detailed info about a given metric, in addition to its value. This\n may including any statistics or auxilary data that are computed for a given\n metric.\n\n :param metricLabel: (string) label of the given metric (see \n :class:`~nupic.frameworks.opf.metrics.MetricSpec`)\n\n :returns: (dict) of metric information, as returned by \n :meth:`nupic.frameworks.opf.metrics.MetricsIface.getMetric`."}
{"_id": "q_1780", "text": "Stores the current model results in the manager's internal store\n\n Parameters:\n -----------------------------------------------------------------------\n results: A ModelResults object that contains the current timestep's\n input/inferences"}
{"_id": "q_1781", "text": "Get the actual value for this field\n\n Parameters:\n -----------------------------------------------------------------------\n sensorInputElement: The inference element (part of the inference) that\n is being used for this metric"}
{"_id": "q_1782", "text": "Abbreviate the given text to threshold chars and append an ellipsis if its\n length exceeds threshold; used for logging;\n\n NOTE: the resulting text could be longer than threshold due to the ellipsis"}
{"_id": "q_1783", "text": "Generates the ClientJobs database name for the given version of the\n database\n\n Parameters:\n ----------------------------------------------------------------\n dbVersion: ClientJobs database version number\n\n retval: the ClientJobs database name for the given DB version"}
{"_id": "q_1784", "text": "Locate the current version of the jobs DB or create a new one, and\n optionally delete old versions laying around. If desired, this method\n can be called at any time to re-create the tables from scratch, delete\n old versions of the database, etc.\n\n Parameters:\n ----------------------------------------------------------------\n deleteOldVersions: if true, delete any old versions of the DB left\n on the server\n recreate: if true, recreate the database from scratch even\n if it already exists."}
{"_id": "q_1785", "text": "Return a sequence of matching rows with the requested field values from\n a table or empty sequence if nothing matched.\n\n tableInfo: Table information: a ClientJobsDAO._TableInfoBase instance\n conn: Owned connection acquired from ConnectionFactory.get()\n fieldsToMatch: Dictionary of internal fieldName/value mappings that\n identify the desired rows. If a value is an instance of\n ClientJobsDAO._SEQUENCE_TYPES (list/set/tuple), then the\n operator 'IN' will be used in the corresponding SQL\n predicate; if the value is bool: \"IS TRUE/FALSE\"; if the\n value is None: \"IS NULL\"; '=' will be used for all other\n cases.\n selectFieldNames:\n list of fields to return, using internal field names\n maxRows: maximum number of rows to return; unlimited if maxRows\n is None\n\n retval: A sequence of matching rows, each row consisting of field\n values in the order of the requested field names. Empty\n sequence is returned when not match exists."}
{"_id": "q_1786", "text": "Return a single matching row with the requested field values from the\n the requested table or None if nothing matched.\n\n tableInfo: Table information: a ClientJobsDAO._TableInfoBase instance\n conn: Owned connection acquired from ConnectionFactory.get()\n fieldsToMatch: Dictionary of internal fieldName/value mappings that\n identify the desired rows. If a value is an instance of\n ClientJobsDAO._SEQUENCE_TYPES (list/set/tuple), then the\n operator 'IN' will be used in the corresponding SQL\n predicate; if the value is bool: \"IS TRUE/FALSE\"; if the\n value is None: \"IS NULL\"; '=' will be used for all other\n cases.\n selectFieldNames:\n list of fields to return, using internal field names\n\n retval: A sequence of field values of the matching row in the order\n of the given field names; or None if there was no match."}
{"_id": "q_1787", "text": "Place the given job in STATUS_RUNNING mode; the job is expected to be\n STATUS_NOTSTARTED.\n\n NOTE: this function was factored out of jobStartNext because it's also\n needed for testing (e.g., test_client_jobs_dao.py)"}
{"_id": "q_1788", "text": "Set cancel field of all currently-running jobs to true."}
{"_id": "q_1789", "text": "Look through the jobs table and count the running jobs whose\n cancel field is true.\n\n Parameters:\n ----------------------------------------------------------------\n retval: A count of running jobs with the cancel field set to true."}
{"_id": "q_1790", "text": "Generator to allow iterating slices at dynamic intervals\n\n Parameters:\n ----------------------------------------------------------------\n data: Any data structure that supports slicing (i.e. list or tuple)\n *intervals: Iterable of intervals. The sum of intervals should be less\n than, or equal to the length of data."}
{"_id": "q_1791", "text": "Get all info about a job, with model details, if available.\n\n Parameters:\n ----------------------------------------------------------------\n job: jobID of the job to query\n retval: A sequence of two-tuples if the jobID exists in the jobs\n table (exeption is raised if it doesn't exist). Each two-tuple\n contains an instance of jobInfoNamedTuple as the first element and\n an instance of modelInfoNamedTuple as the second element. NOTE: In\n the case where there are no matching model rows, a sequence of one\n two-tuple will still be returned, but the modelInfoNamedTuple\n fields will be None, and the jobInfoNamedTuple fields will be\n populated."}
{"_id": "q_1792", "text": "Get all info about a job\n\n Parameters:\n ----------------------------------------------------------------\n job: jobID of the job to query\n retval: namedtuple containing the job info."}
{"_id": "q_1793", "text": "Change the status on the given job\n\n Parameters:\n ----------------------------------------------------------------\n job: jobID of the job to change status\n status: new status string (ClientJobsDAO.STATUS_xxxxx)\n\n useConnectionID: True if the connection id of the calling function\n must be the same as the connection that created the job. Set\n to False for hypersearch workers"}
{"_id": "q_1794", "text": "Change the status on the given job to completed\n\n Parameters:\n ----------------------------------------------------------------\n job: jobID of the job to mark as completed\n completionReason: completionReason string\n completionMsg: completionMsg string\n\n useConnectionID: True if the connection id of the calling function\n must be the same as the connection that created the job. Set\n to False for hypersearch workers"}
{"_id": "q_1795", "text": "Cancel the given job. This will update the cancel field in the\n jobs table and will result in the job being cancelled.\n\n Parameters:\n ----------------------------------------------------------------\n jobID: jobID of the job to mark as completed\n\n to False for hypersearch workers"}
{"_id": "q_1796", "text": "Fetch all the modelIDs that correspond to a given jobID; empty sequence\n if none"}
{"_id": "q_1797", "text": "Return the number of jobs for the given clientKey and a status that is\n not completed."}
{"_id": "q_1798", "text": "Delete all models from the models table\n\n Parameters:\n ----------------------------------------------------------------"}
{"_id": "q_1799", "text": "Get ALL info for a set of models\n\n WARNING!!!: The order of the results are NOT necessarily in the same order as\n the order of the model IDs passed in!!!\n\n Parameters:\n ----------------------------------------------------------------\n modelIDs: list of model IDs\n retval: list of nametuples containing all the fields stored for each\n model."}
{"_id": "q_1800", "text": "Gets the specified fields for all the models for a single job. This is\n similar to modelsGetFields\n\n Parameters:\n ----------------------------------------------------------------\n jobID: jobID for the models to be searched\n fields: A list of fields to return\n ignoreKilled: (True/False). If True, this will ignore models that\n have been killed\n\n Returns: a (possibly empty) list of tuples as follows\n [\n (model_id1, [field1, ..., fieldn]),\n (model_id2, [field1, ..., fieldn]),\n (model_id3, [field1, ..., fieldn])\n ...\n ]\n\n NOTE: since there is a window of time between a job getting inserted into\n jobs table and the job's worker(s) starting up and creating models, an\n empty-list result is one of the normal outcomes."}
{"_id": "q_1801", "text": "Get the params and paramsHash for a set of models.\n\n WARNING!!!: The order of the results are NOT necessarily in the same order as\n the order of the model IDs passed in!!!\n\n Parameters:\n ----------------------------------------------------------------\n modelIDs: list of model IDs\n retval: list of result namedtuples defined in\n ClientJobsDAO._models.getParamsNamedTuple. Each tuple\n contains: (modelId, params, engParamsHash)"}
{"_id": "q_1802", "text": "Get the results string and other status fields for a set of models.\n\n WARNING!!!: The order of the results are NOT necessarily in the same order\n as the order of the model IDs passed in!!!\n\n For each model, this returns a tuple containing:\n (modelID, results, status, updateCounter, numRecords, completionReason,\n completionMsg, engParamsHash\n\n Parameters:\n ----------------------------------------------------------------\n modelIDs: list of model IDs\n retval: list of result tuples. Each tuple contains:\n (modelID, results, status, updateCounter, numRecords,\n completionReason, completionMsg, engParamsHash)"}
{"_id": "q_1803", "text": "Disable writing of output tap files."}
{"_id": "q_1804", "text": "Does nothing. Kept here for API compatibility"}
{"_id": "q_1805", "text": "Intercepts TemporalMemory deserialization request in order to initialize\n `TemporalMemoryMonitorMixin` state\n\n @param proto (DynamicStructBuilder) Proto object\n\n @return (TemporalMemory) TemporalMemory shim instance"}
{"_id": "q_1806", "text": "Pick a value according to the provided distribution.\n\n Example:\n\n ::\n\n pickByDistribution([.2, .1])\n\n Returns 0 two thirds of the time and 1 one third of the time.\n\n :param distribution: Probability distribution. Need not be normalized.\n :param r: Instance of random.Random. Uses the system instance if one is\n not provided."}
{"_id": "q_1807", "text": "Returns an array of length size and type dtype that is everywhere 0,\n except in the indices listed in sequence pos.\n\n :param pos: A single integer or sequence of integers that specify\n the position of ones to be set.\n :param size: The total size of the array to be returned.\n :param dtype: The element type (compatible with NumPy array())\n of the array to be returned.\n :returns: An array of length size and element type dtype."}
{"_id": "q_1808", "text": "Add distribution to row row.\n Distribution should be an array of probabilities or counts.\n\n :param row: Integer index of the row to add to.\n May be larger than the current number of rows, in which case\n the histogram grows.\n :param distribution: Array of length equal to the number of columns."}
{"_id": "q_1809", "text": "Run a named function specified by a filesystem path, module name\n and function name.\n\n Returns the value returned by the imported function.\n\n Use this when access is needed to code that has\n not been added to a package accessible from the ordinary Python\n path. Encapsulates the multiple lines usually needed to\n safely manipulate and restore the Python path.\n\n Parameters\n ----------\n path: filesystem path\n Path to the directory where the desired module is stored.\n This will be used to temporarily augment the Python path.\n\n moduleName: basestring\n Name of the module, without trailing extension, where the desired\n function is stored. This module should be in the directory specified\n with path.\n\n funcName: basestring\n Name of the function to import and call.\n\n keywords:\n Keyword arguments to be passed to the imported function."}
{"_id": "q_1810", "text": "Routine for computing a moving average.\n\n @param slidingWindow a list of previous values to use in computation that\n will be modified and returned\n @param total the sum of the values in slidingWindow to be used in the\n calculation of the moving average\n @param newVal a new number compute the new windowed average\n @param windowSize how many values to use in the moving window\n\n @returns an updated windowed average, the modified input slidingWindow list,\n and the new total sum of the sliding window"}
{"_id": "q_1811", "text": "Instance method wrapper around compute."}
{"_id": "q_1812", "text": "Helper function to return a scalar value representing the most\n likely outcome given a probability distribution"}
{"_id": "q_1813", "text": "Helper function to return a scalar value representing the expected\n value of a probability distribution"}
{"_id": "q_1814", "text": "Return the field names for each of the scalar values returned by\n getScalars.\n\n :param parentFieldName: The name of the encoder which is our parent. This\n name is prefixed to each of the field names within this encoder to\n form the keys of the dict() in the retval.\n\n :return: array of field names"}
{"_id": "q_1815", "text": "Gets the value of a given field from the input record"}
{"_id": "q_1816", "text": "Return the offset and length of a given field within the encoded output.\n\n :param fieldName: Name of the field\n :return: tuple(``offset``, ``width``) of the field within the encoded output"}
{"_id": "q_1817", "text": "Takes an encoded output and does its best to work backwards and generate\n the input that would have generated it.\n\n In cases where the encoded output contains more ON bits than an input\n would have generated, this routine will return one or more ranges of inputs\n which, if their encoded outputs were ORed together, would produce the\n target output. This behavior makes this method suitable for doing things\n like generating a description of a learned coincidence in the SP, which\n in many cases might be a union of one or more inputs.\n\n If instead, you want to figure the *most likely* single input scalar value\n that would have generated a specific encoded output, use the\n :meth:`.topDownCompute` method.\n\n If you want to pretty print the return value from this method, use the\n :meth:`.decodedToStr` method.\n\n :param encoded: The encoded output that you want decode\n :param parentFieldName: The name of the encoder which is our parent. This name\n is prefixed to each of the field names within this encoder to form the\n keys of the dict() in the retval.\n\n :return: tuple(``fieldsDict``, ``fieldOrder``)\n\n ``fieldsDict`` is a dict() where the keys represent field names\n (only 1 if this is a simple encoder, > 1 if this is a multi\n or date encoder) and the values are the result of decoding each\n field. If there are no bits in encoded that would have been\n generated by a field, it won't be present in the dict. The\n key of each entry in the dict is formed by joining the passed in\n parentFieldName with the child encoder name using a '.'.\n\n Each 'value' in ``fieldsDict`` consists of (ranges, desc), where\n ranges is a list of one or more (minVal, maxVal) ranges of\n input that would generate bits in the encoded output and 'desc'\n is a pretty print description of the ranges. 
For encoders like\n the category encoder, the 'desc' will contain the category\n names that correspond to the scalar values included in the\n ranges.\n\n ``fieldOrder`` is a list of the keys from ``fieldsDict``, in the\n same order as the fields appear in the encoded output.\n\n TODO: when we switch to Python 2.7 or 3.x, use OrderedDict\n\n Example retvals for a scalar encoder:\n\n .. code-block:: python\n\n {'amount': ( [[1,3], [7,10]], '1-3, 7-10' )}\n {'amount': ( [[2.5,2.5]], '2.5' )}\n\n Example retval for a category encoder:\n\n .. code-block:: python\n\n {'country': ( [[1,1], [5,6]], 'US, GB, ES' )}\n\n Example retval for a multi encoder:\n\n .. code-block:: python\n\n {'amount': ( [[2.5,2.5]], '2.5' ),\n 'country': ( [[1,1], [5,6]], 'US, GB, ES' )}"}
{"_id": "q_1818", "text": "create a random input vector"}
{"_id": "q_1819", "text": "Finds the category that best matches the input pattern. Returns the\n winning category index as well as a distribution over all categories.\n\n :param inputPattern: (list or array) The pattern to be classified. This\n must be a dense representation of the array (e.g. [0, 0, 1, 1, 0, 1]).\n\n :param computeScores: NO EFFECT\n\n :param overCategories: NO EFFECT\n\n :param partitionId: (int) If provided, all training vectors with partitionId\n equal to that of the input pattern are ignored.\n For example, this may be used to perform k-fold cross validation\n without repopulating the classifier. First partition all the data into\n k equal partitions numbered 0, 1, 2, ... and then call learn() for each\n vector passing in its partitionId. Then, during inference, by passing\n in the partition ID in the call to infer(), all other vectors with the\n same partitionId are ignored simulating the effect of repopulating the\n classifier while ommitting the training vectors in the same partition.\n\n :returns: 4-tuple with these keys:\n\n - ``winner``: The category with the greatest number of nearest neighbors\n within the kth nearest neighbors. If the inferenceResult contains no\n neighbors, the value of winner is None. This can happen, for example,\n in cases of exact matching, if there are no stored vectors, or if\n minSparsity is not met.\n - ``inferenceResult``: A list of length numCategories, each entry contains\n the number of neighbors within the top k neighbors that are in that\n category.\n - ``dist``: A list of length numPrototypes. Each entry is the distance\n from the unknown to that prototype. All distances are between 0.0 and\n 1.0.\n - ``categoryDist``: A list of length numCategories. Each entry is the\n distance from the unknown to the nearest prototype of\n that category. All distances are between 0 and 1.0."}
{"_id": "q_1820", "text": "Returns the index of the pattern that is closest to inputPattern,\n the distances of all patterns to inputPattern, and the indices of the k\n closest categories."}
{"_id": "q_1821", "text": "Returns the closest training pattern to inputPattern that belongs to\n category \"cat\".\n\n :param inputPattern: The pattern whose closest neighbor is sought\n\n :param cat: The required category of closest neighbor\n\n :returns: A dense version of the closest training pattern, or None if no\n such patterns exist"}
{"_id": "q_1822", "text": "Gets a training pattern either by index or category number.\n\n :param idx: Index of the training pattern\n\n :param sparseBinaryForm: If true, returns a list of the indices of the\n non-zero bits in the training pattern\n\n :param cat: If not None, get the first pattern belonging to category cat. If\n this is specified, idx must be None.\n\n :returns: The training pattern with specified index"}
{"_id": "q_1823", "text": "Gets the partition id given an index.\n\n :param i: index of partition\n :returns: the partition id associated with pattern i. Returns None if no id\n is associated with it."}
{"_id": "q_1824", "text": "Adds partition id for pattern index"}
{"_id": "q_1825", "text": "Rebuilds the partition Id map using the given partitionIdList"}
{"_id": "q_1826", "text": "Calculate the distances from inputPattern to all stored patterns. All\n distances are between 0.0 and 1.0\n\n :param inputPattern The pattern from which distances to all other patterns\n are calculated\n\n :param distanceNorm Degree of the distance norm"}
{"_id": "q_1827", "text": "Return the distances from inputPattern to all stored patterns.\n\n :param inputPattern The pattern from which distances to all other patterns\n are returned\n\n :param partitionId If provided, ignore all training vectors with this\n partitionId."}
{"_id": "q_1828", "text": "Change the category indices.\n\n Used by the Network Builder to keep the category indices in sync with the\n ImageSensor categoryInfo when the user renames or removes categories.\n\n :param mapping: List of new category indices. For example, mapping=[2,0,1]\n would change all vectors of category 0 to be category 2, category 1 to\n 0, and category 2 to 1"}
{"_id": "q_1829", "text": "Computes the width of dataOut.\n\n Overrides \n :meth:`nupic.bindings.regions.PyRegion.PyRegion.getOutputElementCount`."}
{"_id": "q_1830", "text": "Set the value of a Spec parameter. Most parameters are handled\n automatically by PyRegion's parameter set mechanism. The ones that need\n special treatment are explicitly handled here."}
{"_id": "q_1831", "text": "Saves the record in the underlying csv file.\n\n :param record: a list of Python objects that will be string-ified"}
{"_id": "q_1832", "text": "Saves multiple records in the underlying storage.\n\n :param records: array of records as in\n :meth:`~.FileRecordStream.appendRecord`\n :param progressCB: (function) callback to report progress"}
{"_id": "q_1833", "text": "Gets a bookmark or anchor to the current position.\n\n :returns: an anchor to the current position in the data. Passing this\n anchor to a constructor makes the current position to be the first\n returned record."}
{"_id": "q_1834", "text": "Seeks to ``numRecords`` from the end and returns a bookmark to the new\n position.\n\n :param numRecords: how far to seek from end of file.\n :return: bookmark to desired location."}
{"_id": "q_1835", "text": "Keep track of sequence and make sure time goes forward\n\n Check if the current record is the beginning of a new sequence\n A new sequence starts in 2 cases:\n\n 1. The sequence id changed (if there is a sequence id field)\n 2. The reset field is 1 (if there is a reset field)\n\n Note that if there is no sequenceId field or resetId field then the entire\n dataset is technically one big sequence. The function will not return True\n for the first record in this case. This is Ok because it is important to\n detect new sequences only when there are multiple sequences in the file."}
{"_id": "q_1836", "text": "Returns the number of records that elapse between when an inference is\n made and when the corresponding input record will appear. For example, a\n multistep prediction for 3 timesteps out will have a delay of 3\n\n\n Parameters:\n -----------------------------------------------------------------------\n\n inferenceElement: The InferenceElement value being delayed\n key: If the inference is a dictionary type, this specifies\n key for the sub-inference that is being delayed"}
{"_id": "q_1837", "text": "Returns True if the inference type is 'temporal', i.e. requires a\n temporal memory in the network."}
{"_id": "q_1838", "text": "Makes directory for the given directory path with default permissions.\n If the directory already exists, it is treated as success.\n\n absDirPath: absolute path of the directory to create.\n\n Returns: absDirPath arg\n\n Exceptions: OSError if directory creation fails"}
{"_id": "q_1839", "text": "Parse the given XML file and return a dict describing the file.\n\n Parameters:\n ----------------------------------------------------------------\n filename: name of XML file to parse (no path)\n path: path of the XML file. If None, then use the standard\n configuration search path.\n retval: returns a dict with each property as a key and a dict of all\n the property's attributes as value"}
{"_id": "q_1840", "text": "Set multiple custom properties and persist them to the custom\n configuration store.\n\n Parameters:\n ----------------------------------------------------------------\n properties: a dict of property name/value pairs to set"}
{"_id": "q_1841", "text": "Clear all custom configuration settings and delete the persistent\n custom configuration store."}
{"_id": "q_1842", "text": "If persistent is True, delete the temporary file\n\n Parameters:\n ----------------------------------------------------------------\n persistent: if True, custom configuration file is deleted"}
{"_id": "q_1843", "text": "Returns a dict of all temporary values in custom configuration file"}
{"_id": "q_1844", "text": "Edits the XML configuration file with the parameters specified by\n properties\n\n Parameters:\n ----------------------------------------------------------------\n properties: dict of settings to be applied to the custom configuration store\n (key is property name, value is value)"}
{"_id": "q_1845", "text": "Sets the path of the custom configuration file"}
{"_id": "q_1846", "text": "Get the particle state as a dict. This is enough information to\n instantiate this particle on another worker."}
{"_id": "q_1847", "text": "Init all of our variable positions, velocities, and optionally the best\n result and best position from the given particle.\n\n If newBest is true, we get the best result and position for this new\n generation from the resultsDB, This is used when evoloving a particle\n because the bestResult and position as stored in was the best AT THE TIME\n THAT PARTICLE STARTED TO RUN and does not include the best since that\n particle completed."}
{"_id": "q_1848", "text": "Copy specific variables from particleState into this particle.\n\n Parameters:\n --------------------------------------------------------------\n particleState: dict produced by a particle's getState() method\n varNames: which variables to copy"}
{"_id": "q_1849", "text": "Return the position of a particle given its state dict.\n\n Parameters:\n --------------------------------------------------------------\n retval: dict() of particle position, keys are the variable names,\n values are their positions"}
{"_id": "q_1850", "text": "Agitate this particle so that it is likely to go to a new position.\n Every time agitate is called, the particle is jiggled an even greater\n amount.\n\n Parameters:\n --------------------------------------------------------------\n retval: None"}
{"_id": "q_1851", "text": "Choose a new position based on results obtained so far from all other\n particles.\n\n Parameters:\n --------------------------------------------------------------\n whichVars: If not None, only move these variables\n retval: new position"}
{"_id": "q_1852", "text": "Get the logger for this object.\n\n :returns: (Logger) A Logger object."}
{"_id": "q_1853", "text": "Create a new model instance, given a description dictionary.\n\n :param modelConfig: (dict)\n A dictionary describing the current model,\n `described here <../../quick-start/example-model-params.html>`_.\n\n :param logLevel: (int) The level of logging output that should be generated\n\n :raises Exception: Unsupported model type\n\n :returns: :class:`nupic.frameworks.opf.model.Model`"}
{"_id": "q_1854", "text": "Perform one time step of the Temporal Memory algorithm.\n\n This method calls :meth:`activateCells`, then calls \n :meth:`activateDendrites`. Using :class:`TemporalMemory` via its \n :meth:`compute` method ensures that you'll always be able to call \n :meth:`getPredictiveCells` to get predictions for the next time step.\n\n :param activeColumns: (iter) Indices of active columns.\n\n :param learn: (bool) Whether or not learning is enabled."}
{"_id": "q_1855", "text": "Calculate the active cells, using the current active columns and dendrite\n segments. Grow and reinforce synapses.\n\n :param activeColumns: (iter) A sorted list of active column indices.\n\n :param learn: (bool) If true, reinforce / punish / grow synapses.\n\n **Pseudocode:**\n \n ::\n\n for each column\n if column is active and has active distal dendrite segments\n call activatePredictedColumn\n if column is active and doesn't have active distal dendrite segments\n call burstColumn\n if column is inactive and has matching distal dendrite segments\n call punishPredictedColumn"}
{"_id": "q_1856", "text": "Determines which cells in a predicted column should be added to winner cells\n list, and learns on the segments that correctly predicted this column.\n\n :param column: (int) Index of bursting column.\n\n :param columnActiveSegments: (iter) Active segments in this column.\n\n :param columnMatchingSegments: (iter) Matching segments in this column.\n\n :param prevActiveCells: (list) Active cells in ``t-1``.\n\n :param prevWinnerCells: (list) Winner cells in ``t-1``.\n\n :param learn: (bool) If true, grow and reinforce synapses.\n\n :returns: (list) A list of predicted cells that will be added to \n active cells and winner cells."}
{"_id": "q_1857", "text": "Punishes the Segments that incorrectly predicted a column to be active.\n\n :param column: (int) Index of bursting column.\n\n :param columnActiveSegments: (iter) Active segments for this column, or None \n if there aren't any.\n\n :param columnMatchingSegments: (iter) Matching segments for this column, or \n None if there aren't any.\n\n :param prevActiveCells: (list) Active cells in ``t-1``.\n\n :param prevWinnerCells: (list) Winner cells in ``t-1``."}
{"_id": "q_1858", "text": "Create a segment on the connections, enforcing the maxSegmentsPerCell\n parameter."}
{"_id": "q_1859", "text": "Creates nDesiredNewSynapes synapses on the segment passed in if\n possible, choosing random cells from the previous winner cells that are\n not already on the segment.\n\n :param connections: (Object) Connections instance for the tm\n :param random: (Object) TM object used to generate random\n numbers\n :param segment: (int) Segment to grow synapses on.\n :param nDesiredNewSynapes: (int) Desired number of synapses to grow\n :param prevWinnerCells: (list) Winner cells in `t-1`\n :param initialPermanence: (float) Initial permanence of a new synapse."}
{"_id": "q_1860", "text": "Updates synapses on segment.\n Strengthens active synapses; weakens inactive synapses.\n\n :param connections: (Object) Connections instance for the tm\n :param segment: (int) Segment to adapt\n :param prevActiveCells: (list) Active cells in `t-1`\n :param permanenceIncrement: (float) Amount to increment active synapses\n :param permanenceDecrement: (float) Amount to decrement inactive synapses"}
{"_id": "q_1861", "text": "Returns the index of the column that a cell belongs to.\n\n :param cell: (int) Cell index\n\n :returns: (int) Column index"}
{"_id": "q_1862", "text": "Reads deserialized data from proto object.\n\n :param proto: (DynamicStructBuilder) Proto object\n\n :returns: (:class:TemporalMemory) TemporalMemory instance"}
{"_id": "q_1863", "text": "Generate a sequence from a list of numbers.\n\n Note: Any `None` in the list of numbers is considered a reset.\n\n @param numbers (list) List of numbers\n\n @return (list) Generated sequence"}
{"_id": "q_1864", "text": "Add spatial noise to each pattern in the sequence.\n\n @param sequence (list) Sequence\n @param amount (float) Amount of spatial noise\n\n @return (list) Sequence with spatial noise"}
{"_id": "q_1865", "text": "Pretty print a sequence.\n\n @param sequence (list) Sequence\n @param verbosity (int) Verbosity level\n\n @return (string) Pretty-printed text"}
{"_id": "q_1866", "text": "Returns pretty-printed table of traces.\n\n @param traces (list) Traces to print in table\n @param breakOnResets (BoolsTrace) Trace of resets to break table on\n\n @return (string) Pretty-printed table of traces."}
{"_id": "q_1867", "text": "Compute updated probabilities for anomalyScores using the given params.\n\n :param anomalyScores: a list of records. Each record is a list with the\n following three elements: [timestamp, value, score]\n\n Example::\n\n [datetime.datetime(2013, 8, 10, 23, 0), 6.0, 1.0]\n\n :param params: the JSON dict returned by estimateAnomalyLikelihoods\n :param verbosity: integer controlling extent of printouts for debugging\n :type verbosity: int\n\n :returns: 3-tuple consisting of:\n\n - likelihoods\n\n numpy array of likelihoods, one for each aggregated point\n\n - avgRecordList\n\n list of averaged input records\n\n - params\n\n an updated JSON object containing the state of this metric."}
{"_id": "q_1868", "text": "Return the value of skipRecords for passing to estimateAnomalyLikelihoods\n\n If `windowSize` is very large (bigger than the amount of data) then this\n could just return `learningPeriod`. But when some values have fallen out of\n the historical sliding window of anomaly records, then we have to take those\n into account as well so we return the `learningPeriod` minus the number\n shifted out.\n\n :param numIngested - (int) number of data points that have been added to the\n sliding window of historical data points.\n :param windowSize - (int) size of sliding window of historical data points.\n :param learningPeriod - (int) the number of iterations required for the\n algorithm to learn the basic patterns in the dataset and for the anomaly\n score to 'settle down'."}
{"_id": "q_1869", "text": "capnp serialization method for the anomaly likelihood object\n\n :param proto: (Object) capnp proto object specified in\n nupic.regions.anomaly_likelihood.capnp"}
{"_id": "q_1870", "text": "Replaces the Iteration Cycle phases\n\n :param phaseSpecs: Iteration cycle description consisting of a sequence of\n IterationPhaseSpecXXXXX elements that are performed in the\n given order"}
{"_id": "q_1871", "text": "Processes the given record according to the current iteration cycle phase\n\n :param inputRecord: (object) record expected to be returned from\n :meth:`nupic.data.record_stream.RecordStreamIface.getNextRecord`.\n\n :returns: :class:`nupic.frameworks.opf.opf_utils.ModelResult`"}
{"_id": "q_1872", "text": "Advances the iteration;\n\n Returns: True if more iterations remain; False if this is the final\n iteration."}
{"_id": "q_1873", "text": "Serialize via capnp\n\n :param proto: capnp PreviousValueModelProto message builder"}
{"_id": "q_1874", "text": "Accepts log-values as input, exponentiates them, computes the sum,\n then converts the sum back to log-space and returns the result.\n Handles underflow by rescaling so that the largest values is exactly 1.0."}
{"_id": "q_1875", "text": "Accepts log-values as input, exponentiates them,\n normalizes and returns the result.\n Handles underflow by rescaling so that the largest values is exactly 1.0."}
{"_id": "q_1876", "text": "Log 'msg % args' with severity 'DEBUG'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.debug(\"Houston, we have a %s\", \"thorny problem\", exc_info=1)"}
{"_id": "q_1877", "text": "Log 'msg % args' with severity 'INFO'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.info(\"Houston, we have a %s\", \"interesting problem\", exc_info=1)"}
{"_id": "q_1878", "text": "Log 'msg % args' with severity 'ERROR'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.error(\"Houston, we have a %s\", \"major problem\", exc_info=1)"}
{"_id": "q_1879", "text": "Log 'msg % args' with severity 'CRITICAL'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.critical(\"Houston, we have a %s\", \"major disaster\", exc_info=1)"}
{"_id": "q_1880", "text": "Log 'msg % args' with the integer severity 'level'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)"}
{"_id": "q_1881", "text": "Returns sum of the elements in the list. Missing items are replaced with\n the mean value"}
{"_id": "q_1882", "text": "Returns mean of non-None elements of the list"}
{"_id": "q_1883", "text": "Returns most common value seen in the non-None elements of the list"}
{"_id": "q_1884", "text": "Generate a dataset of aggregated values\n\n Parameters:\n ----------------------------------------------------------------------------\n aggregationInfo: a dictionary that contains the following entries\n - fields: a list of pairs. Each pair is a field name and an\n aggregation function (e.g. sum). The function will be used to aggregate\n multiple values during the aggregation period.\n\n aggregation period: 0 or more of unit=value fields; allowed units are:\n [years months] |\n [weeks days hours minutes seconds milliseconds microseconds]\n NOTE: years and months are mutually-exclusive with the other units.\n See getEndTime() and _aggregate() for more details.\n Example1: years=1, months=6,\n Example2: hours=1, minutes=30,\n If none of the period fields are specified or if all that are specified\n have values of 0, then aggregation will be suppressed, and the given\n inputFile parameter value will be returned.\n\n inputFilename: filename of the input dataset within examples/prediction/data\n\n outputFilename: name for the output file. If not given, a name will be\n generated based on the input filename and the aggregation params\n\n retval: Name of the generated output file. This will be the same as the input\n file name if no aggregation needed to be performed\n\n\n\n If the input file contained a time field, sequence id field or reset field\n that were not specified in aggregationInfo fields, those fields will be\n added automatically with the following rules:\n\n 1. The order will be R, S, T, rest of the fields\n 2. The aggregation function for all will be to pick the first: lambda x: x[0]\n\n Returns: the path of the aggregated data file if aggregation was performed\n (in the same directory as the given input file); if aggregation did not\n need to be performed, then the given inputFile argument value is returned."}
{"_id": "q_1885", "text": "Generate the filename for aggregated dataset\n\n The filename is based on the input filename and the\n aggregation period.\n\n Returns the inputFile if no aggregation required (aggregation\n info has all 0's)"}
{"_id": "q_1886", "text": "Add the aggregation period to the input time t and return a datetime object\n\n Years and months are handled as aspecial case due to leap years\n and months with different number of dates. They can't be converted\n to a strict timedelta because a period of 3 months will have different\n durations actually. The solution is to just add the years and months\n fields directly to the current time.\n\n Other periods are converted to timedelta and just added to current time."}
{"_id": "q_1887", "text": "Given the name of an aggregation function, returns the function pointer\n and param.\n\n Parameters:\n ------------------------------------------------------------------------\n funcName: a string (name of function) or funcPtr\n retval: (funcPtr, param)"}
{"_id": "q_1888", "text": "Generate the aggregated output record\n\n Parameters:\n ------------------------------------------------------------------------\n retval: outputRecord"}
{"_id": "q_1889", "text": "Run one iteration of this model.\n\n :param inputRecord: (object)\n A record object formatted according to\n :meth:`~nupic.data.record_stream.RecordStreamIface.getNextRecord` or\n :meth:`~nupic.data.record_stream.RecordStreamIface.getNextRecordDict`\n result format.\n :returns: (:class:`~nupic.frameworks.opf.opf_utils.ModelResult`)\n An ModelResult namedtuple. The contents of ModelResult.inferences\n depends on the the specific inference type of this model, which\n can be queried by :meth:`.getInferenceType`."}
{"_id": "q_1890", "text": "Return the absolute path of the model's checkpoint file.\n\n :param checkpointDir: (string)\n Directory of where the experiment is to be or was saved\n :returns: (string) An absolute path."}
{"_id": "q_1891", "text": "Serializes model using capnproto and writes data to ``checkpointDir``"}
{"_id": "q_1892", "text": "Deserializes model from checkpointDir using capnproto"}
{"_id": "q_1893", "text": "Save the state maintained by the Model base class\n\n :param proto: capnp ModelProto message builder"}
{"_id": "q_1894", "text": "Return the absolute path of the model's pickle file.\n\n :param saveModelDir: (string)\n Directory of where the experiment is to be or was saved\n :returns: (string) An absolute path."}
{"_id": "q_1895", "text": "Used as optparse callback for reaping a variable number of option args.\n The option may be specified multiple times, and all the args associated with\n that option name will be accumulated in the order that they are encountered"}
{"_id": "q_1896", "text": "Report usage error and exit program with error indication."}
{"_id": "q_1897", "text": "Creates and runs the experiment\n\n Args:\n options: namedtuple ParseCommandLineOptionsResult\n model: For testing: may pass in an existing OPF Model instance\n to use instead of creating a new one.\n\n Returns: reference to OPFExperiment instance that was constructed (this\n is provided to aid with debugging) or None, if none was\n created."}
{"_id": "q_1898", "text": "Creates directory for serialization of the model\n\n checkpointLabel:\n Checkpoint label (string)\n\n Returns:\n absolute path to the serialization directory"}
{"_id": "q_1899", "text": "Returns a checkpoint label string for the given model checkpoint directory\n\n checkpointDir: relative or absolute model checkpoint directory path"}
{"_id": "q_1900", "text": "Return true iff checkpointDir appears to be a checkpoint directory."}
{"_id": "q_1901", "text": "List available checkpoints for the specified experiment."}
{"_id": "q_1902", "text": "Creates and returns a list of activites for this TaskRunner instance\n\n Returns: a list of PeriodicActivityRequest elements"}
{"_id": "q_1903", "text": "Shows predictions of the TM when presented with the characters A, B, C, D, X, and\n Y without any contextual information, that is, not embedded within a sequence."}
{"_id": "q_1904", "text": "Utility function to get information about function callers\n\n The information is the tuple (function/method name, filename, class)\n The class will be None if the caller is just a function and not an object\n method.\n\n :param depth: (int) how far back in the callstack to go to extract the caller\n info"}
{"_id": "q_1905", "text": "Get the arguments, default values, and argument descriptions for a function.\n\n Parses the argument descriptions out of the function docstring, using a\n format something lke this:\n\n ::\n\n [junk]\n argument_name: description...\n description...\n description...\n [junk]\n [more arguments]\n\n It will find an argument as long as the exact argument name starts the line.\n It will then strip a trailing colon, if present, then strip the rest of the\n line and use it to start the description. It will then strip and append any\n subsequent lines with a greater indent level than the original argument name.\n\n :param f: (function) to inspect\n :returns: (list of tuples) (``argName``, ``argDescription``, ``defaultValue``)\n If an argument has no default value, the tuple is only two elements long (as\n ``None`` cannot be used, since it could be a default value itself)."}
{"_id": "q_1906", "text": "Generate a filepath for the calling app"}
{"_id": "q_1907", "text": "Return the number of months and seconds from an aggregation dict that\n represents a date and time.\n\n Interval is a dict that contain one or more of the following keys: 'years',\n 'months', 'weeks', 'days', 'hours', 'minutes', seconds', 'milliseconds',\n 'microseconds'.\n\n For example:\n\n ::\n\n aggregationMicroseconds({'years': 1, 'hours': 4, 'microseconds':42}) ==\n {'months':12, 'seconds':14400.000042}\n\n :param interval: (dict) The aggregation interval representing a date and time\n :returns: (dict) number of months and seconds in the interval:\n ``{months': XX, 'seconds': XX}``. The seconds is\n a floating point that can represent resolutions down to a\n microsecond."}
{"_id": "q_1908", "text": "Return the result from dividing two dicts that represent date and time.\n\n Both dividend and divisor are dicts that contain one or more of the following\n keys: 'years', 'months', 'weeks', 'days', 'hours', 'minutes', seconds',\n 'milliseconds', 'microseconds'.\n\n For example:\n\n ::\n\n aggregationDivide({'hours': 4}, {'minutes': 15}) == 16\n\n :param dividend: (dict) The numerator, as a dict representing a date and time\n :param divisor: (dict) the denominator, as a dict representing a date and time\n :returns: (float) number of times divisor goes into dividend"}
{"_id": "q_1909", "text": "Helper function to create a logger object for the current object with\n the standard Numenta prefix.\n\n :param obj: (object) to add a logger to"}
{"_id": "q_1910", "text": "Returns a subset of the keys that match any of the given patterns\n\n :param patterns: (list) regular expressions to match\n :param keys: (list) keys to search for matches"}
{"_id": "q_1911", "text": "Convert the input, which is in normal space, into log space"}
{"_id": "q_1912", "text": "Exports a network as a networkx MultiDiGraph intermediate representation\n suitable for visualization.\n\n :return: networkx MultiDiGraph"}
{"_id": "q_1913", "text": "Computes the percentage of overlap between vectors x1 and x2.\n\n @param x1 (array) binary vector\n @param x2 (array) binary vector\n @param size (int) length of binary vectors\n\n @return percentOverlap (float) percentage overlap between x1 and x2"}
{"_id": "q_1914", "text": "Poll CPU usage, make predictions, and plot the results. Runs forever."}
{"_id": "q_1915", "text": "List of our member variables that we don't need to be saved"}
{"_id": "q_1916", "text": "If state is allocated in CPP, copy over the data into our numpy arrays."}
{"_id": "q_1917", "text": "If we are having CPP use numpy-allocated buffers, set these buffer\n pointers. This is a relatively fast operation and, for safety, should be\n done before every call to the cells4 compute methods. This protects us\n in situations where code can cause Python or numpy to create copies."}
{"_id": "q_1918", "text": "A segment is active if it has >= activationThreshold connected\n synapses that are active due to infActiveState."}
{"_id": "q_1919", "text": "Given a bucket index, return the list of non-zero bits. If the bucket\n index does not exist, it is created. If the index falls outside our range\n we clip it.\n\n :param index The bucket index to get non-zero bits for.\n @returns numpy array of indices of non-zero bits for specified index."}
{"_id": "q_1920", "text": "Create the given bucket index. Recursively create as many in-between\n bucket indices as necessary."}
{"_id": "q_1921", "text": "Return a new representation for newIndex that overlaps with the\n representation at index by exactly w-1 bits"}
{"_id": "q_1922", "text": "Return the overlap between bucket indices i and j"}
{"_id": "q_1923", "text": "Return the overlap between two representations. rep1 and rep2 are lists of\n non-zero indices."}
{"_id": "q_1924", "text": "Return True if the given overlap between bucket indices i and j are\n acceptable. If overlap is not specified, calculate it from the bucketMap"}
{"_id": "q_1925", "text": "Create a SDR classifier factory.\n The implementation of the SDR Classifier can be specified with\n the \"implementation\" keyword argument.\n\n The SDRClassifierFactory uses the implementation as specified in\n `Default NuPIC Configuration <default-config.html>`_."}
{"_id": "q_1926", "text": "Convenience method to compute a metric over an indices trace, excluding\n resets.\n\n @param (IndicesTrace) Trace of indices\n\n @return (Metric) Metric over trace excluding resets"}
{"_id": "q_1927", "text": "Metric for number of predicted => active cells per column for each sequence\n\n @return (Metric) metric"}
{"_id": "q_1928", "text": "Metric for number of sequences each predicted => active cell appears in\n\n Note: This metric is flawed when it comes to high-order sequences.\n\n @return (Metric) metric"}
{"_id": "q_1929", "text": "Pretty print the connections in the temporal memory.\n\n TODO: Use PrettyTable.\n\n @return (string) Pretty-printed text"}
{"_id": "q_1930", "text": "Generates a Network with connected RecordSensor, SP, TM.\n\n This function takes care of generating regions and the canonical links.\n The network has a sensor region reading data from a specified input and\n passing the encoded representation to an SPRegion.\n The SPRegion output is passed to a TMRegion.\n\n Note: this function returns a network that needs to be initialized. This\n allows the user to extend the network by adding further regions and\n connections.\n\n :param recordParams: a dict with parameters for creating RecordSensor region.\n :param spatialParams: a dict with parameters for creating SPRegion.\n :param temporalParams: a dict with parameters for creating TMRegion.\n :param verbosity: an integer representing how chatty the network will be."}
{"_id": "q_1931", "text": "Multiplies a value over a range of rows.\n\n Args:\n reader: A FileRecordStream object with input data.\n writer: A FileRecordStream object to write output data to.\n column: The column of data to modify.\n start: The first row in the range to modify.\n end: The last row in the range to modify.\n multiple: The value to scale/multiply by."}
{"_id": "q_1932", "text": "Copies a range of values to a new location in the data set.\n\n Args:\n reader: A FileRecordStream object with input data.\n writer: A FileRecordStream object to write output data to.\n start: The first row in the range to copy.\n stop: The last row in the range to copy.\n insertLocation: The location to insert the copied range. If not specified,\n the range is inserted immediately following itself."}
{"_id": "q_1933", "text": "generate description from a text description of the ranges"}
{"_id": "q_1934", "text": "Reset the state of all cells.\n\n This is normally used between sequences while training. All internal states\n are reset to 0."}
{"_id": "q_1935", "text": "Called at the end of learning and inference, this routine will update\n a number of stats in our _internalStats dictionary, including our computed\n prediction score.\n\n :param stats internal stats dictionary\n :param bottomUpNZ list of the active bottom-up inputs\n :param predictedState The columns we predicted on the last time step (should\n match the current bottomUpNZ in the best case)\n :param colConfidence Column confidences we determined on the last time step"}
{"_id": "q_1936", "text": "Print an integer array that is the same shape as activeState.\n\n :param aState: TODO: document"}
{"_id": "q_1937", "text": "Print a floating point array that is the same shape as activeState.\n\n :param aState: TODO: document\n :param maxCols: TODO: document"}
{"_id": "q_1938", "text": "Print up to maxCols number from a flat floating point array.\n\n :param aState: TODO: document\n :param maxCols: TODO: document"}
{"_id": "q_1939", "text": "Print the parameter settings for the TM."}
{"_id": "q_1940", "text": "Called at the end of inference to print out various diagnostic\n information based on the current verbosity level.\n\n :param output: TODO: document\n :param learn: TODO: document"}
{"_id": "q_1941", "text": "Update our moving average of learned sequence length."}
{"_id": "q_1942", "text": "A utility method called from learnBacktrack. This will backtrack\n starting from the given startOffset in our prevLrnPatterns queue.\n\n It returns True if the backtrack was successful and we managed to get\n predictions all the way up to the current time step.\n\n If readOnly, then no segments are updated or modified, otherwise, all\n segment updates that belong to the given path are applied.\n \n This updates/modifies:\n\n - lrnActiveState['t']\n\n This trashes:\n\n - lrnPredictedState['t']\n - lrnPredictedState['t-1']\n - lrnActiveState['t-1']\n\n :param startOffset: Start offset within the prevLrnPatterns input history\n :param readOnly: \n :return: True if we managed to lock on to a sequence that started\n earlier.\n If False, we lost predictions somewhere along the way\n leading up to the current time."}
{"_id": "q_1943", "text": "This \"backtracks\" our learning state, trying to see if we can lock onto\n the current set of inputs by assuming the sequence started up to N steps\n ago on start cells.\n\n This will adjust @ref lrnActiveState['t'] if it does manage to lock on to a\n sequence that started earlier.\n\n :returns: >0 if we managed to lock on to a sequence that started\n earlier. The value returned is how many steps in the\n past we locked on.\n If 0 is returned, the caller needs to change active\n state to start on start cells.\n\n How it works:\n -------------------------------------------------------------------\n This method gets called from updateLearningState when we detect either of\n the following two conditions:\n\n #. Our PAM counter (@ref pamCounter) expired\n #. We reached the max allowed learned sequence length\n\n Either of these two conditions indicate that we want to start over on start\n cells.\n\n Rather than start over on start cells on the current input, we can\n accelerate learning by backtracking a few steps ago and seeing if perhaps\n a sequence we already at least partially know already started.\n\n This updates/modifies:\n - @ref lrnActiveState['t']\n\n This trashes:\n - @ref lrnActiveState['t-1']\n - @ref lrnPredictedState['t']\n - @ref lrnPredictedState['t-1']"}
{"_id": "q_1944", "text": "Compute the learning active state given the predicted state and\n the bottom-up input.\n\n :param activeColumns list of active bottom-ups\n :param readOnly True if being called from backtracking logic.\n This tells us not to increment any segment\n duty cycles or queue up any updates.\n :returns: True if the current input was sufficiently predicted, OR\n if we started over on startCells. False indicates that the current\n input was NOT predicted, well enough to consider it as \"inSequence\"\n\n This looks at:\n - @ref lrnActiveState['t-1']\n - @ref lrnPredictedState['t-1']\n\n This modifies:\n - @ref lrnActiveState['t']\n - @ref lrnActiveState['t-1']"}
{"_id": "q_1945", "text": "Compute the predicted segments given the current set of active cells.\n\n :param readOnly True if being called from backtracking logic.\n This tells us not to increment any segment\n duty cycles or queue up any updates.\n\n This computes the lrnPredictedState['t'] and queues up any segments that\n became active (and the list of active synapses for each segment) into\n the segmentUpdates queue\n\n This looks at:\n - @ref lrnActiveState['t']\n\n This modifies:\n - @ref lrnPredictedState['t']\n - @ref segmentUpdates"}
{"_id": "q_1946", "text": "Handle one compute, possibly learning.\n\n .. note:: It is an error to have both ``enableLearn`` and \n ``enableInference`` set to False\n\n .. note:: By default, we don't compute the inference output when learning \n because it slows things down, but you can override this by passing \n in True for ``enableInference``.\n\n :param bottomUpInput: The bottom-up input as numpy list, typically from a \n spatial pooler.\n :param enableLearn: (bool) If true, perform learning\n :param enableInference: (bool) If None, default behavior is to disable the \n inference output when ``enableLearn`` is on. If true, compute the \n inference output. If false, do not compute the inference output.\n\n :returns: TODO: document"}
{"_id": "q_1947", "text": "This method goes through a list of segments for a given cell and\n deletes all synapses whose permanence is less than minPermanence and deletes\n any segments that have less than minNumSyns synapses remaining.\n\n :param colIdx Column index\n :param cellIdx Cell index within the column\n :param segList List of segment references\n :param minPermanence Any syn whose permanence is 0 or < minPermanence will\n be deleted.\n :param minNumSyns Any segment with less than minNumSyns synapses remaining\n in it will be deleted.\n\n :returns: tuple (numSegsRemoved, numSynsRemoved)"}
{"_id": "q_1948", "text": "This method deletes all synapses whose permanence is less than\n minPermanence and deletes any segments that have less than\n minNumSyns synapses remaining.\n\n :param minPermanence: (float) Any syn whose permanence is 0 or < \n ``minPermanence`` will be deleted. If None is passed in, then \n ``self.connectedPerm`` is used.\n :param minNumSyns: (int) Any segment with less than ``minNumSyns`` synapses \n remaining in it will be deleted. If None is passed in, then \n ``self.activationThreshold`` is used.\n :returns: (tuple) ``numSegsRemoved``, ``numSynsRemoved``"}
{"_id": "q_1949", "text": "Removes any update that would be for the given col, cellIdx, segIdx.\n\n NOTE: logically, we need to do this when we delete segments, so that if\n an update refers to a segment that was just deleted, we also remove\n that update from the update list. However, I haven't seen it trigger\n in any of the unit tests yet, so it might mean that it's not needed\n and that situation doesn't occur, by construction."}
{"_id": "q_1950", "text": "Find weakly activated cell in column with at least minThreshold active\n synapses.\n\n :param c which column to look at\n :param activeState the active cells\n :param minThreshold minimum number of synapses required\n\n :returns: tuple (cellIdx, segment, numActiveSynapses)"}
{"_id": "q_1951", "text": "For the given cell, find the segment with the largest number of active\n synapses. This routine is aggressive in finding the best match. The\n permanence value of synapses is allowed to be below connectedPerm. The number\n of active synapses is allowed to be below activationThreshold, but must be\n above minThreshold. The routine returns the segment index. If no segments are\n found, then an index of -1 is returned.\n\n :param c TODO: document\n :param i TODO: document\n :param activeState TODO: document"}
{"_id": "q_1952", "text": "This function applies segment update information to a segment in a\n cell.\n\n Synapses on the active list get their permanence counts incremented by\n permanenceInc. All other synapses get their permanence counts decremented\n by permanenceDec.\n\n We also increment the positiveActivations count of the segment.\n\n :param segUpdate SegmentUpdate instance\n :returns: True if some synapses were decremented to 0 and the segment is a\n candidate for trimming"}
{"_id": "q_1953", "text": "Create training sequences that share some elements in the middle.\n\n Parameters:\n -----------------------------------------------------\n numSequences: Number of unique training sequences to generate\n seqLen: Overall length of each sequence\n sharedElements: Which element indices of each sequence are shared. These\n will be in the range between 0 and seqLen-1\n numOnBitsPerPattern: Number of ON bits in each TM input pattern\n patternOverlap: Max number of bits of overlap between any 2 patterns\n retval: (numCols, trainingSequences)\n numCols - width of the patterns\n trainingSequences - a list of training sequences"}
{"_id": "q_1954", "text": "Create a bunch of sequences of various lengths, all built from\n a fixed set of patterns.\n\n Parameters:\n -----------------------------------------------------\n numSequences: Number of training sequences to generate\n seqLen: List of possible sequence lengths\n numPatterns: How many possible patterns there are to use within\n sequences\n numOnBitsPerPattern: Number of ON bits in each TM input pattern\n patternOverlap: Max number of bits of overlap between any 2 patterns\n retval: (numCols, trainingSequences)\n numCols - width of the patterns\n trainingSequences - a list of training sequences"}
{"_id": "q_1955", "text": "Create one or more TM instances, placing each into a dict keyed by\n name.\n\n Parameters:\n ------------------------------------------------------------------\n retval: tms - dict of TM instances"}
{"_id": "q_1956", "text": "Check for diffs among the TM instances in the passed in tms dict and\n raise an assert if any are detected\n\n Parameters:\n ---------------------------------------------------------------------\n tms: dict of TM instances"}
{"_id": "q_1957", "text": "Compress a byte string.\n\n Args:\n string (bytes): The input data.\n mode (int, optional): The compression mode can be MODE_GENERIC (default),\n MODE_TEXT (for UTF-8 format text input) or MODE_FONT (for WOFF 2.0).\n quality (int, optional): Controls the compression-speed vs compression-\n density tradeoff. The higher the quality, the slower the compression.\n Range is 0 to 11. Defaults to 11.\n lgwin (int, optional): Base 2 logarithm of the sliding window size. Range\n is 10 to 24. Defaults to 22.\n lgblock (int, optional): Base 2 logarithm of the maximum input block size.\n Range is 16 to 24. If set to 0, the value will be set based on the\n quality. Defaults to 0.\n\n Returns:\n The compressed byte string.\n\n Raises:\n brotli.error: If arguments are invalid, or compressor fails."}
{"_id": "q_1958", "text": "Show string or char."}
{"_id": "q_1959", "text": "Read n bytes from the stream on a byte boundary."}
{"_id": "q_1960", "text": "Store decodeTable,\n and compute lengthTable, minLength, maxLength from encodings."}
{"_id": "q_1961", "text": "Show all words of the code in a nice format."}
{"_id": "q_1962", "text": "Read symbol from stream. Returns symbol, length."}
{"_id": "q_1963", "text": "Override if you don't define value0 and extraTable"}
{"_id": "q_1964", "text": "Give the range of possible values in a tuple\n Useful for mnemonic and explanation"}
{"_id": "q_1965", "text": "Give count and value."}
{"_id": "q_1966", "text": "Make a nice mnemonic"}
{"_id": "q_1967", "text": "Give mnemonic representation of meaning.\n verbose compresses strings of x's"}
{"_id": "q_1968", "text": "Perform the proper action"}
{"_id": "q_1969", "text": "Read MNIBBLES and meta block length;\n if empty block, skip block and return true."}
{"_id": "q_1970", "text": "In place inverse move to front transform."}
{"_id": "q_1971", "text": "Implementation of Dataset.to_arrow_table"}
{"_id": "q_1972", "text": "Adds method f to the Dataset class"}
{"_id": "q_1973", "text": "Convert proper motion to perpendicular velocities.\n\n :param distance:\n :param pm_long:\n :param pm_lat:\n :param vl:\n :param vb:\n :param cov_matrix_distance_pm_long_pm_lat:\n :param uncertainty_postfix:\n :param covariance_postfix:\n :param radians:\n :return:"}
{"_id": "q_1974", "text": "Return a graphviz.Digraph object with a graph of the expression"}
{"_id": "q_1975", "text": "Map values of an expression or in memory column according to an input\n dictionary or a custom callable function.\n\n Example:\n\n >>> import vaex\n >>> df = vaex.from_arrays(color=['red', 'red', 'blue', 'red', 'green'])\n >>> mapper = {'red': 1, 'blue': 2, 'green': 3}\n >>> df['color_mapped'] = df.color.map(mapper)\n >>> df\n # color color_mapped\n 0 red 1\n 1 red 1\n 2 blue 2\n 3 red 1\n 4 green 3\n >>> import numpy as np\n >>> df = vaex.from_arrays(type=[0, 1, 2, 2, 2, np.nan])\n >>> df['role'] = df['type'].map({0: 'admin', 1: 'maintainer', 2: 'user', np.nan: 'unknown'})\n >>> df\n # type role\n 0 0 admin\n 1 1 maintainer\n 2 2 user\n 3 2 user\n 4 2 user\n 5 nan unknown \n\n :param mapper: dict like object used to map the values from keys to values\n :param nan_mapping: value to be used when a nan is present (and not in the mapper)\n :param null_mapping: value to be used when there is a missing value\n :return: A vaex expression\n :rtype: vaex.expression.Expression"}
{"_id": "q_1976", "text": "Create a vaex app, the QApplication mainloop must be started.\n\n In ipython notebook/jupyter do the following:\n\n >>> import vaex.ui.main # this causes the qt api level to be set properly\n >>> import vaex\n\n Next cell:\n\n >>> %gui qt\n\n Next cell:\n\n >>> app = vaex.app()\n\n From now on, you can run the app along with jupyter"}
{"_id": "q_1977", "text": "Open a list of filenames, and return a DataFrame with all DataFrames concatenated.\n\n :param list[str] filenames: list of filenames/paths\n :rtype: DataFrame"}
{"_id": "q_1978", "text": "Create a vaex DataFrame from an Astropy Table."}
{"_id": "q_1979", "text": "Create an in memory DataFrame from numpy arrays.\n\n Example\n\n >>> import vaex, numpy as np\n >>> x = np.arange(5)\n >>> y = x ** 2\n >>> vaex.from_arrays(x=x, y=y)\n # x y\n 0 0 0\n 1 1 1\n 2 2 4\n 3 3 9\n 4 4 16\n >>> some_dict = {'x': x, 'y': y}\n >>> vaex.from_arrays(**some_dict) # in case you have your columns in a dict\n # x y\n 0 0 0\n 1 1 1\n 2 2 4\n 3 3 9\n 4 4 16\n\n :param arrays: keyword arguments with arrays\n :rtype: DataFrame"}
{"_id": "q_1980", "text": "Creates a zeldovich DataFrame."}
{"_id": "q_1981", "text": "Concatenate a list of DataFrames.\n\n :rtype: DataFrame"}
{"_id": "q_1982", "text": "Add a dataset and add it to the UI"}
{"_id": "q_1983", "text": "Decorator to transparently accept delayed computation.\n\n Example:\n\n >>> delayed_sum = ds.sum(ds.E, binby=ds.x, limits=limits,\n >>> shape=4, delay=True)\n >>> @vaex.delayed\n >>> def total_sum(sums):\n >>> return sums.sum()\n >>> sum_of_sums = total_sum(delayed_sum)\n >>> ds.execute()\n >>> sum_of_sums.get()\n See the tutorial for a more complete example https://docs.vaex.io/en/latest/tutorial.html#Parallel-computations"}
{"_id": "q_1984", "text": "Helper function for returning tasks results, result when immediate is True, otherwise the task itself, which is a promise"}
{"_id": "q_1985", "text": "Sort table by given column number."}
{"_id": "q_1986", "text": "Used for unittesting to make sure the plots are all done"}
{"_id": "q_1987", "text": "Evaluates an expression and drops the result; useful for benchmarking, since vaex is usually lazy"}
{"_id": "q_1988", "text": "Calculate the sum for the given expression, possibly on a grid defined by binby\n\n Example:\n\n >>> df.sum(\"L\")\n 304054882.49378014\n >>> df.sum(\"L\", binby=\"E\", shape=4)\n array([ 8.83517994e+06, 5.92217598e+07, 9.55218726e+07,\n 1.40008776e+08])\n\n :param expression: {expression}\n :param binby: {binby}\n :param limits: {limits}\n :param shape: {shape}\n :param selection: {selection}\n :param delay: {delay}\n :param progress: {progress}\n :return: {return_stat_scalar}"}
{"_id": "q_1989", "text": "Calculate the standard deviation for the given expression, possibly on a grid defined by binby\n\n\n >>> df.std(\"vz\")\n 110.31773397535071\n >>> df.std(\"vz\", binby=[\"(x**2+y**2)**0.5\"], shape=4)\n array([ 123.57954851, 85.35190177, 61.14345748, 38.0740619 ])\n\n :param expression: {expression}\n :param binby: {binby}\n :param limits: {limits}\n :param shape: {shape}\n :param selection: {selection}\n :param delay: {delay}\n :param progress: {progress}\n :return: {return_stat_scalar}"}
{"_id": "q_1990", "text": "Calculate the covariance matrix for x and y or more expressions, possibly on a grid defined by binby.\n\n Either x and y are expressions, e.g.:\n\n >>> df.cov(\"x\", \"y\")\n\n Or only the x argument is given with a list of expressions, e.g.:\n\n >>> df.cov([\"x\", \"y\", \"z\"])\n\n Example:\n\n >>> df.cov(\"x\", \"y\")\n array([[ 53.54521742, -3.8123135 ],\n [ -3.8123135 , 60.62257881]])\n >>> df.cov([\"x\", \"y\", \"z\"])\n array([[ 53.54521742, -3.8123135 , -0.98260511],\n [ -3.8123135 , 60.62257881, 1.21381057],\n [ -0.98260511, 1.21381057, 25.55517638]])\n\n >>> df.cov(\"x\", \"y\", binby=\"E\", shape=2)\n array([[[ 9.74852878e+00, -3.02004780e-02],\n [ -3.02004780e-02, 9.99288215e+00]],\n [[ 8.43996546e+01, -6.51984181e+00],\n [ -6.51984181e+00, 9.68938284e+01]]])\n\n\n :param x: {expression}\n :param y: {expression_single}\n :param binby: {binby}\n :param limits: {limits}\n :param shape: {shape}\n :param selection: {selection}\n :param delay: {delay}\n :return: {return_stat_scalar}, the last dimensions are of shape (2,2)"}
{"_id": "q_1991", "text": "Calculate the median, possibly on a grid defined by binby.\n\n NOTE: this value is approximated by calculating the cumulative distribution on a grid defined by\n percentile_shape and percentile_limits\n\n\n :param expression: {expression}\n :param binby: {binby}\n :param limits: {limits}\n :param shape: {shape}\n :param percentile_limits: {percentile_limits}\n :param percentile_shape: {percentile_shape}\n :param selection: {selection}\n :param delay: {delay}\n :return: {return_stat_scalar}"}
{"_id": "q_1992", "text": "Viz data in 2d using a healpix column.\n\n :param healpix_expression: {healpix_max_level}\n :param healpix_max_level: {healpix_max_level}\n :param healpix_level: {healpix_level}\n :param what: {what}\n :param selection: {selection}\n :param grid: {grid}\n :param healpix_input: Specify if the healpix index is in \"equatorial\", \"galactic\" or \"ecliptic\".\n :param healpix_output: Plot in \"equatorial\", \"galactic\" or \"ecliptic\".\n :param f: function to apply to the data\n :param colormap: matplotlib colormap\n :param grid_limits: Optional sequence [minvalue, maxvalue] that determine the min and max value that map to the colormap (values below and above these are clipped to the min/max). (default is [min(f(grid)), max(f(grid)))\n :param image_size: size for the image that healpy uses for rendering\n :param nest: If the healpix data is in nested (True) or ring (False)\n :param figsize: If given, modify the matplotlib figure size. Example (14,9)\n :param interactive: (Experimental, uses healpy.mollzoom if True)\n :param title: Title of figure\n :param smooth: apply gaussian smoothing, in degrees\n :param show: Call matplotlib's show (True) or not (False, default)\n :param rotation: Rotate the plot, in format (lon, lat, psi) such that (lon, lat) is the center, and rotate on the screen by angle psi. All angles are degrees.\n :return:"}
{"_id": "q_1993", "text": "Use at own risk, requires ipyvolume"}
{"_id": "q_1994", "text": "Return the numpy dtype for the given expression, if not a column, the first row will be evaluated to get the dtype."}
{"_id": "q_1995", "text": "Each DataFrame has a directory where files are stored for metadata etc.\n\n Example\n\n >>> import vaex\n >>> ds = vaex.example()\n >>> vaex.get_private_dir()\n '/Users/users/breddels/.vaex/dfs/_Users_users_breddels_vaex-testing_data_helmi-dezeeuw-2000-10p.hdf5'\n\n :param bool create: is True, it will create the directory if it does not exist"}
{"_id": "q_1996", "text": "Return the internal state of the DataFrame in a dictionary\n\n Example:\n\n >>> import vaex\n >>> df = vaex.from_scalars(x=1, y=2)\n >>> df['r'] = (df.x**2 + df.y**2)**0.5\n >>> df.state_get()\n {'active_range': [0, 1],\n 'column_names': ['x', 'y', 'r'],\n 'description': None,\n 'descriptions': {},\n 'functions': {},\n 'renamed_columns': [],\n 'selections': {'__filter__': None},\n 'ucds': {},\n 'units': {},\n 'variables': {},\n 'virtual_columns': {'r': '(((x ** 2) + (y ** 2)) ** 0.5)'}}"}
{"_id": "q_1997", "text": "Writes virtual columns, variables and their ucd,description and units.\n\n The default implementation is to write this to a file called virtual_meta.yaml in the directory defined by\n :func:`DataFrame.get_private_dir`. Other implementation may store this in the DataFrame file itself.\n\n This method is called after virtual columns or variables are added. Upon opening a file, :func:`DataFrame.update_virtual_meta`\n is called, so that the information is not lost between sessions.\n\n Note: opening a DataFrame twice may result in corruption of this file."}
{"_id": "q_1998", "text": "Writes all meta data, ucd,description and units\n\n The default implementation is to write this to a file called meta.yaml in the directory defined by\n :func:`DataFrame.get_private_dir`. Other implementation may store this in the DataFrame file itself.\n (For instance the vaex hdf5 implementation does this)\n\n This method is called after virtual columns or variables are added. Upon opening a file, :func:`DataFrame.update_meta`\n is called, so that the information is not lost between sessions.\n\n Note: opening a DataFrame twice may result in corruption of this file."}
{"_id": "q_1999", "text": "Generate a Subspaces object, based on a custom list of expressions or all possible combinations based on\n dimension\n\n :param expressions_list: list of list of expressions, where the inner list defines the subspace\n :param dimensions: if given, generates a subspace with all possible combinations for that dimension\n :param exclude: list of"}
{"_id": "q_2000", "text": "Set the variable to an expression or value defined by expression_or_value.\n\n Example\n\n >>> df.set_variable(\"a\", 2.)\n >>> df.set_variable(\"b\", \"a**2\")\n >>> df.get_variable(\"b\")\n 'a**2'\n >>> df.evaluate_variable(\"b\")\n 4.0\n\n :param name: Name of the variable\n :param write: write variable to meta file\n :param expression: value or expression"}
{"_id": "q_2001", "text": "Evaluates the variable given by name."}
{"_id": "q_2002", "text": "Internal use, ignores the filter"}
{"_id": "q_2003", "text": "Return a dict containing the ndarray corresponding to the evaluated data\n\n :param column_names: list of column names, to export, when None DataFrame.get_column_names(strings=strings, virtual=virtual) is used\n :param selection: {selection}\n :param strings: argument passed to DataFrame.get_column_names when column_names is None\n :param virtual: argument passed to DataFrame.get_column_names when column_names is None\n :return: dict"}
{"_id": "q_2004", "text": "Return a copy of the DataFrame, if selection is None, it does not copy the data, it just has a reference\n\n :param column_names: list of column names, to copy, when None DataFrame.get_column_names(strings=strings, virtual=virtual) is used\n :param selection: {selection}\n :param strings: argument passed to DataFrame.get_column_names when column_names is None\n :param virtual: argument passed to DataFrame.get_column_names when column_names is None\n :param selections: copy selections to a new DataFrame\n :return: dict"}
{"_id": "q_2005", "text": "Return a pandas DataFrame containing the ndarray corresponding to the evaluated data\n\n If index is given, that column is used for the index of the dataframe.\n\n Example\n\n >>> df_pandas = df.to_pandas_df([\"x\", \"y\", \"z\"])\n >>> df_copy = vaex.from_pandas(df_pandas)\n\n :param column_names: list of column names, to export, when None DataFrame.get_column_names(strings=strings, virtual=virtual) is used\n :param selection: {selection}\n :param strings: argument passed to DataFrame.get_column_names when column_names is None\n :param virtual: argument passed to DataFrame.get_column_names when column_names is None\n :param index_column: if this column is given it is used for the index of the DataFrame\n :return: pandas.DataFrame object"}
{"_id": "q_2006", "text": "Returns a astropy table object containing the ndarrays corresponding to the evaluated data\n\n :param column_names: list of column names, to export, when None DataFrame.get_column_names(strings=strings, virtual=virtual) is used\n :param selection: {selection}\n :param strings: argument passed to DataFrame.get_column_names when column_names is None\n :param virtual: argument passed to DataFrame.get_column_names when column_names is None\n :param index: if this column is given it is used for the index of the DataFrame\n :return: astropy.table.Table object"}
{"_id": "q_2007", "text": "Add an in memory array as a column."}
{"_id": "q_2008", "text": "Renames a column; note this is only the in-memory name, this will not be reflected on disk"}
{"_id": "q_2009", "text": "Convert cartesian to polar coordinates\n\n :param x: expression for x\n :param y: expression for y\n :param radius_out: name for the virtual column for the radius\n :param azimuth_out: name for the virtual column for the azimuth angle\n :param propagate_uncertainties: {propagate_uncertainties}\n :param radians: if True, azimuth is in radians, defaults to degrees\n :return:"}
{"_id": "q_2010", "text": "Convert cartesian to polar velocities.\n\n :param x:\n :param y:\n :param vx:\n :param radius_polar: Optional expression for the radius, may lead to a better performance when given.\n :param vy:\n :param vr_out:\n :param vazimuth_out:\n :param propagate_uncertainties: {propagate_uncertainties}\n :return:"}
{"_id": "q_2011", "text": "Rotation in 2d.\n\n :param str x: Name/expression of x column\n :param str y: idem for y\n :param str xnew: name of transformed x column\n :param str ynew:\n :param float angle_degrees: rotation in degrees, anti clockwise\n :return:"}
{"_id": "q_2012", "text": "Convert spherical to cartesian coordinates.\n\n\n\n :param alpha:\n :param delta: polar angle, ranging from the -90 (south pole) to 90 (north pole)\n :param distance: radial distance, determines the units of x, y and z\n :param xname:\n :param yname:\n :param zname:\n :param propagate_uncertainties: {propagate_uncertainties}\n :param center:\n :param center_name:\n :param radians:\n :return:"}
{"_id": "q_2013", "text": "Convert cartesian to spherical coordinates.\n\n\n\n :param x:\n :param y:\n :param z:\n :param alpha:\n :param delta: name for polar angle, ranges from -90 to 90 (or -pi to pi when radians is True).\n :param distance:\n :param radians:\n :param center:\n :param center_name:\n :return:"}
{"_id": "q_2014", "text": "Add a virtual column to the DataFrame.\n\n Example:\n\n >>> df.add_virtual_column(\"r\", \"sqrt(x**2 + y**2 + z**2)\")\n >>> df.select(\"r < 10\")\n\n :param: str name: name of virtual column\n :param: expression: expression for the column\n :param str unique: if name is already used, make it unique by adding a postfix, e.g. _1, or _2"}
{"_id": "q_2015", "text": "Deletes a virtual column from a DataFrame."}
{"_id": "q_2016", "text": "Add a variable to a DataFrame.\n\n A variable may refer to other variables, and virtual columns and expression may refer to variables.\n\n Example\n\n >>> df.add_variable('center', 0)\n >>> df.add_virtual_column('x_prime', 'x-center')\n >>> df.select('x_prime < 0')\n\n :param: str name: name of virtual variable\n :param: expression: expression for the variable"}
{"_id": "q_2017", "text": "Display the first and last n elements of a DataFrame."}
{"_id": "q_2018", "text": "Give a description of the DataFrame.\n\n >>> import vaex\n >>> df = vaex.example()[['x', 'y', 'z']]\n >>> df.describe()\n x y z\n dtype float64 float64 float64\n count 330000 330000 330000\n missing 0 0 0\n mean -0.0671315 -0.0535899 0.0169582\n std 7.31746 7.78605 5.05521\n min -128.294 -71.5524 -44.3342\n max 271.366 146.466 50.7185\n >>> df.describe(selection=df.x > 0)\n x y z\n dtype float64 float64 float64\n count 164060 164060 164060\n missing 165940 165940 165940\n mean 5.13572 -0.486786 -0.0868073\n std 5.18701 7.61621 5.02831\n min 1.51635e-05 -71.5524 -44.3342\n max 271.366 78.0724 40.2191\n\n :param bool strings: Describe string columns or not\n :param bool virtual: Describe virtual columns or not\n :param selection: Optional selection to use.\n :return: Pandas dataframe"}
{"_id": "q_2019", "text": "Set the current row, and emit the signal signal_pick."}
{"_id": "q_2020", "text": "Return a DataFrame, where all columns are 'trimmed' by the active range.\n\n For the returned DataFrame, df.get_active_range() returns (0, df.length_original()).\n\n {note_copy}\n\n :param inplace: {inplace}\n :rtype: DataFrame"}
{"_id": "q_2021", "text": "Returns a DataFrame containing only rows indexed by indices\n\n {note_copy}\n\n Example:\n\n >>> import vaex, numpy as np\n >>> df = vaex.from_arrays(s=np.array(['a', 'b', 'c', 'd']), x=np.arange(1,5))\n >>> df.take([0,2])\n # s x\n 0 a 1\n 1 c 3\n\n :param indices: sequence (list or numpy array) with row numbers\n :return: DataFrame which is a shallow copy of the original data.\n :rtype: DataFrame"}
{"_id": "q_2022", "text": "Return a DataFrame containing only the filtered rows.\n\n {note_copy}\n\n The resulting DataFrame may be more efficient to work with when the original DataFrame is\n heavily filtered (contains just a small number of rows).\n\n If no filtering is applied, it returns a trimmed view.\n For the returned df, len(df) == df.length_original() == df.length_unfiltered()\n\n :rtype: DataFrame"}
{"_id": "q_2023", "text": "Returns a list containing random portions of the DataFrame.\n\n {note_copy}\n\n Example:\n\n >>> import vaex, numpy as np\n >>> np.random.seed(111)\n >>> df = vaex.from_arrays(x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n >>> for dfs in df.split_random(frac=0.3, random_state=42):\n ... print(dfs.x.values)\n ...\n [8 1 5]\n [0 7 2 9 4 3 6]\n >>> for dfs in df.split_random(frac=[0.2, 0.3, 0.5], random_state=42):\n ... print(dfs.x.values)\n [8 1]\n [5 0 7]\n [2 9 4 3 6]\n\n :param int/list frac: If int will split the DataFrame in two portions, the first of which will have size as specified by this parameter. If list, the generator will generate as many portions as elements in the list, where each element defines the relative fraction of that portion.\n :param int random_state: (default, None) Random number seed for reproducibility.\n :return: A list of DataFrames.\n :rtype: list"}
{"_id": "q_2024", "text": "Returns a list containing ordered subsets of the DataFrame.\n\n {note_copy}\n\n Example:\n\n >>> import vaex\n >>> df = vaex.from_arrays(x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n >>> for dfs in df.split(frac=0.3):\n ... print(dfs.x.values)\n ...\n [0 1 2]\n [3 4 5 6 7 8 9]\n >>> for dfs in df.split(frac=[0.2, 0.3, 0.5]):\n ... print(dfs.x.values)\n [0 1]\n [2 3 4]\n [5 6 7 8 9]\n\n :param int/list frac: If int, will split the DataFrame in two portions, the first of which will have size as specified by this parameter. If list, the generator will generate as many portions as elements in the list, where each element defines the relative fraction of that portion.\n :return: A list of DataFrames.\n :rtype: list"}
{"_id": "q_2025", "text": "Undo selection, for the name."}
{"_id": "q_2026", "text": "Can selection name be redone?"}
{"_id": "q_2027", "text": "Perform a selection, defined by the boolean expression, and combined with the previous selection using the given mode.\n\n Selections are recorded in a history tree, per name, undo/redo can be done for them separately.\n\n :param str boolean_expression: Any valid column expression, with comparison operators\n :param str mode: Possible boolean operator: replace/and/or/xor/subtract\n :param str name: history tree or selection 'slot' to use\n :param executor:\n :return:"}
{"_id": "q_2028", "text": "Create a selection that selects rows having non-missing values for all columns in column_names.\n\n The name reflects Pandas'; no rows are really dropped, but a mask is kept to keep track of the selection\n\n :param drop_nan: drop rows when there is a NaN in any of the columns (will only affect float values)\n :param drop_masked: drop rows when there is a masked value in any of the columns\n :param column_names: The columns to consider, default: all (real, non-virtual) columns\n :param str mode: Possible boolean operator: replace/and/or/xor/subtract\n :param str name: history tree or selection 'slot' to use\n :return:"}
{"_id": "q_2029", "text": "Create a shallow copy of a DataFrame, with filtering set using select_non_missing.\n\n :param drop_nan: drop rows when there is a NaN in any of the columns (will only affect float values)\n :param drop_masked: drop rows when there is a masked value in any of the columns\n :param column_names: The columns to consider, default: all (real, non-virtual) columns\n :rtype: DataFrame"}
{"_id": "q_2030", "text": "Select a 2d rectangular box in the space given by x and y, bounded by limits.\n\n Example:\n\n >>> df.select_box('x', 'y', [(0, 10), (0, 1)])\n\n :param x: expression for the x space\n :param y: expression for the y space\n :param limits: sequence of shape [(x1, x2), (y1, y2)]\n :param mode:"}
{"_id": "q_2031", "text": "Select a n-dimensional rectangular box bounded by limits.\n\n The following examples are equivalent:\n\n >>> df.select_box(['x', 'y'], [(0, 10), (0, 1)])\n >>> df.select_rectangle('x', 'y', [(0, 10), (0, 1)])\n\n :param spaces: list of expressions\n :param limits: sequence of shape [(x1, x2), (y1, y2)]\n :param mode:\n :param name:\n :return:"}
{"_id": "q_2032", "text": "Select an elliptical region centred on xc, yc, with a certain width, height\n and angle.\n\n Example:\n\n >>> df.select_ellipse('x', 'y', 2, -1, 5, 1, 30, name='my_ellipse')\n\n :param x: expression for the x space\n :param y: expression for the y space\n :param xc: location of the centre of the ellipse in x\n :param yc: location of the centre of the ellipse in y\n :param width: the width of the ellipse (diameter)\n :param height: the height of the ellipse (diameter)\n :param angle: (degrees) orientation of the ellipse, counter-clockwise\n measured from the y axis\n :param name: name of the selection\n :param mode:\n :return:"}
{"_id": "q_2033", "text": "Invert the selection, i.e. what is selected will not be, and vice versa\n\n :param str name:\n :param executor:\n :return:"}
{"_id": "q_2034", "text": "Sets the selection object\n\n :param selection: Selection object\n :param name: selection 'slot'\n :param executor:\n :return:"}
{"_id": "q_2035", "text": "Finds a non-colliding name by optional postfixing"}
{"_id": "q_2036", "text": "Return a graphviz.Digraph object with a graph of all virtual columns"}
{"_id": "q_2037", "text": "Mark column as categorical, with given labels, assuming zero indexing"}
{"_id": "q_2038", "text": "Gives direct access to the data as numpy arrays.\n\n Convenient when working with IPython in combination with small DataFrames, since this gives tab-completion.\n Only real (i.e. non-virtual) columns can be accessed; for getting the data from virtual columns, use\n DataFrame.evaluate(...).\n\n Columns can be accessed by their names, which are attributes. The attributes are of type numpy.ndarray.\n\n Example:\n\n >>> df = vaex.example()\n >>> r = np.sqrt(df.data.x**2 + df.data.y**2)"}
{"_id": "q_2039", "text": "Get the length of the DataFrame, for the selection or the whole DataFrame.\n\n If selection is False, it returns len(df).\n\n TODO: Implement this in DataFrameRemote, and move the method up in :func:`DataFrame.length`\n\n :param selection: When True, will return the number of selected rows\n :return:"}
{"_id": "q_2040", "text": "Join the columns of the other DataFrame to this one, assuming the ordering is the same"}
{"_id": "q_2041", "text": "Concatenates two DataFrames, adding the rows of the other DataFrame to the current one, returned in a new DataFrame.\n\n No copy of the data is made.\n\n :param other: The other DataFrame that is concatenated with this DataFrame\n :return: New DataFrame with the rows concatenated\n :rtype: DataFrameConcatenated"}
{"_id": "q_2042", "text": "Exports the DataFrame to a vaex hdf5 file\n\n :param DataFrameLocal df: DataFrame to export\n :param str path: path for file\n :param list[str] column_names: list of column names to export or None for all columns\n :param str byteorder: = for native, < for little endian and > for big endian\n :param bool shuffle: export rows in random order\n :param bool selection: export selection or not\n :param progress: progress callback that gets a progress fraction as argument and should return True to continue,\n or a default progress bar when progress=True\n :param bool virtual: When True, export virtual columns\n :param str sort: expression used for sorting the output\n :param bool ascending: sort ascending (True) or descending (False)\n :return:"}
{"_id": "q_2043", "text": "Add a column to the DataFrame\n\n :param str name: name of column\n :param data: numpy array with the data"}
{"_id": "q_2044", "text": "Adds method f to the DataFrame class"}
{"_id": "q_2045", "text": "Returns an array where missing values are replaced by value.\n\n If the dtype is object, nan values and 'nan' string values\n are replaced by value when fill_nan==True."}
{"_id": "q_2046", "text": "Obtain the day of the week with Monday=0 and Sunday=6\n\n :returns: an expression containing the day of week.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.dayofweek\n Expression = dt_dayofweek(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 0\n 1 3\n 2 3"}
{"_id": "q_2047", "text": "The ordinal day of the year.\n\n :returns: an expression containing the ordinal day of the year.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.dayofyear\n Expression = dt_dayofyear(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 285\n 1 42\n 2 316"}
{"_id": "q_2048", "text": "Extracts the month out of a datetime sample.\n\n :returns: an expression containing the month extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.month\n Expression = dt_month(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 10\n 1 2\n 2 11"}
{"_id": "q_2049", "text": "Returns the month names of a datetime sample in English.\n\n :returns: an expression containing the month names extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.month_name\n Expression = dt_month_name(date)\n Length: 3 dtype: str (expression)\n ---------------------------------\n 0 October\n 1 February\n 2 November"}
{"_id": "q_2050", "text": "Returns the day names of a datetime sample in English.\n\n :returns: an expression containing the day names extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.day_name\n Expression = dt_day_name(date)\n Length: 3 dtype: str (expression)\n ---------------------------------\n 0 Monday\n 1 Thursday\n 2 Thursday"}
{"_id": "q_2051", "text": "Returns the week ordinal of the year.\n\n :returns: an expression containing the week ordinal of the year, extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.weekofyear\n Expression = dt_weekofyear(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 42\n 1 6\n 2 46"}
{"_id": "q_2052", "text": "Extracts the hour out of a datetime sample.\n\n :returns: an expression containing the hour extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.hour\n Expression = dt_hour(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 3\n 1 10\n 2 11"}
{"_id": "q_2053", "text": "Extracts the minute out of a datetime sample.\n\n :returns: an expression containing the minute extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.minute\n Expression = dt_minute(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 31\n 1 17\n 2 34"}
{"_id": "q_2054", "text": "Capitalize the first letter of a string sample.\n\n :returns: an expression containing the capitalized strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.capitalize()\n Expression = str_capitalize(text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Something\n 1 Very pretty\n 2 Is coming\n 3 Our\n 4 Way."}
{"_id": "q_2055", "text": "Concatenate two string columns on a row-by-row basis.\n\n :param expression other: The expression of the other column to be concatenated.\n :returns: an expression containing the concatenated columns.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.cat(df.text)\n Expression = str_cat(text, text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 SomethingSomething\n 1 very prettyvery pretty\n 2 is comingis coming\n 3 ourour\n 4 way.way."}
{"_id": "q_2056", "text": "Check if a string pattern or regex is contained within a sample of a string column.\n\n :param str pattern: A string or regex pattern\n :param bool regex: If True, treat the pattern as a regular expression; otherwise match it literally.\n :returns: an expression which is evaluated to True if the pattern is found in a given sample, and it is False otherwise.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.contains('very')\n Expression = str_contains(text, 'very')\n Length: 5 dtype: bool (expression)\n ----------------------------------\n 0 False\n 1 True\n 2 False\n 3 False\n 4 False"}
{"_id": "q_2057", "text": "Count the occurrences of a pattern in each sample of a string column.\n\n :param str pat: A string or regex pattern\n :param bool regex: If True, treat the pattern as a regular expression; otherwise match it literally.\n :returns: an expression containing the number of times a pattern is found in each sample.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.count(pat=\"et\", regex=False)\n Expression = str_count(text, pat='et', regex=False)\n Length: 5 dtype: int64 (expression)\n -----------------------------------\n 0 1\n 1 1\n 2 0\n 3 0\n 4 0"}
{"_id": "q_2058", "text": "Returns the lowest indices in each string in a column, where the provided substring is fully contained within a\n sample. If the substring is not found, -1 is returned.\n\n :param str sub: A substring to be found in the samples\n :param int start:\n :param int end:\n :returns: an expression containing the lowest indices specifying the start of the substring.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.find(sub=\"et\")\n Expression = str_find(text, sub='et')\n Length: 5 dtype: int64 (expression)\n -----------------------------------\n 0 3\n 1 7\n 2 -1\n 3 -1\n 4 -1"}
{"_id": "q_2059", "text": "Converts string samples to lower case.\n\n :returns: an expression containing the converted strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.lower()\n Expression = str_lower(text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way."}
{"_id": "q_2060", "text": "Remove leading characters from a string sample.\n\n :param str to_strip: The string to be removed\n :returns: an expression containing the modified string column.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.lstrip(to_strip='very ')\n Expression = str_lstrip(text, to_strip='very ')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Something\n 1 pretty\n 2 is coming\n 3 our\n 4 way."}
{"_id": "q_2061", "text": "Pad strings in a given column.\n\n :param int width: The total width of the string\n :param str side: If 'left' then pad on the left, if 'right' then pad on the right side of the string.\n :param str fillchar: The character used for padding.\n :returns: an expression containing the padded strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.pad(width=10, side='left', fillchar='!')\n Expression = str_pad(text, width=10, side='left', fillchar='!')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 !Something\n 1 very pretty\n 2 !is coming\n 3 !!!!!!!our\n 4 !!!!!!way."}
{"_id": "q_2062", "text": "Duplicate each string in a column.\n\n :param int repeats: number of times each string sample is to be duplicated.\n :returns: an expression containing the duplicated strings\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.repeat(3)\n Expression = str_repeat(text, 3)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 SomethingSomethingSomething\n 1 very prettyvery prettyvery pretty\n 2 is comingis comingis coming\n 3 ourourour\n 4 way.way.way."}
{"_id": "q_2063", "text": "Returns the highest indices in each string in a column, where the provided substring is fully contained within a\n sample. If the substring is not found, -1 is returned.\n\n :param str sub: A substring to be found in the samples\n :param int start:\n :param int end:\n :returns: an expression containing the highest indices specifying the start of the substring.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.rfind(sub=\"et\")\n Expression = str_rfind(text, sub='et')\n Length: 5 dtype: int64 (expression)\n -----------------------------------\n 0 3\n 1 7\n 2 -1\n 3 -1\n 4 -1"}
{"_id": "q_2064", "text": "Returns the highest indices in each string in a column, where the provided substring is fully contained within a\n sample. If the substring is not found, -1 is returned. Same as `str.rfind`.\n\n :param str sub: A substring to be found in the samples\n :param int start:\n :param int end:\n :returns: an expression containing the highest indices specifying the start of the substring.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.rindex(sub=\"et\")\n Expression = str_rindex(text, sub='et')\n Length: 5 dtype: int64 (expression)\n -----------------------------------\n 0 3\n 1 7\n 2 -1\n 3 -1\n 4 -1"}
{"_id": "q_2065", "text": "Fills the left side of string samples with a specified character such that the strings are right-hand justified.\n\n :param int width: The minimal width of the strings.\n :param str fillchar: The character used for filling.\n :returns: an expression containing the filled strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.rjust(width=10, fillchar='!')\n Expression = str_rjust(text, width=10, fillchar='!')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 !Something\n 1 very pretty\n 2 !is coming\n 3 !!!!!!!our\n 4 !!!!!!way."}
{"_id": "q_2066", "text": "Remove trailing characters from a string sample.\n\n :param str to_strip: The string to be removed\n :returns: an expression containing the modified string column.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.rstrip(to_strip='ing')\n Expression = str_rstrip(text, to_strip='ing')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Someth\n 1 very pretty\n 2 is com\n 3 our\n 4 way."}
{"_id": "q_2067", "text": "Slice substrings from each string element in a column.\n\n :param int start: The start position for the slice operation.\n :param int stop: The stop position for the slice operation.\n :returns: an expression containing the sliced substrings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.slice(start=2, stop=5)\n Expression = str_pandas_slice(text, start=2, stop=5)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 met\n 1 ry\n 2 co\n 3 r\n 4 y."}
{"_id": "q_2068", "text": "Removes leading and trailing characters.\n\n Strips whitespaces (including new lines), or a set of specified\n characters from each string sample in a column, from both the left\n and right sides.\n\n :param str to_strip: The characters to be removed. All combinations of the characters will be removed.\n If None, it removes whitespaces.\n :returns: an expression containing the modified string samples.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.strip(to_strip='very')\n Expression = str_strip(text, to_strip='very')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Something\n 1 prett\n 2 is coming\n 3 ou\n 4 way."}
{"_id": "q_2069", "text": "Converts all string samples to titlecase.\n\n :returns: an expression containing the converted strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.title()\n Expression = str_title(text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Something\n 1 Very Pretty\n 2 Is Coming\n 3 Our\n 4 Way."}
{"_id": "q_2070", "text": "Converts all strings in a column to uppercase.\n\n :returns: an expression containing the converted strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n\n >>> df.text.str.upper()\n Expression = str_upper(text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 SOMETHING\n 1 VERY PRETTY\n 2 IS COMING\n 3 OUR\n 4 WAY."}
{"_id": "q_2071", "text": "Writes a comment to the file in Java properties format.\n\n Newlines in the comment text are automatically turned into a continuation\n of the comment by adding a \"#\" to the beginning of each line.\n\n :param fh: a writable file-like object\n :param comment: comment string to write"}
{"_id": "q_2072", "text": "Incrementally read properties from a Java .properties file.\n\n Yields tuples of key/value pairs.\n\n If ``comments`` is `True`, comments will be included with ``jprops.COMMENT``\n in place of the key.\n\n :param fh: a readable file-like object\n :param comments: should include comments (default: False)"}
{"_id": "q_2073", "text": "Return the version information for all librosa dependencies."}
{"_id": "q_2074", "text": "Handle renamed arguments.\n\n Parameters\n ----------\n old_name : str\n old_value\n The name and value of the old argument\n\n new_name : str\n new_value\n The name and value of the new argument\n\n version_deprecated : str\n The version at which the old name became deprecated\n\n version_removed : str\n The version at which the old name will be removed\n\n Returns\n -------\n value\n - `new_value` if `old_value` of type `Deprecated`\n - `old_value` otherwise\n\n Warnings\n --------\n if `old_value` is not of type `Deprecated`"}
{"_id": "q_2075", "text": "Set the FFT library used by librosa.\n\n Parameters\n ----------\n lib : None or module\n Must implement an interface compatible with `numpy.fft`.\n If `None`, reverts to `numpy.fft`.\n\n Examples\n --------\n Use `pyfftw`:\n\n >>> import pyfftw\n >>> librosa.set_fftlib(pyfftw.interfaces.numpy_fft)\n\n Reset to default `numpy` implementation\n\n >>> librosa.set_fftlib()"}
{"_id": "q_2076", "text": "Beat tracking function\n\n :parameters:\n - input_file : str\n Path to input audio file (wav, mp3, m4a, flac, etc.)\n\n - output_file : str\n Path to save beat event timestamps as a CSV file"}
{"_id": "q_2077", "text": "Load audio, estimate tuning, apply pitch correction, and save."}
{"_id": "q_2078", "text": "Convert one or more MIDI numbers to note strings.\n\n MIDI numbers will be rounded to the nearest integer.\n\n Notes will be of the format 'C0', 'C#0', 'D0', ...\n\n Examples\n --------\n >>> librosa.midi_to_note(0)\n 'C-1'\n >>> librosa.midi_to_note(37)\n 'C#2'\n >>> librosa.midi_to_note(-2)\n 'A#-2'\n >>> librosa.midi_to_note(104.7)\n 'A7'\n >>> librosa.midi_to_note(104.7, cents=True)\n 'A7-30'\n >>> librosa.midi_to_note(list(range(12, 24)))\n ['C0', 'C#0', 'D0', 'D#0', 'E0', 'F0', 'F#0', 'G0', 'G#0', 'A0', 'A#0', 'B0']\n\n Parameters\n ----------\n midi : int or iterable of int\n Midi numbers to convert.\n\n octave: bool\n If True, include the octave number\n\n cents: bool\n If true, cent markers will be appended for fractional notes.\n Eg, `midi_to_note(69.3, cents=True)` == `A4+03`\n\n Returns\n -------\n notes : str or iterable of str\n Strings describing each midi note.\n\n Raises\n ------\n ParameterError\n if `cents` is True and `octave` is False\n\n See Also\n --------\n midi_to_hz\n note_to_midi\n hz_to_note"}
{"_id": "q_2079", "text": "Convert Hz to Mels\n\n Examples\n --------\n >>> librosa.hz_to_mel(60)\n 0.9\n >>> librosa.hz_to_mel([110, 220, 440])\n array([ 1.65, 3.3 , 6.6 ])\n\n Parameters\n ----------\n frequencies : number or np.ndarray [shape=(n,)] , float\n scalar or array of frequencies\n htk : bool\n use HTK formula instead of Slaney\n\n Returns\n -------\n mels : number or np.ndarray [shape=(n,)]\n input frequencies in Mels\n\n See Also\n --------\n mel_to_hz"}
{"_id": "q_2080", "text": "Convert mel bin numbers to frequencies\n\n Examples\n --------\n >>> librosa.mel_to_hz(3)\n 200.\n\n >>> librosa.mel_to_hz([1,2,3,4,5])\n array([ 66.667, 133.333, 200. , 266.667, 333.333])\n\n Parameters\n ----------\n mels : np.ndarray [shape=(n,)], float\n mel bins to convert\n htk : bool\n use HTK formula instead of Slaney\n\n Returns\n -------\n frequencies : np.ndarray [shape=(n,)]\n input mels in Hz\n\n See Also\n --------\n hz_to_mel"}
{"_id": "q_2081", "text": "Alternative implementation of `np.fft.fftfreq`\n\n Parameters\n ----------\n sr : number > 0 [scalar]\n Audio sampling rate\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n\n Returns\n -------\n freqs : np.ndarray [shape=(1 + n_fft/2,)]\n Frequencies `(0, sr/n_fft, 2*sr/n_fft, ..., sr/2)`\n\n\n Examples\n --------\n >>> librosa.fft_frequencies(sr=22050, n_fft=16)\n array([ 0. , 1378.125, 2756.25 , 4134.375,\n 5512.5 , 6890.625, 8268.75 , 9646.875, 11025. ])"}
{"_id": "q_2082", "text": "Compute the A-weighting of a set of frequencies.\n\n Parameters\n ----------\n frequencies : scalar or np.ndarray [shape=(n,)]\n One or more frequencies (in Hz)\n\n min_db : float [scalar] or None\n Clip weights below this threshold.\n If `None`, no clipping is performed.\n\n Returns\n -------\n A_weighting : scalar or np.ndarray [shape=(n,)]\n `A_weighting[i]` is the A-weighting of `frequencies[i]`\n\n See Also\n --------\n perceptual_weighting\n\n\n Examples\n --------\n\n Get the A-weighting for CQT frequencies\n\n >>> import matplotlib.pyplot as plt\n >>> freqs = librosa.cqt_frequencies(108, librosa.note_to_hz('C1'))\n >>> aw = librosa.A_weighting(freqs)\n >>> plt.plot(freqs, aw)\n >>> plt.xlabel('Frequency (Hz)')\n >>> plt.ylabel('Weighting (log10)')\n >>> plt.title('A-Weighting of CQT frequencies')"}
{"_id": "q_2083", "text": "Return an array of time values to match the time axis from a feature matrix.\n\n Parameters\n ----------\n X : np.ndarray or scalar\n - If ndarray, X is a feature matrix, e.g. STFT, chromagram, or mel spectrogram.\n - If scalar, X represents the number of frames.\n\n sr : number > 0 [scalar]\n audio sampling rate\n\n hop_length : int > 0 [scalar]\n number of samples between successive frames\n\n n_fft : None or int > 0 [scalar]\n Optional: length of the FFT window.\n If given, time conversion will include an offset of `n_fft / 2`\n to counteract windowing effects when using a non-centered STFT.\n\n axis : int [scalar]\n The axis representing the time axis of X.\n By default, the last axis (-1) is taken.\n\n Returns\n -------\n times : np.ndarray [shape=(n,)]\n ndarray of times (in seconds) corresponding to each frame of X.\n\n See Also\n --------\n samples_like : Return an array of sample indices to match the time axis from a feature matrix.\n\n Examples\n --------\n Provide a feature matrix input:\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> X = librosa.stft(y)\n >>> times = librosa.times_like(X)\n >>> times\n array([ 0.00000000e+00, 2.32199546e-02, 4.64399093e-02, ...,\n 6.13935601e+01, 6.14167800e+01, 6.14400000e+01])\n\n Provide a scalar input:\n\n >>> n_frames = 2647\n >>> times = librosa.times_like(n_frames)\n >>> times\n array([ 0.00000000e+00, 2.32199546e-02, 4.64399093e-02, ...,\n 6.13935601e+01, 6.14167800e+01, 6.14400000e+01])"}
{"_id": "q_2084", "text": "Return an array of sample indices to match the time axis from a feature matrix.\n\n Parameters\n ----------\n X : np.ndarray or scalar\n - If ndarray, X is a feature matrix, e.g. STFT, chromagram, or mel spectrogram.\n - If scalar, X represents the number of frames.\n\n hop_length : int > 0 [scalar]\n number of samples between successive frames\n\n n_fft : None or int > 0 [scalar]\n Optional: length of the FFT window.\n If given, time conversion will include an offset of `n_fft / 2`\n to counteract windowing effects when using a non-centered STFT.\n\n axis : int [scalar]\n The axis representing the time axis of X.\n By default, the last axis (-1) is taken.\n\n Returns\n -------\n samples : np.ndarray [shape=(n,)]\n ndarray of sample indices corresponding to each frame of X.\n\n See Also\n --------\n times_like : Return an array of time values to match the time axis from a feature matrix.\n\n Examples\n --------\n Provide a feature matrix input:\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> X = librosa.stft(y)\n >>> samples = librosa.samples_like(X)\n >>> samples\n array([ 0, 512, 1024, ..., 1353728, 1354240, 1354752])\n\n Provide a scalar input:\n\n >>> n_frames = 2647\n >>> samples = librosa.samples_like(n_frames)\n >>> samples\n array([ 0, 512, 1024, ..., 1353728, 1354240, 1354752])"}
{"_id": "q_2085", "text": "Compute the hybrid constant-Q transform of an audio signal.\n\n Here, the hybrid CQT uses the pseudo CQT for higher frequencies where\n the hop_length is longer than half the filter length and the full CQT\n for lower frequencies.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n hop_length : int > 0 [scalar]\n number of samples between successive CQT columns.\n\n fmin : float > 0 [scalar]\n Minimum frequency. Defaults to C1 ~= 32.70 Hz\n\n n_bins : int > 0 [scalar]\n Number of frequency bins, starting at `fmin`\n\n bins_per_octave : int > 0 [scalar]\n Number of bins per octave\n\n tuning : None or float in `[-0.5, 0.5)`\n Tuning offset in fractions of a bin (cents).\n\n If `None`, tuning will be automatically estimated from the signal.\n\n filter_scale : float > 0\n Filter scale factor. Larger values use longer windows.\n\n sparsity : float in [0, 1)\n Sparsify the CQT basis by discarding up to `sparsity`\n fraction of the energy in each basis.\n\n Set `sparsity=0` to disable sparsification.\n\n window : str, tuple, number, or function\n Window specification for the basis filters.\n See `filters.get_window` for details.\n\n pad_mode : string\n Padding mode for centered frame analysis.\n\n See also: `librosa.core.stft` and `np.pad`.\n\n res_type : string\n Resampling mode. See `librosa.core.cqt` for details.\n\n Returns\n -------\n CQT : np.ndarray [shape=(n_bins, t), dtype=np.float]\n Constant-Q energy for each frequency at each time.\n\n Raises\n ------\n ParameterError\n If `hop_length` is not an integer multiple of\n `2**(n_bins / bins_per_octave)`\n\n Or if `y` is too short to support the frequency range of the CQT.\n\n See Also\n --------\n cqt\n pseudo_cqt\n\n Notes\n -----\n This function caches at level 20."}
{"_id": "q_2086", "text": "Compute the pseudo constant-Q transform of an audio signal.\n\n This uses a single fft size that is the smallest power of 2 that is greater\n than or equal to the max of:\n\n 1. The longest CQT filter\n 2. 2x the hop_length\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n hop_length : int > 0 [scalar]\n number of samples between successive CQT columns.\n\n fmin : float > 0 [scalar]\n Minimum frequency. Defaults to C1 ~= 32.70 Hz\n\n n_bins : int > 0 [scalar]\n Number of frequency bins, starting at `fmin`\n\n bins_per_octave : int > 0 [scalar]\n Number of bins per octave\n\n tuning : None or float in `[-0.5, 0.5)`\n Tuning offset in fractions of a bin (cents).\n\n If `None`, tuning will be automatically estimated from the signal.\n\n filter_scale : float > 0\n Filter filter_scale factor. Larger values use longer windows.\n\n sparsity : float in [0, 1)\n Sparsify the CQT basis by discarding up to `sparsity`\n fraction of the energy in each basis.\n\n Set `sparsity=0` to disable sparsification.\n\n window : str, tuple, number, or function\n Window specification for the basis filters.\n See `filters.get_window` for details.\n\n pad_mode : string\n Padding mode for centered frame analysis.\n\n See also: `librosa.core.stft` and `np.pad`.\n\n Returns\n -------\n CQT : np.ndarray [shape=(n_bins, t), dtype=np.float]\n Pseudo Constant-Q energy for each frequency at each time.\n\n Raises\n ------\n ParameterError\n If `hop_length` is not an integer multiple of\n `2**(n_bins / bins_per_octave)`\n\n Or if `y` is too short to support the frequency range of the CQT.\n\n Notes\n -----\n This function caches at level 20."}
{"_id": "q_2087", "text": "Generate the frequency domain constant-Q filter basis."}
{"_id": "q_2088", "text": "Helper function to trim and stack a collection of CQT responses"}
{"_id": "q_2089", "text": "Compute the filter response with a target STFT hop."}
{"_id": "q_2090", "text": "Compute the number of early downsampling operations"}
{"_id": "q_2091", "text": "Perform early downsampling on an audio signal, if it applies."}
{"_id": "q_2092", "text": "Calculate the accumulated cost matrix D.\n\n Use dynamic programming to calculate the accumulated costs.\n\n Parameters\n ----------\n C : np.ndarray [shape=(N, M)]\n pre-computed cost matrix\n\n D : np.ndarray [shape=(N, M)]\n accumulated cost matrix\n\n D_steps : np.ndarray [shape=(N, M)]\n steps which were used for calculating D\n\n step_sizes_sigma : np.ndarray [shape=[n, 2]]\n Specifies allowed step sizes as used by the dtw.\n\n weights_add : np.ndarray [shape=[n, ]]\n Additive weights to penalize certain step sizes.\n\n weights_mul : np.ndarray [shape=[n, ]]\n Multiplicative weights to penalize certain step sizes.\n\n max_0 : int\n maximum number of steps in step_sizes_sigma in dim 0.\n\n max_1 : int\n maximum number of steps in step_sizes_sigma in dim 1.\n\n Returns\n -------\n D : np.ndarray [shape=(N,M)]\n accumulated cost matrix.\n D[N,M] is the total alignment cost.\n When doing subsequence DTW, D[N,:] indicates a matching function.\n\n D_steps : np.ndarray [shape=(N,M)]\n steps which were used for calculating D.\n\n See Also\n --------\n dtw"}
{"_id": "q_2093", "text": "Backtrack optimal warping path.\n\n Uses the saved step sizes from the cost accumulation\n step to backtrack the index pairs for an optimal\n warping path.\n\n\n Parameters\n ----------\n D_steps : np.ndarray [shape=(N, M)]\n Saved indices of the used steps used in the calculation of D.\n\n step_sizes_sigma : np.ndarray [shape=[n, 2]]\n Specifies allowed step sizes as used by the dtw.\n\n Returns\n -------\n wp : list [shape=(N,)]\n Warping path with index pairs.\n Each list entry contains an index pair\n (n,m) as a tuple\n\n See Also\n --------\n dtw"}
{"_id": "q_2094", "text": "Core Viterbi algorithm.\n\n This is intended for internal use only.\n\n Parameters\n ----------\n log_prob : np.ndarray [shape=(T, m)]\n `log_prob[t, s]` is the conditional log-likelihood\n log P[X = X(t) | State(t) = s]\n\n log_trans : np.ndarray [shape=(m, m)]\n The log transition matrix\n `log_trans[i, j]` = log P[State(t+1) = j | State(t) = i]\n\n log_p_init : np.ndarray [shape=(m,)]\n log of the initial state distribution\n\n state : np.ndarray [shape=(T,), dtype=int]\n Pre-allocated state index array\n\n value : np.ndarray [shape=(T, m)] float\n Pre-allocated value array\n\n ptr : np.ndarray [shape=(T, m), dtype=int]\n Pre-allocated pointer array\n\n Returns\n -------\n None\n All computations are performed in-place on `state, value, ptr`."}
{"_id": "q_2095", "text": "Construct a self-loop transition matrix over `n_states`.\n\n The transition matrix will have the following properties:\n\n - `transition[i, i] = p` for all i\n - `transition[i, j] = (1 - p) / (n_states - 1)` for all `j != i`\n\n This type of transition matrix is appropriate when states tend to be\n locally stable, and there is no additional structure between different\n states. This is primarily useful for de-noising frame-wise predictions.\n\n Parameters\n ----------\n n_states : int > 1\n The number of states\n\n prob : float in [0, 1] or iterable, length=n_states\n If a scalar, this is the probability of a self-transition.\n\n If a vector of length `n_states`, `p[i]` is the probability of state `i`'s self-transition.\n\n Returns\n -------\n transition : np.ndarray [shape=(n_states, n_states)]\n The transition matrix\n\n Examples\n --------\n >>> librosa.sequence.transition_loop(3, 0.5)\n array([[0.5 , 0.25, 0.25],\n [0.25, 0.5 , 0.25],\n [0.25, 0.25, 0.5 ]])\n\n >>> librosa.sequence.transition_loop(3, [0.8, 0.5, 0.25])\n array([[0.8 , 0.1 , 0.1 ],\n [0.25 , 0.5 , 0.25 ],\n [0.375, 0.375, 0.25 ]])"}
{"_id": "q_2096", "text": "Construct a localized transition matrix.\n\n The transition matrix will have the following properties:\n\n - `transition[i, j] = 0` if `|i - j| > width`\n - `transition[i, i]` is maximal\n - `transition[i, i - width//2 : i + width//2]` has shape `window`\n\n This type of transition matrix is appropriate for state spaces\n that discretely approximate continuous variables, such as in fundamental\n frequency estimation.\n\n Parameters\n ----------\n n_states : int > 1\n The number of states\n\n width : int >= 1 or iterable\n The maximum number of states to treat as \"local\".\n If iterable, it should have length equal to `n_states`,\n and specify the width independently for each state.\n\n window : str, callable, or window specification\n The window function to determine the shape of the \"local\" distribution.\n\n Any window specification supported by `filters.get_window` will work here.\n\n .. note:: Certain windows (e.g., 'hann') are identically 0 at the boundaries,\n and so effectively have `width-2` non-zero values. You may have to expand\n `width` to get the desired behavior.\n\n\n wrap : bool\n If `True`, then state locality `|i - j|` is computed modulo `n_states`.\n If `False` (default), then locality is absolute.\n\n See Also\n --------\n filters.get_window\n\n Returns\n -------\n transition : np.ndarray [shape=(n_states, n_states)]\n The transition matrix\n\n Examples\n --------\n\n Triangular distributions with and without wrapping\n\n >>> librosa.sequence.transition_local(5, 3, window='triangle', wrap=False)\n array([[0.667, 0.333, 0. , 0. , 0. ],\n [0.25 , 0.5 , 0.25 , 0. , 0. ],\n [0. , 0.25 , 0.5 , 0.25 , 0. ],\n [0. , 0. , 0.25 , 0.5 , 0.25 ],\n [0. , 0. , 0. , 0.333, 0.667]])\n\n >>> librosa.sequence.transition_local(5, 3, window='triangle', wrap=True)\n array([[0.5 , 0.25, 0. , 0. , 0.25],\n [0.25, 0.5 , 0.25, 0. , 0. ],\n [0. , 0.25, 0.5 , 0.25, 0. ],\n [0. , 0. , 0.25, 0.5 , 0.25],\n [0.25, 0. , 0. , 0.25, 0.5 ]])\n\n Uniform local distributions with variable widths and no wrapping\n\n >>> librosa.sequence.transition_local(5, [1, 2, 3, 3, 1], window='ones', wrap=False)\n array([[1. , 0. , 0. , 0. , 0. ],\n [0.5 , 0.5 , 0. , 0. , 0. ],\n [0. , 0.333, 0.333, 0.333, 0. ],\n [0. , 0. , 0.333, 0.333, 0.333],\n [0. , 0. , 0. , 0. , 1. ]])"}
{"_id": "q_2097", "text": "Basic onset detector. Locate note onset events by picking peaks in an\n onset strength envelope.\n\n The `peak_pick` parameters were chosen by large-scale hyper-parameter\n optimization over the dataset provided by [1]_.\n\n .. [1] https://github.com/CPJKU/onset_db\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n onset_envelope : np.ndarray [shape=(m,)]\n (optional) pre-computed onset strength envelope\n\n hop_length : int > 0 [scalar]\n hop length (in samples)\n\n units : {'frames', 'samples', 'time'}\n The units to encode detected onset events in.\n By default, 'frames' are used.\n\n backtrack : bool\n If `True`, detected onset events are backtracked to the nearest\n preceding minimum of `energy`.\n\n This is primarily useful when using onsets as slice points for segmentation.\n\n energy : np.ndarray [shape=(m,)] (optional)\n An energy function to use for backtracking detected onset events.\n If none is provided, then `onset_envelope` is used.\n\n kwargs : additional keyword arguments\n Additional parameters for peak picking.\n\n See `librosa.util.peak_pick` for details.\n\n\n Returns\n -------\n\n onsets : np.ndarray [shape=(n_onsets,)]\n estimated positions of detected onsets, in whichever units\n are specified. By default, frame indices.\n\n .. note::\n If no onset strength could be detected, onset_detect returns\n an empty list.\n\n\n Raises\n ------\n ParameterError\n if neither `y` nor `onsets` are provided\n\n or if `units` is not one of 'frames', 'samples', or 'time'\n\n See Also\n --------\n onset_strength : compute onset strength per-frame\n onset_backtrack : backtracking onset events\n librosa.util.peak_pick : pick peaks from a time series\n\n\n Examples\n --------\n Get onset times from a signal\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... offset=30, duration=2.0)\n >>> onset_frames = librosa.onset.onset_detect(y=y, sr=sr)\n >>> librosa.frames_to_time(onset_frames, sr=sr)\n array([ 0.07 , 0.395, 0.511, 0.627, 0.766, 0.975,\n 1.207, 1.324, 1.44 , 1.788, 1.881])\n\n Or use a pre-computed onset envelope\n\n >>> o_env = librosa.onset.onset_strength(y, sr=sr)\n >>> times = librosa.frames_to_time(np.arange(len(o_env)), sr=sr)\n >>> onset_frames = librosa.onset.onset_detect(onset_envelope=o_env, sr=sr)\n\n\n >>> import matplotlib.pyplot as plt\n >>> D = np.abs(librosa.stft(y))\n >>> plt.figure()\n >>> ax1 = plt.subplot(2, 1, 1)\n >>> librosa.display.specshow(librosa.amplitude_to_db(D, ref=np.max),\n ... x_axis='time', y_axis='log')\n >>> plt.title('Power spectrogram')\n >>> plt.subplot(2, 1, 2, sharex=ax1)\n >>> plt.plot(times, o_env, label='Onset strength')\n >>> plt.vlines(times[onset_frames], 0, o_env.max(), color='r', alpha=0.9,\n ... linestyle='--', label='Onsets')\n >>> plt.axis('tight')\n >>> plt.legend(frameon=True, framealpha=0.75)"}
{"_id": "q_2098", "text": "Compute a spectral flux onset strength envelope across multiple channels.\n\n Onset strength for channel `i` at time `t` is determined by:\n\n `mean_{f in channels[i]} max(0, S[f, t+1] - S[f, t])`\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time-series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n S : np.ndarray [shape=(d, m)]\n pre-computed (log-power) spectrogram\n\n lag : int > 0\n time lag for computing differences\n\n max_size : int > 0\n size (in frequency bins) of the local max filter.\n set to `1` to disable filtering.\n\n ref : None or np.ndarray [shape=(d, m)]\n An optional pre-computed reference spectrum, of the same shape as `S`.\n If not provided, it will be computed from `S`.\n If provided, it will override any local max filtering governed by `max_size`.\n\n detrend : bool [scalar]\n Filter the onset strength to remove the DC component\n\n center : bool [scalar]\n Shift the onset function by `n_fft / (2 * hop_length)` frames\n\n feature : function\n Function for computing time-series features, e.g., scaled spectrograms.\n By default, uses `librosa.feature.melspectrogram` with `fmax=11025.0`\n\n aggregate : function or False\n Aggregation function to use when combining onsets\n at different frequency bins.\n\n If `False`, then no aggregation is performed.\n\n Default: `np.mean`\n\n channels : list or None\n Array of channel boundaries or slice objects.\n If `None`, then a single channel is generated to span all bands.\n\n kwargs : additional keyword arguments\n Additional parameters to `feature()`, if `S` is not provided.\n\n\n Returns\n -------\n onset_envelope : np.ndarray [shape=(n_channels, m)]\n array containing the onset strength envelope for each specified channel\n\n\n Raises\n ------\n ParameterError\n if neither `(y, sr)` nor `S` are provided\n\n\n See Also\n --------\n onset_strength\n\n Notes\n -----\n This function caches at level 30.\n\n Examples\n --------\n First, load some audio and plot the spectrogram\n\n >>> import matplotlib.pyplot as plt\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... duration=10.0)\n >>> D = np.abs(librosa.stft(y))\n >>> plt.figure()\n >>> plt.subplot(2, 1, 1)\n >>> librosa.display.specshow(librosa.amplitude_to_db(D, ref=np.max),\n ... y_axis='log')\n >>> plt.title('Power spectrogram')\n\n Construct a standard onset function over four sub-bands\n\n >>> onset_subbands = librosa.onset.onset_strength_multi(y=y, sr=sr,\n ... channels=[0, 32, 64, 96, 128])\n >>> plt.subplot(2, 1, 2)\n >>> librosa.display.specshow(onset_subbands, x_axis='time')\n >>> plt.ylabel('Sub-bands')\n >>> plt.title('Sub-band onset strength')"}
{"_id": "q_2099", "text": "Save time steps in CSV format. This can be used to store the output\n of a beat-tracker or segmentation algorithm.\n\n If only `times` are provided, the file will contain each value\n of `times` on a row::\n\n times[0]\\n\n times[1]\\n\n times[2]\\n\n ...\n\n If `annotations` are also provided, the file will contain\n delimiter-separated values::\n\n times[0],annotations[0]\\n\n times[1],annotations[1]\\n\n times[2],annotations[2]\\n\n ...\n\n\n Parameters\n ----------\n path : string\n path to save the output CSV file\n\n times : list-like of floats\n list of frame numbers for beat events\n\n annotations : None or list-like\n optional annotations for each time step\n\n delimiter : str\n character to separate fields\n\n fmt : str\n format-string for rendering time\n\n Raises\n ------\n ParameterError\n if `annotations` is not `None` and length does not\n match `times`\n\n Examples\n --------\n Write beat-tracker time to CSV\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> tempo, beats = librosa.beat.beat_track(y, sr=sr, units='time')\n >>> librosa.output.times_csv('beat_times.csv', beats)"}
{"_id": "q_2100", "text": "Output a time series as a .wav file\n\n Note: only mono or stereo, floating-point data is supported.\n For more advanced and flexible output options, refer to\n `soundfile`.\n\n Parameters\n ----------\n path : str\n path to save the output wav file\n\n y : np.ndarray [shape=(n,) or (2,n), dtype=np.float]\n audio time series (mono or stereo).\n\n Note that only floating-point values are supported.\n\n sr : int > 0 [scalar]\n sampling rate of `y`\n\n norm : boolean [scalar]\n enable amplitude normalization.\n For floating point `y`, scale the data to the range [-1, +1].\n\n Examples\n --------\n Trim a signal to 5 seconds and save it back\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... duration=5.0)\n >>> librosa.output.write_wav('file_trim_5s.wav', y, sr)\n\n See Also\n --------\n soundfile.write"}
{"_id": "q_2101", "text": "Get a default colormap from the given data.\n\n If the data is boolean, use a black and white colormap.\n\n If the data has both positive and negative values,\n use a diverging colormap.\n\n Otherwise, use a sequential colormap.\n\n Parameters\n ----------\n data : np.ndarray\n Input data\n\n robust : bool\n If True, discard the top and bottom 2% of data when calculating\n range.\n\n cmap_seq : str\n The sequential colormap name\n\n cmap_bool : str\n The boolean colormap name\n\n cmap_div : str\n The diverging colormap name\n\n Returns\n -------\n cmap : matplotlib.colors.Colormap\n The colormap to use for `data`\n\n See Also\n --------\n matplotlib.pyplot.colormaps"}
{"_id": "q_2102", "text": "Plot the amplitude envelope of a waveform.\n\n If `y` is monophonic, a filled curve is drawn between `[-abs(y), abs(y)]`.\n\n If `y` is stereo, the curve is drawn between `[-abs(y[1]), abs(y[0])]`,\n so that the left and right channels are drawn above and below the axis,\n respectively.\n\n Long signals (`duration >= max_points`) are down-sampled to at\n most `max_sr` before plotting.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,) or (2,n)]\n audio time series (mono or stereo)\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n max_points : positive number or None\n Maximum number of time-points to plot: if `max_points` exceeds\n the duration of `y`, then `y` is downsampled.\n\n If `None`, no downsampling is performed.\n\n x_axis : str {'time', 'off', 'none'} or None\n If 'time', the x-axis is given time tick-marks.\n\n ax : matplotlib.axes.Axes or None\n Axes to plot on instead of the default `plt.gca()`.\n\n offset : float\n Horizontal offset (in seconds) to start the waveform plot\n\n max_sr : number > 0 [scalar]\n Maximum sampling rate for the visualization\n\n kwargs\n Additional keyword arguments to `matplotlib.pyplot.fill_between`\n\n Returns\n -------\n pc : matplotlib.collections.PolyCollection\n The PolyCollection created by `fill_between`.\n\n See also\n --------\n librosa.core.resample\n matplotlib.pyplot.fill_between\n\n\n Examples\n --------\n Plot a monophonic waveform\n\n >>> import matplotlib.pyplot as plt\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=10)\n >>> plt.figure()\n >>> plt.subplot(3, 1, 1)\n >>> librosa.display.waveplot(y, sr=sr)\n >>> plt.title('Monophonic')\n\n Or a stereo waveform\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... mono=False, duration=10)\n >>> plt.subplot(3, 1, 2)\n >>> librosa.display.waveplot(y, sr=sr)\n >>> plt.title('Stereo')\n\n Or harmonic and percussive components with transparency\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=10)\n >>> y_harm, y_perc = librosa.effects.hpss(y)\n >>> plt.subplot(3, 1, 3)\n >>> librosa.display.waveplot(y_harm, sr=sr, alpha=0.25)\n >>> librosa.display.waveplot(y_perc, sr=sr, color='r', alpha=0.5)\n >>> plt.title('Harmonic + Percussive')\n >>> plt.tight_layout()"}
{"_id": "q_2103", "text": "Helper to set the current image in pyplot mode.\n\n If the provided `ax` is not `None`, then we assume that the user is using the object API.\n In this case, the pyplot current image is not set."}
{"_id": "q_2104", "text": "Compute axis coordinates"}
{"_id": "q_2105", "text": "Check if \"axes\" is an instance of an axis object. If not, use `gca`."}
{"_id": "q_2106", "text": "Get CQT bin frequencies"}
{"_id": "q_2107", "text": "Get chroma bin numbers"}
{"_id": "q_2108", "text": "Get time coordinates from frames"}
{"_id": "q_2109", "text": "Estimate the tuning of an audio time series or spectrogram input.\n\n Parameters\n ----------\n y: np.ndarray [shape=(n,)] or None\n audio signal\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S: np.ndarray [shape=(d, t)] or None\n magnitude or power spectrogram\n\n n_fft : int > 0 [scalar] or None\n number of FFT bins to use, if `y` is provided.\n\n resolution : float in `(0, 1)`\n Resolution of the tuning as a fraction of a bin.\n 0.01 corresponds to measurements in cents.\n\n bins_per_octave : int > 0 [scalar]\n How many frequency bins per octave\n\n kwargs : additional keyword arguments\n Additional arguments passed to `piptrack`\n\n Returns\n -------\n tuning: float in `[-0.5, 0.5)`\n estimated tuning deviation (fractions of a bin)\n\n See Also\n --------\n piptrack\n Pitch tracking by parabolic interpolation\n\n Examples\n --------\n >>> # With time-series input\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.estimate_tuning(y=y, sr=sr)\n 0.089999999999999969\n\n >>> # In tenths of a cent\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.estimate_tuning(y=y, sr=sr, resolution=1e-3)\n 0.093999999999999972\n\n >>> # Using spectrogram input\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> S = np.abs(librosa.stft(y))\n >>> librosa.estimate_tuning(S=S, sr=sr)\n 0.089999999999999969\n\n >>> # Using pass-through arguments to `librosa.piptrack`\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.estimate_tuning(y=y, sr=sr, n_fft=8192,\n ... fmax=librosa.note_to_hz('G#9'))\n 0.070000000000000062"}
{"_id": "q_2110", "text": "Pitch tracking on thresholded parabolically-interpolated STFT.\n\n This implementation uses the parabolic interpolation method described by [1]_.\n\n .. [1] https://ccrma.stanford.edu/~jos/sasp/Sinusoidal_Peak_Interpolation.html\n\n Parameters\n ----------\n y: np.ndarray [shape=(n,)] or None\n audio signal\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S: np.ndarray [shape=(d, t)] or None\n magnitude or power spectrogram\n\n n_fft : int > 0 [scalar] or None\n number of FFT bins to use, if `y` is provided.\n\n hop_length : int > 0 [scalar] or None\n number of samples to hop\n\n threshold : float in `(0, 1)`\n A bin in spectrum `S` is considered a pitch when it is greater than\n `threshold*ref(S)`.\n\n By default, `ref(S)` is taken to be `max(S, axis=0)` (the maximum value in\n each column).\n\n fmin : float > 0 [scalar]\n lower frequency cutoff.\n\n fmax : float > 0 [scalar]\n upper frequency cutoff.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n ref : scalar or callable [default=np.max]\n If scalar, the reference value against which `S` is compared for determining\n pitches.\n\n If callable, the reference value is computed as `ref(S, axis=0)`.\n\n .. note::\n One of `S` or `y` must be provided.\n\n If `S` is not given, it is computed from `y` using\n the default parameters of `librosa.core.stft`.\n\n Returns\n -------\n pitches : np.ndarray [shape=(d, t)]\n magnitudes : np.ndarray [shape=(d,t)]\n Where `d` is the subset of FFT bins within `fmin` and `fmax`.\n\n `pitches[f, t]` contains instantaneous frequency at bin\n `f`, time `t`\n\n `magnitudes[f, t]` contains the corresponding magnitudes.\n\n Both `pitches` and `magnitudes` take value 0 at bins\n of non-maximal magnitude.\n\n Notes\n -----\n This function caches at level 30.\n\n Examples\n --------\n Computing pitches from a waveform input\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> pitches, magnitudes = librosa.piptrack(y=y, sr=sr)\n\n Or from a spectrogram input\n\n >>> S = np.abs(librosa.stft(y))\n >>> pitches, magnitudes = librosa.piptrack(S=S, sr=sr)\n\n Or with an alternate reference value for pitch detection, where\n values above the mean spectral energy in each frame are counted as pitches\n\n >>> pitches, magnitudes = librosa.piptrack(S=S, sr=sr, threshold=1,\n ... ref=np.mean)"}
{"_id": "q_2111", "text": "Decompose an audio time series into harmonic and percussive components.\n\n This function automates the STFT->HPSS->ISTFT pipeline, and ensures that\n the output waveforms have equal length to the input waveform `y`.\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n\n Returns\n -------\n y_harmonic : np.ndarray [shape=(n,)]\n audio time series of the harmonic elements\n\n y_percussive : np.ndarray [shape=(n,)]\n audio time series of the percussive elements\n\n See Also\n --------\n harmonic : Extract only the harmonic component\n percussive : Extract only the percussive component\n librosa.decompose.hpss : HPSS on spectrograms\n\n\n Examples\n --------\n >>> # Extract harmonic and percussive components\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> y_harmonic, y_percussive = librosa.effects.hpss(y)\n\n >>> # Get a more isolated percussive component by widening its margin\n >>> y_harmonic, y_percussive = librosa.effects.hpss(y, margin=(1.0,5.0))"}
{"_id": "q_2112", "text": "Extract percussive elements from an audio time-series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n Returns\n -------\n y_percussive : np.ndarray [shape=(n,)]\n audio time series of just the percussive portion\n\n See Also\n --------\n hpss : Separate harmonic and percussive components\n harmonic : Extract only the harmonic component\n librosa.decompose.hpss : HPSS for spectrograms\n\n Examples\n --------\n >>> # Extract percussive component\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> y_percussive = librosa.effects.percussive(y)\n\n >>> # Use a margin > 1.0 for greater percussive separation\n >>> y_percussive = librosa.effects.percussive(y, margin=3.0)"}
{"_id": "q_2113", "text": "Time-stretch an audio series by a fixed rate.\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n rate : float > 0 [scalar]\n Stretch factor. If `rate > 1`, then the signal is sped up.\n\n If `rate < 1`, then the signal is slowed down.\n\n Returns\n -------\n y_stretch : np.ndarray [shape=(rate * n,)]\n audio time series stretched by the specified rate\n\n See Also\n --------\n pitch_shift : pitch shifting\n librosa.core.phase_vocoder : spectrogram phase vocoder\n\n\n Examples\n --------\n Compress to be twice as fast\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> y_fast = librosa.effects.time_stretch(y, 2.0)\n\n Or half the original speed\n\n >>> y_slow = librosa.effects.time_stretch(y, 0.5)"}
{"_id": "q_2114", "text": "Pitch-shift the waveform by `n_steps` half-steps.\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time-series\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n n_steps : float [scalar]\n how many (fractional) half-steps to shift `y`\n\n bins_per_octave : float > 0 [scalar]\n how many steps per octave\n\n res_type : string\n Resample type.\n Possible options: 'kaiser_best', 'kaiser_fast', and 'scipy', 'polyphase',\n 'fft'.\n By default, 'kaiser_best' is used.\n \n See `core.resample` for more information.\n\n Returns\n -------\n y_shift : np.ndarray [shape=(n,)]\n The pitch-shifted audio time-series\n\n\n See Also\n --------\n time_stretch : time stretching\n librosa.core.phase_vocoder : spectrogram phase vocoder\n\n\n Examples\n --------\n Shift up by a major third (four half-steps)\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> y_third = librosa.effects.pitch_shift(y, sr, n_steps=4)\n\n Shift down by a tritone (six half-steps)\n\n >>> y_tritone = librosa.effects.pitch_shift(y, sr, n_steps=-6)\n\n Shift up by 3 quarter-tones\n\n >>> y_three_qt = librosa.effects.pitch_shift(y, sr, n_steps=3,\n ... bins_per_octave=24)"}
{"_id": "q_2115", "text": "Split an audio signal into non-silent intervals.\n\n Parameters\n ----------\n y : np.ndarray, shape=(n,) or (2, n)\n An audio signal\n\n top_db : number > 0\n The threshold (in decibels) below reference to consider as\n silence\n\n ref : number or callable\n The reference power. By default, it uses `np.max` and compares\n to the peak power in the signal.\n\n frame_length : int > 0\n The number of samples per analysis frame\n\n hop_length : int > 0\n The number of samples between analysis frames\n\n Returns\n -------\n intervals : np.ndarray, shape=(m, 2)\n `intervals[i] == (start_i, end_i)` are the start and end time\n (in samples) of non-silent interval `i`."}
{"_id": "q_2116", "text": "Phase vocoder. Given an STFT matrix D, speed up by a factor of `rate`\n\n Based on the implementation provided by [1]_.\n\n .. [1] Ellis, D. P. W. \"A phase vocoder in Matlab.\"\n Columbia University, 2002.\n http://www.ee.columbia.edu/~dpwe/resources/matlab/pvoc/\n\n Examples\n --------\n >>> # Play at double speed\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> D = librosa.stft(y, n_fft=2048, hop_length=512)\n >>> D_fast = librosa.phase_vocoder(D, 2.0, hop_length=512)\n >>> y_fast = librosa.istft(D_fast, hop_length=512)\n\n >>> # Or play at 1/3 speed\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> D = librosa.stft(y, n_fft=2048, hop_length=512)\n >>> D_slow = librosa.phase_vocoder(D, 1./3, hop_length=512)\n >>> y_slow = librosa.istft(D_slow, hop_length=512)\n\n Parameters\n ----------\n D : np.ndarray [shape=(d, t), dtype=complex]\n STFT matrix\n\n rate : float > 0 [scalar]\n Speed-up factor: `rate > 1` is faster, `rate < 1` is slower.\n\n hop_length : int > 0 [scalar] or None\n The number of samples between successive columns of `D`.\n\n If None, defaults to `n_fft/4 = (D.shape[0]-1)/2`\n\n Returns\n -------\n D_stretched : np.ndarray [shape=(d, t / rate), dtype=complex]\n time-stretched STFT"}
{"_id": "q_2117", "text": "Convert an amplitude spectrogram to dB-scaled spectrogram.\n\n This is equivalent to ``power_to_db(S**2)``, but is provided for convenience.\n\n Parameters\n ----------\n S : np.ndarray\n input amplitude\n\n ref : scalar or callable\n If scalar, the amplitude `abs(S)` is scaled relative to `ref`:\n `20 * log10(S / ref)`.\n Zeros in the output correspond to positions where `S == ref`.\n\n If callable, the reference value is computed as `ref(S)`.\n\n amin : float > 0 [scalar]\n minimum threshold for `S` and `ref`\n\n top_db : float >= 0 [scalar]\n threshold the output at `top_db` below the peak:\n ``max(20 * log10(S)) - top_db``\n\n\n Returns\n -------\n S_db : np.ndarray\n ``S`` measured in dB\n\n See Also\n --------\n power_to_db, db_to_amplitude\n\n Notes\n -----\n This function caches at level 30."}
{"_id": "q_2118", "text": "Helper function to retrieve a magnitude spectrogram.\n\n This is primarily used in feature extraction functions that can operate on\n either audio time-series or spectrogram input.\n\n\n Parameters\n ----------\n y : None or np.ndarray [ndim=1]\n If provided, an audio time series\n\n S : None or np.ndarray\n Spectrogram input, optional\n\n n_fft : int > 0\n STFT window size\n\n hop_length : int > 0\n STFT hop length\n\n power : float > 0\n Exponent for the magnitude spectrogram,\n e.g., 1 for energy, 2 for power, etc.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n\n Returns\n -------\n S_out : np.ndarray [dtype=np.float32]\n - If `S` is provided as input, then `S_out == S`\n - Else, `S_out = |stft(y, ...)|**power`\n\n n_fft : int > 0\n - If `S` is provided, then `n_fft` is inferred from `S`\n - Else, copied from input"}
{"_id": "q_2119", "text": "HPSS beat tracking\n\n :parameters:\n - input_file : str\n Path to input audio file (wav, mp3, m4a, flac, etc.)\n\n - output_file : str\n Path to save beat event timestamps as a CSV file"}
{"_id": "q_2120", "text": "Filtering by nearest-neighbors.\n\n Each data point (e.g., spectrogram column) is replaced\n by aggregating its nearest neighbors in feature space.\n\n This can be useful for de-noising a spectrogram or feature matrix.\n\n The non-local means method [1]_ can be recovered by providing a\n weighted recurrence matrix as input and specifying `aggregate=np.average`.\n\n Similarly, setting `aggregate=np.median` produces sparse de-noising\n as in REPET-SIM [2]_.\n\n .. [1] Buades, A., Coll, B., & Morel, J. M.\n (2005, June). A non-local algorithm for image denoising.\n In Computer Vision and Pattern Recognition, 2005.\n CVPR 2005. IEEE Computer Society Conference on (Vol. 2, pp. 60-65). IEEE.\n\n .. [2] Rafii, Z., & Pardo, B.\n (2012, October). \"Music/Voice Separation Using the Similarity Matrix.\"\n International Society for Music Information Retrieval Conference, 2012.\n\n Parameters\n ----------\n S : np.ndarray\n The input data (spectrogram) to filter\n\n rec : (optional) scipy.sparse.spmatrix or np.ndarray\n Optionally, a pre-computed nearest-neighbor matrix\n as provided by `librosa.segment.recurrence_matrix`\n\n aggregate : function\n aggregation function (default: `np.mean`)\n\n If `aggregate=np.average`, then a weighted average is\n computed according to the (per-row) weights in `rec`.\n\n For all other aggregation functions, all neighbors\n are treated equally.\n\n\n axis : int\n The axis along which to filter (by default, columns)\n\n kwargs\n Additional keyword arguments provided to\n `librosa.segment.recurrence_matrix` if `rec` is not provided\n\n Returns\n -------\n S_filtered : np.ndarray\n The filtered data\n\n Raises\n ------\n ParameterError\n if `rec` is provided and its shape is incompatible with `S`.\n\n See also\n --------\n decompose\n hpss\n librosa.segment.recurrence_matrix\n\n\n Notes\n -----\n This function caches at level 30.\n\n\n Examples\n --------\n\n De-noise a chromagram by non-local median filtering.\n By default this would use euclidean distance to select neighbors,\n but this can be overridden directly by setting the `metric` parameter.\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... offset=30, duration=10)\n >>> chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\n >>> chroma_med = librosa.decompose.nn_filter(chroma,\n ... aggregate=np.median,\n ... metric='cosine')\n\n To use non-local means, provide an affinity matrix and `aggregate=np.average`.\n\n >>> rec = librosa.segment.recurrence_matrix(chroma, mode='affinity',\n ... metric='cosine', sparse=True)\n >>> chroma_nlm = librosa.decompose.nn_filter(chroma, rec=rec,\n ... aggregate=np.average)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(10, 8))\n >>> plt.subplot(5, 1, 1)\n >>> librosa.display.specshow(chroma, y_axis='chroma')\n >>> plt.colorbar()\n >>> plt.title('Unfiltered')\n >>> plt.subplot(5, 1, 2)\n >>> librosa.display.specshow(chroma_med, y_axis='chroma')\n >>> plt.colorbar()\n >>> plt.title('Median-filtered')\n >>> plt.subplot(5, 1, 3)\n >>> librosa.display.specshow(chroma_nlm, y_axis='chroma')\n >>> plt.colorbar()\n >>> plt.title('Non-local means')\n >>> plt.subplot(5, 1, 4)\n >>> librosa.display.specshow(chroma - chroma_med,\n ... y_axis='chroma')\n >>> plt.colorbar()\n >>> plt.title('Original - median')\n >>> plt.subplot(5, 1, 5)\n >>> librosa.display.specshow(chroma - chroma_nlm,\n ... y_axis='chroma', x_axis='time')\n >>> plt.colorbar()\n >>> plt.title('Original - NLM')\n >>> plt.tight_layout()"}
{"_id": "q_2121", "text": "Nearest-neighbor filter helper function.\n\n This is an internal function, not for use outside of the decompose module.\n\n It applies the nearest-neighbor filter to S, assuming that the first index\n corresponds to observations.\n\n Parameters\n ----------\n R_data, R_indices, R_ptr : np.ndarrays\n The `data`, `indices`, and `indptr` of a scipy.sparse matrix\n\n S : np.ndarray\n The observation data to filter\n\n aggregate : callable\n The aggregation operator\n\n\n Returns\n -------\n S_out : np.ndarray like S\n The filtered data array"}
{"_id": "q_2122", "text": "Create a Filterbank matrix to combine FFT bins into Mel-frequency bins\n\n Parameters\n ----------\n sr : number > 0 [scalar]\n sampling rate of the incoming signal\n\n n_fft : int > 0 [scalar]\n number of FFT components\n\n n_mels : int > 0 [scalar]\n number of Mel bands to generate\n\n fmin : float >= 0 [scalar]\n lowest frequency (in Hz)\n\n fmax : float >= 0 [scalar]\n highest frequency (in Hz).\n If `None`, use `fmax = sr / 2.0`\n\n htk : bool [scalar]\n use HTK formula instead of Slaney\n\n norm : {None, 1, np.inf} [scalar]\n if 1, divide the triangular mel weights by the width of the mel band\n (area normalization). Otherwise, leave all the triangles aiming for\n a peak value of 1.0\n\n dtype : np.dtype\n The data type of the output basis.\n By default, uses 32-bit (single-precision) floating point.\n\n Returns\n -------\n M : np.ndarray [shape=(n_mels, 1 + n_fft/2)]\n Mel transform matrix\n\n Notes\n -----\n This function caches at level 10.\n\n Examples\n --------\n >>> melfb = librosa.filters.mel(22050, 2048)\n >>> melfb\n array([[ 0. , 0.016, ..., 0. , 0. ],\n [ 0. , 0. , ..., 0. , 0. ],\n ...,\n [ 0. , 0. , ..., 0. , 0. ],\n [ 0. , 0. , ..., 0. , 0. ]])\n\n\n Clip the maximum frequency to 8KHz\n\n >>> librosa.filters.mel(22050, 2048, fmax=8000)\n array([[ 0. , 0.02, ..., 0. , 0. ],\n [ 0. , 0. , ..., 0. , 0. ],\n ...,\n [ 0. , 0. , ..., 0. , 0. ],\n [ 0. , 0. , ..., 0. , 0. ]])\n\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> librosa.display.specshow(melfb, x_axis='linear')\n >>> plt.ylabel('Mel filter')\n >>> plt.title('Mel filter bank')\n >>> plt.colorbar()\n >>> plt.tight_layout()"}
{"_id": "q_2123", "text": "r'''Construct a constant-Q basis.\n\n This uses the filter bank described by [1]_.\n\n .. [1] McVicar, Matthew.\n \"A machine learning approach to automatic chord extraction.\"\n Dissertation, University of Bristol. 2013.\n\n\n Parameters\n ----------\n sr : number > 0 [scalar]\n Audio sampling rate\n\n fmin : float > 0 [scalar]\n Minimum frequency bin. Defaults to `C1 ~= 32.70`\n\n n_bins : int > 0 [scalar]\n Number of frequencies. Defaults to 7 octaves (84 bins).\n\n bins_per_octave : int > 0 [scalar]\n Number of bins per octave\n\n tuning : float in `[-0.5, +0.5)` [scalar]\n Tuning deviation from A440 in fractions of a bin\n\n window : string, tuple, number, or function\n Windowing function to apply to filters.\n\n filter_scale : float > 0 [scalar]\n Scale of filter windows.\n Small values (<1) use shorter windows for higher temporal resolution.\n\n pad_fft : boolean\n Center-pad all filters up to the nearest integral power of 2.\n\n By default, padding is done with zeros, but this can be overridden\n by setting the `mode=` field in *kwargs*.\n\n norm : {inf, -inf, 0, float > 0}\n Type of norm to use for basis function normalization.\n See librosa.util.normalize\n\n dtype : np.dtype\n The data type of the output basis.\n By default, uses 64-bit (single precision) complex floating point.\n\n kwargs : additional keyword arguments\n Arguments to `np.pad()` when `pad==True`.\n\n Returns\n -------\n filters : np.ndarray, `len(filters) == n_bins`\n `filters[i]` is `i`\\ th time-domain CQT basis filter\n\n lengths : np.ndarray, `len(lengths) == n_bins`\n The (fractional) length of each filter\n\n Notes\n -----\n This function caches at level 10.\n\n See Also\n --------\n constant_q_lengths\n librosa.core.cqt\n librosa.util.normalize\n\n\n Examples\n --------\n Use a shorter window for each filter\n\n >>> basis, lengths = librosa.filters.constant_q(22050, filter_scale=0.5)\n\n Plot one octave of filters in time and frequency\n\n >>> import matplotlib.pyplot as plt\n >>> basis, lengths = librosa.filters.constant_q(22050)\n >>> plt.figure(figsize=(10, 6))\n >>> plt.subplot(2, 1, 1)\n >>> notes = librosa.midi_to_note(np.arange(24, 24 + len(basis)))\n >>> for i, (f, n) in enumerate(zip(basis, notes[:12])):\n ... f_scale = librosa.util.normalize(f) / 2\n ... plt.plot(i + f_scale.real)\n ... plt.plot(i + f_scale.imag, linestyle=':')\n >>> plt.axis('tight')\n >>> plt.yticks(np.arange(len(notes[:12])), notes[:12])\n >>> plt.ylabel('CQ filters')\n >>> plt.title('CQ filters (one octave, time domain)')\n >>> plt.xlabel('Time (samples at 22050 Hz)')\n >>> plt.legend(['Real', 'Imaginary'], frameon=True, framealpha=0.8)\n >>> plt.subplot(2, 1, 2)\n >>> F = np.abs(np.fft.fftn(basis, axes=[-1]))\n >>> # Keep only the positive frequencies\n >>> F = F[:, :(1 + F.shape[1] // 2)]\n >>> librosa.display.specshow(F, x_axis='linear')\n >>> plt.yticks(np.arange(len(notes))[::12], notes[::12])\n >>> plt.ylabel('CQ filters')\n >>> plt.title('CQ filter magnitudes (frequency domain)')\n >>> plt.tight_layout()"}
{"_id": "q_2124", "text": "Convert a Constant-Q basis to Chroma.\n\n\n Parameters\n ----------\n n_input : int > 0 [scalar]\n Number of input components (CQT bins)\n\n bins_per_octave : int > 0 [scalar]\n How many bins per octave in the CQT\n\n n_chroma : int > 0 [scalar]\n Number of output bins (per octave) in the chroma\n\n fmin : None or float > 0\n Center frequency of the first constant-Q channel.\n Default: 'C1' ~= 32.7 Hz\n\n window : None or np.ndarray\n If provided, the cq_to_chroma filter bank will be\n convolved with `window`.\n\n base_c : bool\n If True, the first chroma bin will start at 'C'\n If False, the first chroma bin will start at 'A'\n\n dtype : np.dtype\n The data type of the output basis.\n By default, uses 32-bit (single-precision) floating point.\n\n\n Returns\n -------\n cq_to_chroma : np.ndarray [shape=(n_chroma, n_input)]\n Transformation matrix: `Chroma = np.dot(cq_to_chroma, CQT)`\n\n Raises\n ------\n ParameterError\n If `n_input` is not an integer multiple of `n_chroma`\n\n Notes\n -----\n This function caches at level 10.\n\n Examples\n --------\n Get a CQT, and wrap bins to chroma\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> CQT = np.abs(librosa.cqt(y, sr=sr))\n >>> chroma_map = librosa.filters.cq_to_chroma(CQT.shape[0])\n >>> chromagram = chroma_map.dot(CQT)\n >>> # Max-normalize each time step\n >>> chromagram = librosa.util.normalize(chromagram, axis=0)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.subplot(3, 1, 1)\n >>> librosa.display.specshow(librosa.amplitude_to_db(CQT,\n ... ref=np.max),\n ... y_axis='cqt_note')\n >>> plt.title('CQT Power')\n >>> plt.colorbar()\n >>> plt.subplot(3, 1, 2)\n >>> librosa.display.specshow(chromagram, y_axis='chroma')\n >>> plt.title('Chroma (wrapped CQT)')\n >>> plt.colorbar()\n >>> plt.subplot(3, 1, 3)\n >>> chroma = librosa.feature.chroma_stft(y=y, sr=sr)\n >>> librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')\n >>> plt.title('librosa.feature.chroma_stft')\n >>> plt.colorbar()\n >>> plt.tight_layout()"}
{"_id": "q_2125", "text": "Get the equivalent noise bandwidth of a window function.\n\n\n Parameters\n ----------\n window : callable or string\n A window function, or the name of a window function.\n Examples:\n - scipy.signal.hann\n - 'boxcar'\n\n n : int > 0\n The number of coefficients to use in estimating the\n window bandwidth\n\n Returns\n -------\n bandwidth : float\n The equivalent noise bandwidth (in FFT bins) of the\n given window function\n\n Notes\n -----\n This function caches at level 10.\n\n See Also\n --------\n get_window"}
{"_id": "q_2126", "text": "Compute a window function.\n\n This is a wrapper for `scipy.signal.get_window` that additionally\n supports callable or pre-computed windows.\n\n Parameters\n ----------\n window : string, tuple, number, callable, or list-like\n The window specification:\n\n - If string, it's the name of the window function (e.g., `'hann'`)\n - If tuple, it's the name of the window function and any parameters\n (e.g., `('kaiser', 4.0)`)\n - If numeric, it is treated as the beta parameter of the `'kaiser'`\n window, as in `scipy.signal.get_window`.\n - If callable, it's a function that accepts one integer argument\n (the window length)\n - If list-like, it's a pre-computed window of the correct length `Nx`\n\n Nx : int > 0\n The length of the window\n\n fftbins : bool, optional\n If True (default), create a periodic window for use with FFT\n If False, create a symmetric window for filter design applications.\n\n Returns\n -------\n get_window : np.ndarray\n A window of length `Nx` and type `window`\n\n See Also\n --------\n scipy.signal.get_window\n\n Notes\n -----\n This function caches at level 10.\n\n Raises\n ------\n ParameterError\n If `window` is supplied as a vector of length != `n_fft`,\n or is otherwise mis-specified."}
{"_id": "q_2127", "text": "r'''Helper function for generating center frequency and sample rate pairs.\n\n This function will return center frequencies and corresponding sample rates\n to obtain similar pitch filterbank settings as described in [1]_.\n Instead of starting with MIDI pitch `A0`, we start with `C0`.\n\n .. [1] M\u00fcller, Meinard.\n \"Information Retrieval for Music and Motion.\"\n Springer Verlag. 2007.\n\n\n Parameters\n ----------\n tuning : float in `[-0.5, +0.5)` [scalar]\n Tuning deviation from A440, measured as a fraction of the equally\n tempered semitone (1/12 of an octave).\n\n Returns\n -------\n center_freqs : np.ndarray [shape=(n,), dtype=float]\n Center frequencies of the filter kernels.\n Also defines the number of filters in the filterbank.\n\n sample_rates : np.ndarray [shape=(n,), dtype=float]\n Sample rate for each filter, used for multirate filterbank.\n\n Notes\n -----\n This function caches at level 10.\n\n\n See Also\n --------\n librosa.filters.semitone_filterbank\n librosa.filters._multirate_fb"}
{"_id": "q_2128", "text": "Helper function for window sum-square calculation."}
{"_id": "q_2129", "text": "Compute the sum-square envelope of a window function at a given hop length.\n\n This is used to estimate modulation effects induced by windowing observations\n in short-time fourier transforms.\n\n Parameters\n ----------\n window : string, tuple, number, callable, or list-like\n Window specification, as in `get_window`\n\n n_frames : int > 0\n The number of analysis frames\n\n hop_length : int > 0\n The number of samples to advance between frames\n\n win_length : [optional]\n The length of the window function. By default, this matches `n_fft`.\n\n n_fft : int > 0\n The length of each analysis frame.\n\n dtype : np.dtype\n The data type of the output\n\n Returns\n -------\n wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`\n The sum-squared envelope of the window function\n\n Examples\n --------\n For a fixed frame length (2048), compare modulation effects for a Hann window\n at different hop lengths:\n\n >>> n_frames = 50\n >>> wss_256 = librosa.filters.window_sumsquare('hann', n_frames, hop_length=256)\n >>> wss_512 = librosa.filters.window_sumsquare('hann', n_frames, hop_length=512)\n >>> wss_1024 = librosa.filters.window_sumsquare('hann', n_frames, hop_length=1024)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> plt.subplot(3,1,1)\n >>> plt.plot(wss_256)\n >>> plt.title('hop_length=256')\n >>> plt.subplot(3,1,2)\n >>> plt.plot(wss_512)\n >>> plt.title('hop_length=512')\n >>> plt.subplot(3,1,3)\n >>> plt.plot(wss_1024)\n >>> plt.title('hop_length=1024')\n >>> plt.tight_layout()"}
{"_id": "q_2130", "text": "Compute the spectral centroid.\n\n Each frame of a magnitude spectrogram is normalized and treated as a\n distribution over frequency bins, from which the mean (centroid) is\n extracted per frame.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)] or None\n (optional) spectrogram magnitude\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n hop_length : int > 0 [scalar]\n hop length for STFT. See `librosa.core.stft` for details.\n\n freq : None or np.ndarray [shape=(d,) or shape=(d, t)]\n Center frequencies for spectrogram bins.\n If `None`, then FFT bin center frequencies are used.\n Otherwise, it can be a single array of `d` center frequencies,\n or a matrix of center frequencies as constructed by\n `librosa.core.ifgram`\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n\n Returns\n -------\n centroid : np.ndarray [shape=(1, t)]\n centroid frequencies\n\n See Also\n --------\n librosa.core.stft\n Short-time Fourier Transform\n\n librosa.core.ifgram\n Instantaneous-frequency spectrogram\n\n Examples\n --------\n From time-series input:\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> cent = librosa.feature.spectral_centroid(y=y, sr=sr)\n >>> cent\n array([[ 4382.894, 626.588, ..., 5037.07 , 5413.398]])\n\n From spectrogram input:\n\n >>> S, phase = librosa.magphase(librosa.stft(y=y))\n >>> librosa.feature.spectral_centroid(S=S)\n array([[ 4382.894, 626.588, ..., 5037.07 , 5413.398]])\n\n Using variable bin center frequencies:\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> if_gram, D = librosa.ifgram(y)\n >>> librosa.feature.spectral_centroid(S=np.abs(D), freq=if_gram)\n array([[ 4420.719, 625.769, ..., 5011.86 , 5221.492]])\n\n Plot the result\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> plt.subplot(2, 1, 1)\n >>> plt.semilogy(cent.T, label='Spectral centroid')\n >>> plt.ylabel('Hz')\n >>> plt.xticks([])\n >>> plt.xlim([0, cent.shape[-1]])\n >>> plt.legend()\n >>> plt.subplot(2, 1, 2)\n >>> librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max),\n ... y_axis='log', x_axis='time')\n >>> plt.title('log Power spectrogram')\n >>> plt.tight_layout()"}
{"_id": "q_2131", "text": "Compute roll-off frequency.\n\n The roll-off frequency is defined for each frame as the center frequency\n for a spectrogram bin such that at least roll_percent (0.85 by default)\n of the energy of the spectrum in this frame is contained in this bin and\n the bins below. This can be used to, e.g., approximate the maximum (or\n minimum) frequency by setting roll_percent to a value close to 1 (or 0).\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)] or None\n (optional) spectrogram magnitude\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n hop_length : int > 0 [scalar]\n hop length for STFT. See `librosa.core.stft` for details.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n freq : None or np.ndarray [shape=(d,) or shape=(d, t)]\n Center frequencies for spectrogram bins.\n If `None`, then FFT bin center frequencies are used.\n Otherwise, it can be a single array of `d` center frequencies,\n\n .. note:: `freq` is assumed to be sorted in increasing order\n\n roll_percent : float [0 < roll_percent < 1]\n Roll-off percentage.\n\n Returns\n -------\n rolloff : np.ndarray [shape=(1, t)]\n roll-off frequency for each frame\n\n\n Examples\n --------\n From time-series input\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> # Approximate maximum frequencies with roll_percent=0.85 (default)\n >>> rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)\n >>> rolloff\n array([[ 8376.416, 968.994, ..., 8925.513, 9108.545]])\n >>> # Approximate minimum frequencies with roll_percent=0.1\n >>> rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.1)\n >>> rolloff\n array([[ 75.36621094, 64.59960938, 64.59960938, ..., 75.36621094,\n 75.36621094, 64.59960938]])\n\n\n From spectrogram input\n\n >>> S, phase = librosa.magphase(librosa.stft(y))\n >>> librosa.feature.spectral_rolloff(S=S, sr=sr)\n array([[ 8376.416, 968.994, ..., 8925.513, 9108.545]])\n\n >>> # With a higher roll percentage:\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.95)\n array([[ 10012.939, 3003.882, ..., 10034.473, 10077.539]])\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> plt.subplot(2, 1, 1)\n >>> plt.semilogy(rolloff.T, label='Roll-off frequency')\n >>> plt.ylabel('Hz')\n >>> plt.xticks([])\n >>> plt.xlim([0, rolloff.shape[-1]])\n >>> plt.legend()\n >>> plt.subplot(2, 1, 2)\n >>> librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max),\n ... y_axis='log', x_axis='time')\n >>> plt.title('log Power spectrogram')\n >>> plt.tight_layout()"}
{"_id": "q_2132", "text": "Compute spectral flatness\n\n Spectral flatness (or tonality coefficient) is a measure to\n quantify how noise-like a sound is, as opposed to being\n tone-like [1]_. A high spectral flatness (closer to 1.0)\n indicates the spectrum is similar to white noise.\n It is often converted to decibels.\n\n .. [1] Dubnov, Shlomo \"Generalization of spectral flatness\n measure for non-gaussian linear processes\"\n IEEE Signal Processing Letters, 2004, Vol. 11.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n S : np.ndarray [shape=(d, t)] or None\n (optional) pre-computed spectrogram magnitude\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n hop_length : int > 0 [scalar]\n hop length for STFT. See `librosa.core.stft` for details.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n amin : float > 0 [scalar]\n minimum threshold for `S` (=added noise floor for numerical stability)\n\n power : float > 0 [scalar]\n Exponent for the magnitude spectrogram.\n e.g., 1 for energy, 2 for power, etc.\n Power spectrogram is usually used for computing spectral flatness.\n\n Returns\n -------\n flatness : np.ndarray [shape=(1, t)]\n spectral flatness for each frame.\n The returned value is in [0, 1] and often converted to dB scale.\n\n\n Examples\n --------\n From time-series input\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> flatness = librosa.feature.spectral_flatness(y=y)\n >>> flatness\n array([[ 1.00000e+00, 5.82299e-03, 5.64624e-04, ..., 9.99063e-01,\n 1.00000e+00, 1.00000e+00]], dtype=float32)\n\n From spectrogram input\n\n >>> S, phase = librosa.magphase(librosa.stft(y))\n >>> librosa.feature.spectral_flatness(S=S)\n array([[ 1.00000e+00, 5.82299e-03, 5.64624e-04, ..., 9.99063e-01,\n 1.00000e+00, 1.00000e+00]], dtype=float32)\n\n From power spectrogram input\n\n >>> S, phase = librosa.magphase(librosa.stft(y))\n >>> S_power = S ** 2\n >>> librosa.feature.spectral_flatness(S=S_power, power=1.0)\n array([[ 1.00000e+00, 5.82299e-03, 5.64624e-04, ..., 9.99063e-01,\n 1.00000e+00, 1.00000e+00]], dtype=float32)"}
{"_id": "q_2133", "text": "Get coefficients of fitting an nth-order polynomial to the columns\n of a spectrogram.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)] or None\n (optional) spectrogram magnitude\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n hop_length : int > 0 [scalar]\n hop length for STFT. See `librosa.core.stft` for details.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n order : int > 0\n order of the polynomial to fit\n\n freq : None or np.ndarray [shape=(d,) or shape=(d, t)]\n Center frequencies for spectrogram bins.\n If `None`, then FFT bin center frequencies are used.\n Otherwise, it can be a single array of `d` center frequencies,\n or a matrix of center frequencies as constructed by\n `librosa.core.ifgram`\n\n Returns\n -------\n coefficients : np.ndarray [shape=(order+1, t)]\n polynomial coefficients for each frame.\n\n `coefficients[0]` corresponds to the highest degree (`order`),\n\n `coefficients[1]` corresponds to the next highest degree (`order-1`),\n\n down to the constant term `coefficients[order]`.\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> S = np.abs(librosa.stft(y))\n\n Fit a degree-0 polynomial (constant) to each frame\n\n >>> p0 = librosa.feature.poly_features(S=S, order=0)\n\n Fit a linear polynomial to each frame\n\n >>> p1 = librosa.feature.poly_features(S=S, order=1)\n\n Fit a quadratic to each frame\n\n >>> p2 = librosa.feature.poly_features(S=S, order=2)\n\n Plot the results for comparison\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(8, 8))\n >>> ax = plt.subplot(4,1,1)\n >>> plt.plot(p2[2], label='order=2', alpha=0.8)\n >>> plt.plot(p1[1], label='order=1', alpha=0.8)\n >>> plt.plot(p0[0], label='order=0', alpha=0.8)\n >>> plt.xticks([])\n >>> plt.ylabel('Constant')\n >>> plt.legend()\n >>> plt.subplot(4,1,2, sharex=ax)\n >>> plt.plot(p2[1], label='order=2', alpha=0.8)\n >>> plt.plot(p1[0], label='order=1', alpha=0.8)\n >>> plt.xticks([])\n >>> plt.ylabel('Linear')\n >>> plt.subplot(4,1,3, sharex=ax)\n >>> plt.plot(p2[0], label='order=2', alpha=0.8)\n >>> plt.xticks([])\n >>> plt.ylabel('Quadratic')\n >>> plt.subplot(4,1,4, sharex=ax)\n >>> librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max),\n ... y_axis='log')\n >>> plt.tight_layout()"}
{"_id": "q_2134", "text": "Compute the zero-crossing rate of an audio time series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n Audio time series\n\n frame_length : int > 0\n Length of the frame over which to compute zero crossing rates\n\n hop_length : int > 0\n Number of samples to advance for each frame\n\n center : bool\n If `True`, frames are centered by padding the edges of `y`.\n This is similar to the padding in `librosa.core.stft`,\n but uses edge-value copies instead of reflection.\n\n kwargs : additional keyword arguments\n See `librosa.core.zero_crossings`\n\n .. note:: By default, the `pad` parameter is set to `False`, which\n differs from the default specified by\n `librosa.core.zero_crossings`.\n\n Returns\n -------\n zcr : np.ndarray [shape=(1, t)]\n `zcr[0, i]` is the fraction of zero crossings in the\n `i` th frame\n\n See Also\n --------\n librosa.core.zero_crossings\n Compute zero-crossings in a time-series\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.feature.zero_crossing_rate(y)\n array([[ 0.134, 0.139, ..., 0.387, 0.322]])"}
{"_id": "q_2135", "text": "Compute a chromagram from a waveform or power spectrogram.\n\n This implementation is derived from `chromagram_E` [1]_\n\n .. [1] Ellis, Daniel P.W. \"Chroma feature analysis and synthesis\"\n 2007/04/21\n http://labrosa.ee.columbia.edu/matlab/chroma-ansyn/\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)] or None\n power spectrogram\n\n norm : float or None\n Column-wise normalization.\n See `librosa.util.normalize` for details.\n\n If `None`, no normalization is performed.\n\n n_fft : int > 0 [scalar]\n FFT window size if provided `y, sr` instead of `S`\n\n hop_length : int > 0 [scalar]\n hop length if provided `y, sr` instead of `S`\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n\n tuning : float in `[-0.5, 0.5)` [scalar] or None.\n Deviation from A440 tuning in fractional bins (cents).\n If `None`, it is automatically estimated.\n\n kwargs : additional keyword arguments\n Arguments to parameterize chroma filters.\n See `librosa.filters.chroma` for details.\n\n Returns\n -------\n chromagram : np.ndarray [shape=(n_chroma, t)]\n Normalized energy for each chroma bin at each frame.\n\n See Also\n --------\n librosa.filters.chroma\n Chroma filter bank construction\n librosa.util.normalize\n Vector normalization\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.feature.chroma_stft(y=y, sr=sr)\n array([[ 0.974, 0.881, ..., 0.925, 1. ],\n [ 1. , 0.841, ..., 0.882, 0.878],\n ...,\n [ 0.658, 0.985, ..., 0.878, 0.764],\n [ 0.969, 0.92 , ..., 0.974, 0.915]])\n\n Use an energy (magnitude) spectrum instead of power spectrogram\n\n >>> S = np.abs(librosa.stft(y))\n >>> chroma = librosa.feature.chroma_stft(S=S, sr=sr)\n >>> chroma\n array([[ 0.884, 0.91 , ..., 0.861, 0.858],\n [ 0.963, 0.785, ..., 0.968, 0.896],\n ...,\n [ 0.871, 1. , ..., 0.928, 0.829],\n [ 1. , 0.982, ..., 0.93 , 0.878]])\n\n Use a pre-computed power spectrogram with a larger frame\n\n >>> S = np.abs(librosa.stft(y, n_fft=4096))**2\n >>> chroma = librosa.feature.chroma_stft(S=S, sr=sr)\n >>> chroma\n array([[ 0.685, 0.477, ..., 0.961, 0.986],\n [ 0.674, 0.452, ..., 0.952, 0.926],\n ...,\n [ 0.844, 0.575, ..., 0.934, 0.869],\n [ 0.793, 0.663, ..., 0.964, 0.972]])\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(10, 4))\n >>> librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')\n >>> plt.colorbar()\n >>> plt.title('Chromagram')\n >>> plt.tight_layout()"}
{"_id": "q_2136", "text": "r'''Constant-Q chromagram\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n sr : number > 0\n sampling rate of `y`\n\n C : np.ndarray [shape=(d, t)] [Optional]\n a pre-computed constant-Q spectrogram\n\n hop_length : int > 0\n number of samples between successive chroma frames\n\n fmin : float > 0\n minimum frequency to analyze in the CQT.\n Default: 'C1' ~= 32.7 Hz\n\n norm : int > 0, +-np.inf, or None\n Column-wise normalization of the chromagram.\n\n threshold : float\n Pre-normalization energy threshold. Values below the\n threshold are discarded, resulting in a sparse chromagram.\n\n tuning : float\n Deviation (in cents) from A440 tuning\n\n n_chroma : int > 0\n Number of chroma bins to produce\n\n n_octaves : int > 0\n Number of octaves to analyze above `fmin`\n\n window : None or np.ndarray\n Optional window parameter to `filters.cq_to_chroma`\n\n bins_per_octave : int > 0\n Number of bins per octave in the CQT.\n Default: matches `n_chroma`\n\n cqt_mode : ['full', 'hybrid']\n Constant-Q transform mode\n\n Returns\n -------\n chromagram : np.ndarray [shape=(n_chroma, t)]\n The output chromagram\n\n See Also\n --------\n librosa.util.normalize\n librosa.core.cqt\n librosa.core.hybrid_cqt\n chroma_stft\n\n Examples\n --------\n Compare a long-window STFT chromagram to the CQT chromagram\n\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... offset=10, duration=15)\n >>> chroma_stft = librosa.feature.chroma_stft(y=y, sr=sr,\n ... n_chroma=12, n_fft=4096)\n >>> chroma_cq = librosa.feature.chroma_cqt(y=y, sr=sr)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> plt.subplot(2,1,1)\n >>> librosa.display.specshow(chroma_stft, y_axis='chroma')\n >>> plt.title('chroma_stft')\n >>> plt.colorbar()\n >>> plt.subplot(2,1,2)\n >>> librosa.display.specshow(chroma_cq, y_axis='chroma', x_axis='time')\n >>> plt.title('chroma_cqt')\n >>> plt.colorbar()\n >>> plt.tight_layout()"}
{"_id": "q_2137", "text": "Compute a mel-scaled spectrogram.\n\n If a spectrogram input `S` is provided, then it is mapped directly onto\n the mel basis `mel_f` by `mel_f.dot(S)`.\n\n If a time-series input `y, sr` is provided, then its magnitude spectrogram\n `S` is first computed, and then mapped onto the mel scale by\n `mel_f.dot(S**power)`. By default, `power=2` operates on a power spectrum.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time-series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)]\n spectrogram\n\n n_fft : int > 0 [scalar]\n length of the FFT window\n\n hop_length : int > 0 [scalar]\n number of samples between successive frames.\n See `librosa.core.stft`\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. 
see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n power : float > 0 [scalar]\n Exponent for the magnitude melspectrogram.\n e.g., 1 for energy, 2 for power, etc.\n\n kwargs : additional keyword arguments\n Mel filter bank parameters.\n See `librosa.filters.mel` for details.\n\n Returns\n -------\n S : np.ndarray [shape=(n_mels, t)]\n Mel spectrogram\n\n See Also\n --------\n librosa.filters.mel\n Mel filter bank construction\n\n librosa.core.stft\n Short-time Fourier Transform\n\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.feature.melspectrogram(y=y, sr=sr)\n array([[ 2.891e-07, 2.548e-03, ..., 8.116e-09, 5.633e-09],\n [ 1.986e-07, 1.162e-02, ..., 9.332e-08, 6.716e-09],\n ...,\n [ 3.668e-09, 2.029e-08, ..., 3.208e-09, 2.864e-09],\n [ 2.561e-10, 2.096e-09, ..., 7.543e-10, 6.101e-10]])\n\n Using a pre-computed power spectrogram\n\n >>> D = np.abs(librosa.stft(y))**2\n >>> S = librosa.feature.melspectrogram(S=D)\n\n >>> # Passing through arguments to the Mel filters\n >>> S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,\n ... fmax=8000)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(10, 4))\n >>> librosa.display.specshow(librosa.power_to_db(S,\n ... ref=np.max),\n ... y_axis='mel', fmax=8000,\n ... x_axis='time')\n >>> plt.colorbar(format='%+2.0f dB')\n >>> plt.title('Mel spectrogram')\n >>> plt.tight_layout()"}
{"_id": "q_2138", "text": "Jaccard similarity between two intervals\n\n Parameters\n ----------\n int_a, int_b : np.ndarrays, shape=(2,)\n\n Returns\n -------\n Jaccard similarity between intervals"}
{"_id": "q_2139", "text": "Numba-accelerated interval matching algorithm."}
{"_id": "q_2140", "text": "Match one set of time intervals to another.\n\n This can be useful for tasks such as mapping beat timings\n to segments.\n\n Each element `[a, b]` of `intervals_from` is matched to the\n element `[c, d]` of `intervals_to` which maximizes the\n Jaccard similarity between the intervals:\n\n `max(0, |min(b, d) - max(a, c)|) / |max(d, b) - min(a, c)|`\n\n In `strict=True` mode, if there is no interval with positive\n intersection with `[a,b]`, an exception is thrown.\n\n In `strict=False` mode, any interval `[a, b]` that has no\n intersection with any element of `intervals_to` is instead\n matched to the interval `[c, d]` which minimizes\n\n `min(|b - c|, |a - d|)`\n\n that is, the disjoint interval `[c, d]` with a boundary closest\n to `[a, b]`.\n\n .. note:: An element of `intervals_to` may be matched to multiple\n entries of `intervals_from`.\n\n Parameters\n ----------\n intervals_from : np.ndarray [shape=(n, 2)]\n The time range for source intervals.\n The `i` th interval spans time `intervals_from[i, 0]`\n to `intervals_from[i, 1]`.\n `intervals_from[0, 0]` should be 0, `intervals_from[-1, 1]`\n should be the track duration.\n\n intervals_to : np.ndarray [shape=(m, 2)]\n Analogous to `intervals_from`.\n\n strict : bool\n If `True`, intervals can only match if they intersect.\n If `False`, disjoint intervals can match.\n\n Returns\n -------\n interval_mapping : np.ndarray [shape=(n,)]\n For each interval in `intervals_from`, the\n corresponding interval in `intervals_to`.\n\n See Also\n --------\n match_events\n\n Raises\n ------\n ParameterError\n If either array of input intervals is not the correct shape\n\n If `strict=True` and some element of `intervals_from` is disjoint from\n every element of `intervals_to`.\n\n Examples\n --------\n >>> ints_from = np.array([[3, 5], [1, 4], [4, 5]])\n >>> ints_to = np.array([[0, 2], [1, 3], [4, 5], [6, 7]])\n >>> librosa.util.match_intervals(ints_from, ints_to)\n array([2, 1, 2], dtype=uint32)\n 
>>> # [3, 5] => [4, 5] (ints_to[2])\n >>> # [1, 4] => [1, 3] (ints_to[1])\n >>> # [4, 5] => [4, 5] (ints_to[2])\n\n The reverse matching of the above is not possible in `strict` mode\n because `[6, 7]` is disjoint from all intervals in `ints_from`.\n With `strict=False`, we get the following:\n >>> librosa.util.match_intervals(ints_to, ints_from, strict=False)\n array([1, 1, 2, 2], dtype=uint32)\n >>> # [0, 2] => [1, 4] (ints_from[1])\n >>> # [1, 3] => [1, 4] (ints_from[1])\n >>> # [4, 5] => [4, 5] (ints_from[2])\n >>> # [6, 7] => [4, 5] (ints_from[2])"}
{"_id": "q_2141", "text": "Match one set of events to another.\n\n This is useful for tasks such as matching beats to the nearest\n detected onsets, or frame-aligned events to the nearest zero-crossing.\n\n .. note:: A target event may be matched to multiple source events.\n\n Examples\n --------\n >>> # Sources are multiples of 7\n >>> s_from = np.arange(0, 100, 7)\n >>> s_from\n array([ 0, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91,\n 98])\n >>> # Targets are multiples of 10\n >>> s_to = np.arange(0, 100, 10)\n >>> s_to\n array([ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90])\n >>> # Find the matching\n >>> idx = librosa.util.match_events(s_from, s_to)\n >>> idx\n array([0, 1, 1, 2, 3, 3, 4, 5, 6, 6, 7, 8, 8, 9, 9])\n >>> # Print each source value to its matching target\n >>> zip(s_from, s_to[idx])\n [(0, 0), (7, 10), (14, 10), (21, 20), (28, 30), (35, 30),\n (42, 40), (49, 50), (56, 60), (63, 60), (70, 70), (77, 80),\n (84, 80), (91, 90), (98, 90)]\n\n Parameters\n ----------\n events_from : ndarray [shape=(n,)]\n Array of events (eg, times, sample or frame indices) to match from.\n\n events_to : ndarray [shape=(m,)]\n Array of events (eg, times, sample or frame indices) to\n match against.\n\n left : bool\n right : bool\n If `False`, then matched events cannot be to the left (or right)\n of source events.\n\n Returns\n -------\n event_mapping : np.ndarray [shape=(n,)]\n For each event in `events_from`, the corresponding event\n index in `events_to`.\n\n `event_mapping[i] == arg min |events_from[i] - events_to[:]|`\n\n See Also\n --------\n match_intervals\n\n Raises\n ------\n ParameterError\n If either array of input events is not the correct shape"}
{"_id": "q_2142", "text": "Populate a harmonic tensor from a time-frequency representation.\n\n Parameters\n ----------\n harmonic_out : np.ndarray, shape=(len(h_range), X.shape)\n The output array to store harmonics\n\n X : np.ndarray\n The input energy\n\n freqs : np.ndarray, shape=(x.shape[axis])\n The frequency values corresponding to x's elements along the\n chosen axis.\n\n h_range : list-like, non-negative\n Harmonics to compute. The first harmonic (1) corresponds to `x`\n itself.\n Values less than one (e.g., 1/2) correspond to sub-harmonics.\n\n kind : str\n Interpolation type. See `scipy.interpolate.interp1d`.\n\n fill_value : float\n The value to fill when extrapolating beyond the observed\n frequency range.\n\n axis : int\n The axis along which to compute harmonics\n\n See Also\n --------\n harmonics\n scipy.interpolate.interp1d\n\n\n Examples\n --------\n Estimate the harmonics of a time-averaged tempogram\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... duration=15, offset=30)\n >>> # Compute the time-varying tempogram and average over time\n >>> tempi = np.mean(librosa.feature.tempogram(y=y, sr=sr), axis=1)\n >>> # We'll measure the first five harmonics\n >>> h_range = [1, 2, 3, 4, 5]\n >>> f_tempo = librosa.tempo_frequencies(len(tempi), sr=sr)\n >>> # Build the harmonic tensor\n >>> t_harmonics = librosa.interp_harmonics(tempi, f_tempo, h_range)\n >>> print(t_harmonics.shape)\n (5, 384)\n\n >>> # And plot the results\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> librosa.display.specshow(t_harmonics, x_axis='tempo', sr=sr)\n >>> plt.yticks(0.5 + np.arange(len(h_range)),\n ... 
['{:.3g}'.format(_) for _ in h_range])\n >>> plt.ylabel('Harmonic')\n >>> plt.xlabel('Tempo (BPM)')\n >>> plt.tight_layout()\n\n We can also compute frequency harmonics for spectrograms.\n To calculate subharmonic energy, use values < 1.\n\n >>> h_range = [1./3, 1./2, 1, 2, 3, 4]\n >>> S = np.abs(librosa.stft(y))\n >>> fft_freqs = librosa.fft_frequencies(sr=sr)\n >>> S_harm = librosa.interp_harmonics(S, fft_freqs, h_range, axis=0)\n >>> print(S_harm.shape)\n (6, 1025, 646)\n\n >>> plt.figure()\n >>> for i, _sh in enumerate(S_harm, 1):\n ... plt.subplot(3,2,i)\n ... librosa.display.specshow(librosa.amplitude_to_db(_sh,\n ... ref=S.max()),\n ... sr=sr, y_axis='log')\n ... plt.title('h={:.3g}'.format(h_range[i-1]))\n ... plt.yticks([])\n >>> plt.tight_layout()"}
{"_id": "q_2143", "text": "Load an audio file as a floating point time series.\n\n Audio will be automatically resampled to the given rate\n (default `sr=22050`).\n\n To preserve the native sampling rate of the file, use `sr=None`.\n\n Parameters\n ----------\n path : string, int, or file-like object\n path to the input file.\n\n Any codec supported by `soundfile` or `audioread` will work.\n\n If the codec is supported by `soundfile`, then `path` can also be\n an open file descriptor (int), or any object implementing Python's\n file interface.\n\n If the codec is not supported by `soundfile` (e.g., MP3), then only\n string file paths are supported.\n\n sr : number > 0 [scalar]\n target sampling rate\n\n 'None' uses the native sampling rate\n\n mono : bool\n convert signal to mono\n\n offset : float\n start reading after this time (in seconds)\n\n duration : float\n only load up to this much audio (in seconds)\n\n dtype : numeric type\n data type of `y`\n\n res_type : str\n resample type (see note)\n\n .. note::\n By default, this uses `resampy`'s high-quality mode ('kaiser_best').\n\n For alternative resampling modes, see `resample`\n\n .. 
note::\n `audioread` may truncate the precision of the audio data to 16 bits.\n\n See https://librosa.github.io/librosa/ioformats.html for alternate\n loading methods.\n\n\n Returns\n -------\n y : np.ndarray [shape=(n,) or (2, n)]\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n\n Examples\n --------\n >>> # Load an ogg vorbis file\n >>> filename = librosa.util.example_audio_file()\n >>> y, sr = librosa.load(filename)\n >>> y\n array([ -4.756e-06, -6.020e-06, ..., -1.040e-06, 0.000e+00], dtype=float32)\n >>> sr\n 22050\n\n >>> # Load a file and resample to 11 KHz\n >>> filename = librosa.util.example_audio_file()\n >>> y, sr = librosa.load(filename, sr=11025)\n >>> y\n array([ -2.077e-06, -2.928e-06, ..., -4.395e-06, 0.000e+00], dtype=float32)\n >>> sr\n 11025\n\n >>> # Load 5 seconds of a file, starting 15 seconds in\n >>> filename = librosa.util.example_audio_file()\n >>> y, sr = librosa.load(filename, offset=15.0, duration=5.0)\n >>> y\n array([ 0.069, 0.1 , ..., -0.101, 0. ], dtype=float32)\n >>> sr\n 22050"}
{"_id": "q_2144", "text": "Resample a time series from orig_sr to target_sr\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,) or shape=(2, n)]\n audio time series. Can be mono or stereo.\n\n orig_sr : number > 0 [scalar]\n original sampling rate of `y`\n\n target_sr : number > 0 [scalar]\n target sampling rate\n\n res_type : str\n resample type (see note)\n\n .. note::\n By default, this uses `resampy`'s high-quality mode ('kaiser_best').\n\n To use a faster method, set `res_type='kaiser_fast'`.\n\n To use `scipy.signal.resample`, set `res_type='fft'` or `res_type='scipy'`.\n\n To use `scipy.signal.resample_poly`, set `res_type='polyphase'`.\n\n .. note::\n When using `res_type='polyphase'`, only integer sampling rates are\n supported.\n\n fix : bool\n adjust the length of the resampled signal to be of size exactly\n `ceil(target_sr * len(y) / orig_sr)`\n\n scale : bool\n Scale the resampled signal so that `y` and `y_hat` have approximately\n equal total energy.\n\n kwargs : additional keyword arguments\n If `fix==True`, additional keyword arguments to pass to\n `librosa.util.fix_length`.\n\n Returns\n -------\n y_hat : np.ndarray [shape=(n * target_sr / orig_sr,)]\n `y` resampled from `orig_sr` to `target_sr`\n\n Raises\n ------\n ParameterError\n If `res_type='polyphase'` and `orig_sr` or `target_sr` are not both\n integer-valued.\n\n See Also\n --------\n librosa.util.fix_length\n scipy.signal.resample\n resampy.resample\n\n Notes\n -----\n This function caches at level 20.\n\n Examples\n --------\n Downsample from 22 KHz to 8 KHz\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), sr=22050)\n >>> y_8k = librosa.resample(y, sr, 8000)\n >>> y.shape, y_8k.shape\n ((1355168,), (491671,))"}
{"_id": "q_2145", "text": "Returns a signal with the signal `click` placed at each specified time\n\n Parameters\n ----------\n times : np.ndarray or None\n times to place clicks, in seconds\n\n frames : np.ndarray or None\n frame indices to place clicks\n\n sr : number > 0\n desired sampling rate of the output signal\n\n hop_length : int > 0\n if positions are specified by `frames`, the number of samples between frames.\n\n click_freq : float > 0\n frequency (in Hz) of the default click signal. Default is 1KHz.\n\n click_duration : float > 0\n duration (in seconds) of the default click signal. Default is 100ms.\n\n click : np.ndarray or None\n optional click signal sample to use instead of the default blip.\n\n length : int > 0\n desired number of samples in the output signal\n\n\n Returns\n -------\n click_signal : np.ndarray\n Synthesized click signal\n\n\n Raises\n ------\n ParameterError\n - If neither `times` nor `frames` are provided.\n - If any of `click_freq`, `click_duration`, or `length` are out of range.\n\n\n Examples\n --------\n >>> # Sonify detected beat events\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> tempo, beats = librosa.beat.beat_track(y=y, sr=sr)\n >>> y_beats = librosa.clicks(frames=beats, sr=sr)\n\n >>> # Or generate a signal of the same length as y\n >>> y_beats = librosa.clicks(frames=beats, sr=sr, length=len(y))\n\n >>> # Or use timing instead of frame indices\n >>> times = librosa.frames_to_time(beats, sr=sr)\n >>> y_beat_times = librosa.clicks(times=times, sr=sr)\n\n >>> # Or with a click frequency of 880Hz and a 500ms sample\n >>> y_beat_times880 = librosa.clicks(times=times, sr=sr,\n ... click_freq=880, click_duration=0.5)\n\n Display click waveform next to the spectrogram\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> S = librosa.feature.melspectrogram(y=y, sr=sr)\n >>> ax = plt.subplot(2,1,2)\n >>> librosa.display.specshow(librosa.power_to_db(S, ref=np.max),\n ... 
x_axis='time', y_axis='mel')\n >>> plt.subplot(2,1,1, sharex=ax)\n >>> librosa.display.waveplot(y_beat_times, sr=sr, label='Beat clicks')\n >>> plt.legend()\n >>> plt.xlim(15, 30)\n >>> plt.tight_layout()"}
{"_id": "q_2146", "text": "Returns a pure tone signal. The signal generated is a cosine wave.\n\n Parameters\n ----------\n frequency : float > 0\n frequency\n\n sr : number > 0\n desired sampling rate of the output signal\n\n length : int > 0\n desired number of samples in the output signal. When both `duration` and `length` are defined,\n `length` would take priority.\n\n duration : float > 0\n desired duration in seconds. When both `duration` and `length` are defined, `length` would take priority.\n\n phi : float or None\n phase offset, in radians. If unspecified, defaults to `-np.pi * 0.5`.\n\n\n Returns\n -------\n tone_signal : np.ndarray [shape=(length,), dtype=float64]\n Synthesized pure sine tone signal\n\n\n Raises\n ------\n ParameterError\n - If `frequency` is not provided.\n - If neither `length` nor `duration` are provided.\n\n\n Examples\n --------\n >>> # Generate a pure sine tone A4\n >>> tone = librosa.tone(440, duration=1)\n\n >>> # Or generate the same signal using `length`\n >>> tone = librosa.tone(440, sr=22050, length=22050)\n\n Display spectrogram\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> S = librosa.feature.melspectrogram(y=tone)\n >>> librosa.display.specshow(librosa.power_to_db(S, ref=np.max),\n ... x_axis='time', y_axis='mel')"}
{"_id": "q_2147", "text": "Returns a chirp signal that goes from frequency `fmin` to frequency `fmax`\n\n Parameters\n ----------\n fmin : float > 0\n initial frequency\n\n fmax : float > 0\n final frequency\n\n sr : number > 0\n desired sampling rate of the output signal\n\n length : int > 0\n desired number of samples in the output signal.\n When both `duration` and `length` are defined, `length` would take priority.\n\n duration : float > 0\n desired duration in seconds.\n When both `duration` and `length` are defined, `length` would take priority.\n\n linear : boolean\n - If `True`, use a linear sweep, i.e., frequency changes linearly with time\n - If `False`, use a exponential sweep.\n Default is `False`.\n\n phi : float or None\n phase offset, in radians.\n If unspecified, defaults to `-np.pi * 0.5`.\n\n\n Returns\n -------\n chirp_signal : np.ndarray [shape=(length,), dtype=float64]\n Synthesized chirp signal\n\n\n Raises\n ------\n ParameterError\n - If either `fmin` or `fmax` are not provided.\n - If neither `length` nor `duration` are provided.\n\n\n See Also\n --------\n scipy.signal.chirp\n\n\n Examples\n --------\n >>> # Generate a exponential chirp from A4 to A5\n >>> exponential_chirp = librosa.chirp(440, 880, duration=1)\n\n >>> # Or generate the same signal using `length`\n >>> exponential_chirp = librosa.chirp(440, 880, sr=22050, length=22050)\n\n >>> # Or generate a linear chirp instead\n >>> linear_chirp = librosa.chirp(440, 880, duration=1, linear=True)\n\n Display spectrogram for both exponential and linear chirps\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> S_exponential = librosa.feature.melspectrogram(y=exponential_chirp)\n >>> ax = plt.subplot(2,1,1)\n >>> librosa.display.specshow(librosa.power_to_db(S_exponential, ref=np.max),\n ... 
x_axis='time', y_axis='mel')\n >>> plt.subplot(2,1,2, sharex=ax)\n >>> S_linear = librosa.feature.melspectrogram(y=linear_chirp)\n >>> librosa.display.specshow(librosa.power_to_db(S_linear, ref=np.max),\n ... x_axis='time', y_axis='mel')\n >>> plt.tight_layout()"}
{"_id": "q_2148", "text": "Phase-vocoder time stretch demo function.\n\n :parameters:\n - input_file : str\n path to input audio\n - output_file : str\n path to save output (wav)\n - speed : float > 0\n speed up by this factor"}
{"_id": "q_2149", "text": "Argparse function to get the program parameters"}
{"_id": "q_2150", "text": "HPSS demo function.\n\n :parameters:\n - input_file : str\n path to input audio\n - output_harmonic : str\n path to save output harmonic (wav)\n - output_percussive : str\n path to save output harmonic (wav)"}
{"_id": "q_2151", "text": "r'''Dynamic programming beat tracker.\n\n Beats are detected in three stages, following the method of [1]_:\n 1. Measure onset strength\n 2. Estimate tempo from onset correlation\n 3. Pick peaks in onset strength approximately consistent with estimated\n tempo\n\n .. [1] Ellis, Daniel PW. \"Beat tracking by dynamic programming.\"\n Journal of New Music Research 36.1 (2007): 51-60.\n http://labrosa.ee.columbia.edu/projects/beattrack/\n\n\n Parameters\n ----------\n\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n onset_envelope : np.ndarray [shape=(n,)] or None\n (optional) pre-computed onset strength envelope.\n\n hop_length : int > 0 [scalar]\n number of audio samples between successive `onset_envelope` values\n\n start_bpm : float > 0 [scalar]\n initial guess for the tempo estimator (in beats per minute)\n\n tightness : float [scalar]\n tightness of beat distribution around tempo\n\n trim : bool [scalar]\n trim leading/trailing beats with weak onsets\n\n bpm : float [scalar]\n (optional) If provided, use `bpm` as the tempo instead of\n estimating it from `onsets`.\n\n units : {'frames', 'samples', 'time'}\n The units to encode detected beat events in.\n By default, 'frames' are used.\n\n\n Returns\n -------\n\n tempo : float [scalar, non-negative]\n estimated global tempo (in beats per minute)\n\n beats : np.ndarray [shape=(m,)]\n estimated beat event locations in the specified units\n (default is frame indices)\n\n .. 
note::\n If no onset strength could be detected, beat_tracker estimates 0 BPM\n and returns an empty list.\n\n\n Raises\n ------\n ParameterError\n if neither `y` nor `onset_envelope` are provided\n\n or if `units` is not one of 'frames', 'samples', or 'time'\n\n See Also\n --------\n librosa.onset.onset_strength\n\n\n Examples\n --------\n Track beats using time series input\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n\n >>> tempo, beats = librosa.beat.beat_track(y=y, sr=sr)\n >>> tempo\n 64.599609375\n\n\n Print the first 20 beat frames\n\n >>> beats[:20]\n array([ 320, 357, 397, 436, 480, 525, 569, 609, 658,\n 698, 737, 777, 817, 857, 896, 936, 976, 1016,\n 1055, 1095])\n\n\n Or print them as timestamps\n\n >>> librosa.frames_to_time(beats[:20], sr=sr)\n array([ 7.43 , 8.29 , 9.218, 10.124, 11.146, 12.19 ,\n 13.212, 14.141, 15.279, 16.208, 17.113, 18.042,\n 18.971, 19.9 , 20.805, 21.734, 22.663, 23.591,\n 24.497, 25.426])\n\n\n Track beats using a pre-computed onset envelope\n\n >>> onset_env = librosa.onset.onset_strength(y, sr=sr,\n ... aggregate=np.median)\n >>> tempo, beats = librosa.beat.beat_track(onset_envelope=onset_env,\n ... sr=sr)\n >>> tempo\n 64.599609375\n >>> beats[:20]\n array([ 320, 357, 397, 436, 480, 525, 569, 609, 658,\n 698, 737, 777, 817, 857, 896, 936, 976, 1016,\n 1055, 1095])\n\n\n Plot the beat events against the onset strength envelope\n\n >>> import matplotlib.pyplot as plt\n >>> hop_length = 512\n >>> plt.figure(figsize=(8, 4))\n >>> times = librosa.frames_to_time(np.arange(len(onset_env)),\n ... sr=sr, hop_length=hop_length)\n >>> plt.plot(times, librosa.util.normalize(onset_env),\n ... label='Onset strength')\n >>> plt.vlines(times[beats], 0, 1, alpha=0.5, color='r',\n ... 
linestyle='--', label='Beats')\n >>> plt.legend(frameon=True, framealpha=0.75)\n >>> # Limit the plot to a 15-second window\n >>> plt.xlim(15, 30)\n >>> plt.gca().xaxis.set_major_formatter(librosa.display.TimeFormatter())\n >>> plt.tight_layout()"}
{"_id": "q_2152", "text": "Construct the local score for an onset envlope and given period"}
{"_id": "q_2153", "text": "Get the last beat from the cumulative score array"}
{"_id": "q_2154", "text": "Convert a recurrence matrix into a lag matrix.\n\n `lag[i, j] == rec[i+j, j]`\n\n Parameters\n ----------\n rec : np.ndarray, or scipy.sparse.spmatrix [shape=(n, n)]\n A (binary) recurrence matrix, as returned by `recurrence_matrix`\n\n pad : bool\n If False, `lag` matrix is square, which is equivalent to\n assuming that the signal repeats itself indefinitely.\n\n If True, `lag` is padded with `n` zeros, which eliminates\n the assumption of repetition.\n\n axis : int\n The axis to keep as the `time` axis.\n The alternate axis will be converted to lag coordinates.\n\n Returns\n -------\n lag : np.ndarray\n The recurrence matrix in (lag, time) (if `axis=1`)\n or (time, lag) (if `axis=0`) coordinates\n\n Raises\n ------\n ParameterError : if `rec` is non-square\n\n See Also\n --------\n recurrence_matrix\n lag_to_recurrence\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> mfccs = librosa.feature.mfcc(y=y, sr=sr)\n >>> recurrence = librosa.segment.recurrence_matrix(mfccs)\n >>> lag_pad = librosa.segment.recurrence_to_lag(recurrence, pad=True)\n >>> lag_nopad = librosa.segment.recurrence_to_lag(recurrence, pad=False)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(8, 4))\n >>> plt.subplot(1, 2, 1)\n >>> librosa.display.specshow(lag_pad, x_axis='time', y_axis='lag')\n >>> plt.title('Lag (zero-padded)')\n >>> plt.subplot(1, 2, 2)\n >>> librosa.display.specshow(lag_nopad, x_axis='time')\n >>> plt.title('Lag (no padding)')\n >>> plt.tight_layout()"}
{"_id": "q_2155", "text": "Filtering in the time-lag domain.\n\n This is primarily useful for adapting image filters to operate on\n `recurrence_to_lag` output.\n\n Using `timelag_filter` is equivalent to the following sequence of\n operations:\n\n >>> data_tl = librosa.segment.recurrence_to_lag(data)\n >>> data_filtered_tl = function(data_tl)\n >>> data_filtered = librosa.segment.lag_to_recurrence(data_filtered_tl)\n\n Parameters\n ----------\n function : callable\n The filtering function to wrap, e.g., `scipy.ndimage.median_filter`\n\n pad : bool\n Whether to zero-pad the structure feature matrix\n\n index : int >= 0\n If `function` accepts input data as a positional argument, it should be\n indexed by `index`\n\n\n Returns\n -------\n wrapped_function : callable\n A new filter function which applies in time-lag space rather than\n time-time space.\n\n\n Examples\n --------\n\n Apply a 5-bin median filter to the diagonal of a recurrence matrix\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\n >>> rec = librosa.segment.recurrence_matrix(chroma)\n >>> from scipy.ndimage import median_filter\n >>> diagonal_median = librosa.segment.timelag_filter(median_filter)\n >>> rec_filtered = diagonal_median(rec, size=(1, 3), mode='mirror')\n\n Or with affinity weights\n\n >>> rec_aff = librosa.segment.recurrence_matrix(chroma, mode='affinity')\n >>> rec_aff_fil = diagonal_median(rec_aff, size=(1, 3), mode='mirror')\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(8,8))\n >>> plt.subplot(2, 2, 1)\n >>> librosa.display.specshow(rec, y_axis='time')\n >>> plt.title('Raw recurrence matrix')\n >>> plt.subplot(2, 2, 2)\n >>> librosa.display.specshow(rec_filtered)\n >>> plt.title('Filtered recurrence matrix')\n >>> plt.subplot(2, 2, 3)\n >>> librosa.display.specshow(rec_aff, x_axis='time', y_axis='time',\n ... 
cmap='magma_r')\n >>> plt.title('Raw affinity matrix')\n >>> plt.subplot(2, 2, 4)\n >>> librosa.display.specshow(rec_aff_fil, x_axis='time',\n ... cmap='magma_r')\n >>> plt.title('Filtered affinity matrix')\n >>> plt.tight_layout()"}
{"_id": "q_2156", "text": "Sub-divide a segmentation by feature clustering.\n\n Given a set of frame boundaries (`frames`), and a data matrix (`data`),\n each successive interval defined by `frames` is partitioned into\n `n_segments` by constrained agglomerative clustering.\n\n .. note::\n If an interval spans fewer than `n_segments` frames, then each\n frame becomes a sub-segment.\n\n Parameters\n ----------\n data : np.ndarray\n Data matrix to use in clustering\n\n frames : np.ndarray [shape=(n_boundaries,)], dtype=int, non-negative]\n Array of beat or segment boundaries, as provided by\n `librosa.beat.beat_track`,\n `librosa.onset.onset_detect`,\n or `agglomerative`.\n\n n_segments : int > 0\n Maximum number of frames to sub-divide each interval.\n\n axis : int\n Axis along which to apply the segmentation.\n By default, the last index (-1) is taken.\n\n Returns\n -------\n boundaries : np.ndarray [shape=(n_subboundaries,)]\n List of sub-divided segment boundaries\n\n See Also\n --------\n agglomerative : Temporal segmentation\n librosa.onset.onset_detect : Onset detection\n librosa.beat.beat_track : Beat tracking\n\n Notes\n -----\n This function caches at level 30.\n\n Examples\n --------\n Load audio, detect beat frames, and subdivide in twos by CQT\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=8)\n >>> tempo, beats = librosa.beat.beat_track(y=y, sr=sr, hop_length=512)\n >>> beat_times = librosa.frames_to_time(beats, sr=sr, hop_length=512)\n >>> cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=512))\n >>> subseg = librosa.segment.subsegment(cqt, beats, n_segments=2)\n >>> subseg_t = librosa.frames_to_time(subseg, sr=sr, hop_length=512)\n >>> subseg\n array([ 0, 2, 4, 21, 23, 26, 43, 55, 63, 72, 83,\n 97, 102, 111, 122, 137, 142, 153, 162, 180, 182, 185,\n 202, 210, 221, 231, 241, 256, 261, 271, 281, 296, 301,\n 310, 320, 339, 341, 344, 361, 368, 382, 389, 401, 416,\n 420, 430, 436, 451, 456, 465, 476, 489, 496, 503, 515,\n 527, 535, 
544, 553, 558, 571, 578, 590, 607, 609, 638])\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> librosa.display.specshow(librosa.amplitude_to_db(cqt,\n ... ref=np.max),\n ... y_axis='cqt_hz', x_axis='time')\n >>> lims = plt.gca().get_ylim()\n >>> plt.vlines(beat_times, lims[0], lims[1], color='lime', alpha=0.9,\n ... linewidth=2, label='Beats')\n >>> plt.vlines(subseg_t, lims[0], lims[1], color='linen', linestyle='--',\n ... linewidth=1.5, alpha=0.5, label='Sub-beats')\n >>> plt.legend(frameon=True, shadow=True)\n >>> plt.title('CQT + Beat and sub-beat markers')\n >>> plt.tight_layout()"}
{"_id": "q_2157", "text": "Bottom-up temporal segmentation.\n\n Use a temporally-constrained agglomerative clustering routine to partition\n `data` into `k` contiguous segments.\n\n Parameters\n ----------\n data : np.ndarray\n data to cluster\n\n k : int > 0 [scalar]\n number of segments to produce\n\n clusterer : sklearn.cluster.AgglomerativeClustering, optional\n An optional AgglomerativeClustering object.\n If `None`, a constrained Ward object is instantiated.\n\n axis : int\n axis along which to cluster.\n By default, the last axis (-1) is chosen.\n\n Returns\n -------\n boundaries : np.ndarray [shape=(k,)]\n left-boundaries (frame numbers) of detected segments. This\n will always include `0` as the first left-boundary.\n\n See Also\n --------\n sklearn.cluster.AgglomerativeClustering\n\n Examples\n --------\n Cluster by chroma similarity, break into 20 segments\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=15)\n >>> chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\n >>> bounds = librosa.segment.agglomerative(chroma, 20)\n >>> bound_times = librosa.frames_to_time(bounds, sr=sr)\n >>> bound_times\n array([ 0. , 1.672, 2.322, 2.624, 3.251, 3.506,\n 4.18 , 5.387, 6.014, 6.293, 6.943, 7.198,\n 7.848, 9.033, 9.706, 9.961, 10.635, 10.89 ,\n 11.54 , 12.539])\n\n Plot the segmentation over the chromagram\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')\n >>> plt.vlines(bound_times, 0, chroma.shape[0], color='linen', linestyle='--',\n ... linewidth=2, alpha=0.9, label='Segment boundaries')\n >>> plt.axis('tight')\n >>> plt.legend(frameon=True, shadow=True)\n >>> plt.title('Power spectrogram')\n >>> plt.tight_layout()"}
{"_id": "q_2158", "text": "Multi-angle path enhancement for self- and cross-similarity matrices.\n\n This function convolves multiple diagonal smoothing filters with a self-similarity (or\n recurrence) matrix R, and aggregates the result by an element-wise maximum.\n\n Technically, the output is a matrix R_smooth such that\n\n `R_smooth[i, j] = max_theta (R * filter_theta)[i, j]`\n\n where `*` denotes 2-dimensional convolution, and `filter_theta` is a smoothing filter at\n orientation theta.\n\n This is intended to provide coherent temporal smoothing of self-similarity matrices\n when there are changes in tempo.\n\n Smoothing filters are generated at evenly spaced orientations between min_ratio and\n max_ratio.\n\n This function is inspired by the multi-angle path enhancement of [1]_, but differs by\n modeling tempo differences in the space of similarity matrices rather than re-sampling\n the underlying features prior to generating the self-similarity matrix.\n\n .. [1] M\u00fcller, Meinard and Frank Kurth.\n \"Enhancing similarity matrices for music audio analysis.\"\n 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings.\n Vol. 5. IEEE, 2006.\n\n .. note:: if using recurrence_matrix to construct the input similarity matrix, be sure to include the main\n diagonal by setting `self=True`. Otherwise, the diagonal will be suppressed, and this is likely to\n produce discontinuities which will pollute the smoothing filter response.\n\n Parameters\n ----------\n R : np.ndarray\n The self- or cross-similarity matrix to be smoothed.\n Note: sparse inputs are not supported.\n\n n : int > 0\n The length of the smoothing filter\n\n window : window specification\n The type of smoothing filter to use. 
See `filters.get_window` for more information\n on window specification formats.\n\n max_ratio : float > 0\n The maximum tempo ratio to support\n\n min_ratio : float > 0\n The minimum tempo ratio to support.\n If not provided, it will default to `1/max_ratio`\n\n n_filters : int >= 1\n The number of different smoothing filters to use, evenly spaced\n between `min_ratio` and `max_ratio`.\n\n If `min_ratio = 1/max_ratio` (the default), using an odd number\n of filters will ensure that the main diagonal (ratio=1) is included.\n\n zero_mean : bool\n By default, the smoothing filters are non-negative and sum to one (i.e. are averaging\n filters).\n\n If `zero_mean=True`, then the smoothing filters are made to sum to zero by subtracting\n a constant value from the non-diagonal coordinates of the filter. This is primarily\n useful for suppressing blocks while enhancing diagonals.\n\n clip : bool\n If True, the smoothed similarity matrix will be thresholded at 0, and will not contain\n negative entries.\n\n kwargs : additional keyword arguments\n Additional arguments to pass to `scipy.ndimage.convolve`\n\n\n Returns\n -------\n R_smooth : np.ndarray, shape=R.shape\n The smoothed self- or cross-similarity matrix\n\n See Also\n --------\n filters.diagonal_filter\n recurrence_matrix\n\n\n Examples\n --------\n Use a 51-frame diagonal smoothing filter to enhance paths in a recurrence matrix\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=30)\n >>> chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\n >>> rec = librosa.segment.recurrence_matrix(chroma, mode='affinity', self=True)\n >>> rec_smooth = librosa.segment.path_enhance(rec, 51, window='hann', n_filters=7)\n\n Plot the recurrence matrix before and after smoothing\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(8, 4))\n >>> plt.subplot(1,2,1)\n >>> librosa.display.specshow(rec, x_axis='time', y_axis='time')\n >>> plt.title('Unfiltered recurrence')\n >>> plt.subplot(1,2,2)\n >>> 
librosa.display.specshow(rec_smooth, x_axis='time', y_axis='time')\n >>> plt.title('Multi-angle enhanced recurrence')\n >>> plt.tight_layout()"}
{"_id": "q_2159", "text": "Slice a time series into overlapping frames.\n\n This implementation uses low-level stride manipulation to avoid\n redundant copies of the time series data.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n Time series to frame. Must be one-dimensional and contiguous\n in memory.\n\n frame_length : int > 0 [scalar]\n Length of the frame in samples\n\n hop_length : int > 0 [scalar]\n Number of samples to hop between frames\n\n Returns\n -------\n y_frames : np.ndarray [shape=(frame_length, N_FRAMES)]\n An array of frames sampled from `y`:\n `y_frames[i, j] == y[j * hop_length + i]`\n\n Raises\n ------\n ParameterError\n If `y` is not contiguous in memory, not an `np.ndarray`, or\n not one-dimensional. See `np.ascontiguousarray()` for details.\n\n If `hop_length < 1`, frames cannot advance.\n\n If `len(y) < frame_length`.\n\n Examples\n --------\n Extract 2048-sample frames from `y` with a hop of 64 samples per frame\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.util.frame(y, frame_length=2048, hop_length=64)\n array([[ -9.216e-06, 7.710e-06, ..., -2.117e-06, -4.362e-07],\n [ 2.518e-06, -6.294e-06, ..., -1.775e-05, -6.365e-06],\n ...,\n [ -7.429e-04, 5.173e-03, ..., 1.105e-05, -5.074e-06],\n [ 2.169e-03, 4.867e-03, ..., 3.666e-06, -5.571e-06]], dtype=float32)"}
{"_id": "q_2160", "text": "Ensure that an input value is integer-typed.\n This is primarily useful for ensuring integer-valued\n array indices.\n\n Parameters\n ----------\n x : number\n A scalar value to be cast to int\n\n cast : function [optional]\n A function to modify `x` before casting.\n Default: `np.floor`\n\n Returns\n -------\n x_int : int\n `x_int = int(cast(x))`\n\n Raises\n ------\n ParameterError\n If `cast` is provided and is not callable."}
{"_id": "q_2161", "text": "Normalize an array along a chosen axis.\n\n Given a norm (described below) and a target axis, the input\n array is scaled so that\n\n `norm(S, axis=axis) == 1`\n\n For example, `axis=0` normalizes each column of a 2-d array\n by aggregating over the rows (0-axis).\n Similarly, `axis=1` normalizes each row of a 2-d array.\n\n This function also supports thresholding small-norm slices:\n any slice (i.e., row or column) with norm below a specified\n `threshold` can be left un-normalized, set to all-zeros, or\n filled with uniform non-zero values that normalize to 1.\n\n Note: the semantics of this function differ from\n `scipy.linalg.norm` in two ways: multi-dimensional arrays\n are supported, but matrix-norms are not.\n\n\n Parameters\n ----------\n S : np.ndarray\n The matrix to normalize\n\n norm : {np.inf, -np.inf, 0, float > 0, None}\n - `np.inf` : maximum absolute value\n - `-np.inf` : minimum absolute value\n - `0` : number of non-zeros (the support)\n - float : corresponding l_p norm\n See `scipy.linalg.norm` for details.\n - None : no normalization is performed\n\n axis : int [scalar]\n Axis along which to compute the norm.\n\n threshold : number > 0 [optional]\n Only the columns (or rows) with norm at least `threshold` are\n normalized.\n\n By default, the threshold is determined from\n the numerical precision of `S.dtype`.\n\n fill : None or bool\n If None, then columns (or rows) with norm below `threshold`\n are left as is.\n\n If False, then columns (rows) with norm below `threshold`\n are set to 0.\n\n If True, then columns (rows) with norm below `threshold`\n are filled uniformly such that the corresponding norm is 1.\n\n .. 
note:: `fill=True` is incompatible with `norm=0` because\n no uniform vector exists with l0 \"norm\" equal to 1.\n\n Returns\n -------\n S_norm : np.ndarray [shape=S.shape]\n Normalized array\n\n Raises\n ------\n ParameterError\n If `norm` is not among the valid types defined above\n\n If `S` is not finite\n\n If `fill=True` and `norm=0`\n\n See Also\n --------\n scipy.linalg.norm\n\n Notes\n -----\n This function caches at level 40.\n\n Examples\n --------\n >>> # Construct an example matrix\n >>> S = np.vander(np.arange(-2.0, 2.0))\n >>> S\n array([[-8., 4., -2., 1.],\n [-1., 1., -1., 1.],\n [ 0., 0., 0., 1.],\n [ 1., 1., 1., 1.]])\n >>> # Max (l-infinity)-normalize the columns\n >>> librosa.util.normalize(S)\n array([[-1. , 1. , -1. , 1. ],\n [-0.125, 0.25 , -0.5 , 1. ],\n [ 0. , 0. , 0. , 1. ],\n [ 0.125, 0.25 , 0.5 , 1. ]])\n >>> # Max (l-infinity)-normalize the rows\n >>> librosa.util.normalize(S, axis=1)\n array([[-1. , 0.5 , -0.25 , 0.125],\n [-1. , 1. , -1. , 1. ],\n [ 0. , 0. , 0. , 1. ],\n [ 1. , 1. , 1. , 1. ]])\n >>> # l1-normalize the columns\n >>> librosa.util.normalize(S, norm=1)\n array([[-0.8 , 0.667, -0.5 , 0.25 ],\n [-0.1 , 0.167, -0.25 , 0.25 ],\n [ 0. , 0. , 0. , 0.25 ],\n [ 0.1 , 0.167, 0.25 , 0.25 ]])\n >>> # l2-normalize the columns\n >>> librosa.util.normalize(S, norm=2)\n array([[-0.985, 0.943, -0.816, 0.5 ],\n [-0.123, 0.236, -0.408, 0.5 ],\n [ 0. , 0. , 0. 
, 0.5 ],\n [ 0.123, 0.236, 0.408, 0.5 ]])\n\n >>> # Thresholding and filling\n >>> S[:, -1] = 1e-308\n >>> S\n array([[ -8.000e+000, 4.000e+000, -2.000e+000,\n 1.000e-308],\n [ -1.000e+000, 1.000e+000, -1.000e+000,\n 1.000e-308],\n [ 0.000e+000, 0.000e+000, 0.000e+000,\n 1.000e-308],\n [ 1.000e+000, 1.000e+000, 1.000e+000,\n 1.000e-308]])\n\n >>> # By default, small-norm columns are left untouched\n >>> librosa.util.normalize(S)\n array([[ -1.000e+000, 1.000e+000, -1.000e+000,\n 1.000e-308],\n [ -1.250e-001, 2.500e-001, -5.000e-001,\n 1.000e-308],\n [ 0.000e+000, 0.000e+000, 0.000e+000,\n 1.000e-308],\n [ 1.250e-001, 2.500e-001, 5.000e-001,\n 1.000e-308]])\n >>> # Small-norm columns can be zeroed out\n >>> librosa.util.normalize(S, fill=False)\n array([[-1. , 1. , -1. , 0. ],\n [-0.125, 0.25 , -0.5 , 0. ],\n [ 0. , 0. , 0. , 0. ],\n [ 0.125, 0.25 , 0.5 , 0. ]])\n >>> # Or set to constant with unit-norm\n >>> librosa.util.normalize(S, fill=True)\n array([[-1. , 1. , -1. , 1. ],\n [-0.125, 0.25 , -0.5 , 1. ],\n [ 0. , 0. , 0. , 1. ],\n [ 0.125, 0.25 , 0.5 , 1. ]])\n >>> # With an l1 norm instead of max-norm\n >>> librosa.util.normalize(S, norm=1, fill=True)\n array([[-0.8 , 0.667, -0.5 , 0.25 ],\n [-0.1 , 0.167, -0.25 , 0.25 ],\n [ 0. , 0. , 0. , 0.25 ],\n [ 0.1 , 0.167, 0.25 , 0.25 ]])"}
{"_id": "q_2162", "text": "Uses a flexible heuristic to pick peaks in a signal.\n\n A sample n is selected as a peak if the corresponding x[n]\n fulfills the following three conditions:\n\n 1. `x[n] == max(x[n - pre_max:n + post_max])`\n 2. `x[n] >= mean(x[n - pre_avg:n + post_avg]) + delta`\n 3. `n - previous_n > wait`\n\n where `previous_n` is the last sample picked as a peak (greedily).\n\n This implementation is based on [1]_ and [2]_.\n\n .. [1] Boeck, Sebastian, Florian Krebs, and Markus Schedl.\n \"Evaluating the Online Capabilities of Onset Detection Methods.\" ISMIR.\n 2012.\n\n .. [2] https://github.com/CPJKU/onset_detection/blob/master/onset_program.py\n\n\n Parameters\n ----------\n x : np.ndarray [shape=(n,)]\n input signal to pick peaks from\n\n pre_max : int >= 0 [scalar]\n number of samples before `n` over which max is computed\n\n post_max : int >= 1 [scalar]\n number of samples after `n` over which max is computed\n\n pre_avg : int >= 0 [scalar]\n number of samples before `n` over which mean is computed\n\n post_avg : int >= 1 [scalar]\n number of samples after `n` over which mean is computed\n\n delta : float >= 0 [scalar]\n threshold offset for mean\n\n wait : int >= 0 [scalar]\n number of samples to wait after picking a peak\n\n Returns\n -------\n peaks : np.ndarray [shape=(n_peaks,), dtype=int]\n indices of peaks in `x`\n\n Raises\n ------\n ParameterError\n If any input lies outside its defined range\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=15)\n >>> onset_env = librosa.onset.onset_strength(y=y, sr=sr,\n ... hop_length=512,\n ... 
aggregate=np.median)\n >>> peaks = librosa.util.peak_pick(onset_env, 3, 3, 3, 5, 0.5, 10)\n >>> peaks\n array([ 4, 23, 73, 102, 142, 162, 182, 211, 261, 301, 320,\n 331, 348, 368, 382, 396, 411, 431, 446, 461, 476, 491,\n 510, 525, 536, 555, 570, 590, 609, 625, 639])\n\n >>> import matplotlib.pyplot as plt\n >>> times = librosa.frames_to_time(np.arange(len(onset_env)),\n ... sr=sr, hop_length=512)\n >>> plt.figure()\n >>> ax = plt.subplot(2, 1, 2)\n >>> D = librosa.stft(y)\n >>> librosa.display.specshow(librosa.amplitude_to_db(D, ref=np.max),\n ... y_axis='log', x_axis='time')\n >>> plt.subplot(2, 1, 1, sharex=ax)\n >>> plt.plot(times, onset_env, alpha=0.8, label='Onset strength')\n >>> plt.vlines(times[peaks], 0,\n ... onset_env.max(), color='r', alpha=0.8,\n ... label='Selected peaks')\n >>> plt.legend(frameon=True, framealpha=0.8)\n >>> plt.axis('tight')\n >>> plt.tight_layout()"}
{"_id": "q_2163", "text": "Convert an integer buffer to floating point values.\n This is primarily useful when loading integer-valued wav data\n into numpy arrays.\n\n See Also\n --------\n buf_to_float\n\n Parameters\n ----------\n x : np.ndarray [dtype=int]\n The integer-valued data buffer\n\n n_bytes : int [1, 2, 4]\n The number of bytes per sample in `x`\n\n dtype : numeric type\n The target output type (default: 32-bit float)\n\n Returns\n -------\n x_float : np.ndarray [dtype=float]\n The input data buffer cast to floating point"}
{"_id": "q_2164", "text": "Synchronous aggregation of a multi-dimensional array between boundaries\n\n .. note::\n In order to ensure total coverage, boundary points may be added\n to `idx`.\n\n If synchronizing a feature matrix against beat tracker output, ensure\n that frame index numbers are properly aligned and use the same hop length.\n\n Parameters\n ----------\n data : np.ndarray\n multi-dimensional array of features\n\n idx : iterable of ints or slices\n Either an ordered array of boundary indices, or\n an iterable collection of slice objects.\n\n\n aggregate : function\n aggregation function (default: `np.mean`)\n\n pad : boolean\n If `True`, `idx` is padded to span the full range `[0, data.shape[axis]]`\n\n axis : int\n The axis along which to aggregate data\n\n Returns\n -------\n data_sync : ndarray\n `data_sync` will have the same dimension as `data`, except that the `axis`\n coordinate will be reduced according to `idx`.\n\n For example, a 2-dimensional `data` with `axis=-1` should satisfy\n\n `data_sync[:, i] = aggregate(data[:, idx[i-1]:idx[i]], axis=-1)`\n\n Raises\n ------\n ParameterError\n If the index set is not of consistent type (all slices or all integers)\n\n Notes\n -----\n This function caches at level 40.\n\n Examples\n --------\n Beat-synchronous CQT spectra\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> tempo, beats = librosa.beat.beat_track(y=y, sr=sr, trim=False)\n >>> C = np.abs(librosa.cqt(y=y, sr=sr))\n >>> beats = librosa.util.fix_frames(beats, x_max=C.shape[1])\n\n By default, use mean aggregation\n\n >>> C_avg = librosa.util.sync(C, beats)\n\n Use median-aggregation instead of mean\n\n >>> C_med = librosa.util.sync(C, beats,\n ... 
aggregate=np.median)\n\n Or sub-beat synchronization\n\n >>> sub_beats = librosa.segment.subsegment(C, beats)\n >>> sub_beats = librosa.util.fix_frames(sub_beats, x_max=C.shape[1])\n >>> C_med_sub = librosa.util.sync(C, sub_beats, aggregate=np.median)\n\n\n Plot the results\n\n >>> import matplotlib.pyplot as plt\n >>> beat_t = librosa.frames_to_time(beats, sr=sr)\n >>> subbeat_t = librosa.frames_to_time(sub_beats, sr=sr)\n >>> plt.figure()\n >>> plt.subplot(3, 1, 1)\n >>> librosa.display.specshow(librosa.amplitude_to_db(C,\n ... ref=np.max),\n ... x_axis='time')\n >>> plt.title('CQT power, shape={}'.format(C.shape))\n >>> plt.subplot(3, 1, 2)\n >>> librosa.display.specshow(librosa.amplitude_to_db(C_med,\n ... ref=np.max),\n ... x_coords=beat_t, x_axis='time')\n >>> plt.title('Beat synchronous CQT power, '\n ... 'shape={}'.format(C_med.shape))\n >>> plt.subplot(3, 1, 3)\n >>> librosa.display.specshow(librosa.amplitude_to_db(C_med_sub,\n ... ref=np.max),\n ... x_coords=subbeat_t, x_axis='time')\n >>> plt.title('Sub-beat synchronous CQT power, '\n ... 'shape={}'.format(C_med_sub.shape))\n >>> plt.tight_layout()"}
{"_id": "q_2165", "text": "Robustly compute a softmask operation.\n\n `M = X**power / (X**power + X_ref**power)`\n\n\n Parameters\n ----------\n X : np.ndarray\n The (non-negative) input array corresponding to the positive mask elements\n\n X_ref : np.ndarray\n The (non-negative) array of reference or background elements.\n Must have the same shape as `X`.\n\n power : number > 0 or np.inf\n If finite, returns the soft mask computed in a numerically stable way\n\n If infinite, returns a hard (binary) mask equivalent to `X > X_ref`.\n Note: for hard masks, ties are always broken in favor of `X_ref` (`mask=0`).\n\n\n split_zeros : bool\n If `True`, entries where `X` and `X_ref` are both small (close to 0)\n will receive mask values of 0.5.\n\n Otherwise, the mask is set to 0 for these entries.\n\n\n Returns\n -------\n mask : np.ndarray, shape=`X.shape`\n The output mask array\n\n Raises\n ------\n ParameterError\n If `X` and `X_ref` have different shapes.\n\n If `X` or `X_ref` are negative anywhere\n\n If `power <= 0`\n\n Examples\n --------\n\n >>> X = 2 * np.ones((3, 3))\n >>> X_ref = np.vander(np.arange(3.0))\n >>> X\n array([[ 2., 2., 2.],\n [ 2., 2., 2.],\n [ 2., 2., 2.]])\n >>> X_ref\n array([[ 0., 0., 1.],\n [ 1., 1., 1.],\n [ 4., 2., 1.]])\n >>> librosa.util.softmask(X, X_ref, power=1)\n array([[ 1. , 1. , 0.667],\n [ 0.667, 0.667, 0.667],\n [ 0.333, 0.5 , 0.667]])\n >>> librosa.util.softmask(X_ref, X, power=1)\n array([[ 0. , 0. , 0.333],\n [ 0.333, 0.333, 0.333],\n [ 0.667, 0.5 , 0.333]])\n >>> librosa.util.softmask(X, X_ref, power=2)\n array([[ 1. , 1. , 0.8],\n [ 0.8, 0.8, 0.8],\n [ 0.2, 0.5, 0.8]])\n >>> librosa.util.softmask(X, X_ref, power=4)\n array([[ 1. , 1. 
, 0.941],\n [ 0.941, 0.941, 0.941],\n [ 0.059, 0.5 , 0.941]])\n >>> librosa.util.softmask(X, X_ref, power=100)\n array([[ 1.000e+00, 1.000e+00, 1.000e+00],\n [ 1.000e+00, 1.000e+00, 1.000e+00],\n [ 7.889e-31, 5.000e-01, 1.000e+00]])\n >>> librosa.util.softmask(X, X_ref, power=np.inf)\n array([[ True, True, True],\n [ True, True, True],\n [False, False, True]], dtype=bool)"}
{"_id": "q_2166", "text": "Compute the tiny-value corresponding to an input's data type.\n\n This is the smallest \"usable\" number representable in `x`'s\n data type (e.g., float32).\n\n This is primarily useful for determining a threshold for\n numerical underflow in division or multiplication operations.\n\n Parameters\n ----------\n x : number or np.ndarray\n The array to compute the tiny-value for.\n All that matters here is `x.dtype`.\n\n Returns\n -------\n tiny_value : float\n The smallest positive usable number for the type of `x`.\n If `x` is integer-typed, then the tiny value for `np.float32`\n is returned instead.\n\n See Also\n --------\n numpy.finfo\n\n Examples\n --------\n\n For a standard double-precision floating point number:\n\n >>> librosa.util.tiny(1.0)\n 2.2250738585072014e-308\n\n Or explicitly as double-precision\n\n >>> librosa.util.tiny(np.asarray(1e-5, dtype=np.float64))\n 2.2250738585072014e-308\n\n Or complex numbers\n\n >>> librosa.util.tiny(1j)\n 2.2250738585072014e-308\n\n Single-precision floating point:\n\n >>> librosa.util.tiny(np.asarray(1e-5, dtype=np.float32))\n 1.1754944e-38\n\n Integer\n\n >>> librosa.util.tiny(5)\n 1.1754944e-38"}
{"_id": "q_2167", "text": "Read the frame images from a directory and join them as a video\n\n Args:\n frame_dir (str): The directory containing video frames.\n video_file (str): Output filename.\n fps (float): FPS of the output video.\n fourcc (str): Fourcc of the output video, this should be compatible\n with the output file type.\n filename_tmpl (str): Filename template with the index as the variable.\n start (int): Starting frame index.\n end (int): Ending frame index.\n show_progress (bool): Whether to show a progress bar."}
{"_id": "q_2168", "text": "Get frame by index.\n\n Args:\n frame_id (int): Index of the expected frame, 0-based.\n\n Returns:\n ndarray or None: Return the frame if successful, otherwise None."}
{"_id": "q_2169", "text": "Track the progress of tasks execution with a progress bar.\n\n Tasks are done with a simple for-loop.\n\n Args:\n func (callable): The function to be applied to each task.\n tasks (list or tuple[Iterable, int]): A list of tasks or\n (tasks, total num).\n bar_width (int): Width of progress bar.\n\n Returns:\n list: The task results."}
{"_id": "q_2170", "text": "Track the progress of parallel task execution with a progress bar.\n\n The built-in :mod:`multiprocessing` module is used for process pools and\n tasks are done with :func:`Pool.map` or :func:`Pool.imap_unordered`.\n\n Args:\n func (callable): The function to be applied to each task.\n tasks (list or tuple[Iterable, int]): A list of tasks or\n (tasks, total num).\n nproc (int): Process (worker) number.\n initializer (None or callable): Refer to :class:`multiprocessing.Pool`\n for details.\n initargs (None or tuple): Refer to :class:`multiprocessing.Pool` for\n details.\n chunksize (int): Refer to :class:`multiprocessing.Pool` for details.\n bar_width (int): Width of progress bar.\n skip_first (bool): Whether to skip the first sample for each worker\n when estimating fps, since the initialization step may take\n longer.\n keep_order (bool): If True, :func:`Pool.imap` is used, otherwise\n :func:`Pool.imap_unordered` is used.\n\n Returns:\n list: The task results."}
{"_id": "q_2171", "text": "Clip bboxes to fit the image shape.\n\n Args:\n bboxes (ndarray): Shape (..., 4*k)\n img_shape (tuple): (height, width) of the image.\n\n Returns:\n ndarray: Clipped bboxes."}
{"_id": "q_2172", "text": "Crop image patches.\n\n 3 steps: scale the bboxes -> clip bboxes -> crop and pad.\n\n Args:\n img (ndarray): Image to be cropped.\n bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes.\n scale (float, optional): Scale ratio of bboxes, the default value\n 1.0 means no scaling.\n pad_fill (number or list): Value to be filled for padding, None for\n no padding.\n\n Returns:\n list or ndarray: The cropped image patches."}
{"_id": "q_2173", "text": "Pad an image to a certain shape.\n\n Args:\n img (ndarray): Image to be padded.\n shape (tuple): Expected padding shape.\n pad_val (number or sequence): Values to be filled in padding areas.\n\n Returns:\n ndarray: The padded image."}
{"_id": "q_2174", "text": "Pad an image so that each edge is a multiple of some number.\n\n Args:\n img (ndarray): Image to be padded.\n divisor (int): Padded image edges will be multiples of the divisor.\n pad_val (number or sequence): Same as :func:`impad`.\n\n Returns:\n ndarray: The padded image."}
{"_id": "q_2175", "text": "Rescale a size by a ratio.\n\n Args:\n size (tuple): w, h.\n scale (float): Scaling factor.\n\n Returns:\n tuple[int]: scaled size."}
{"_id": "q_2176", "text": "Resize image to a given size.\n\n Args:\n img (ndarray): The input image.\n size (tuple): Target (w, h).\n return_scale (bool): Whether to return `w_scale` and `h_scale`.\n interpolation (str): Interpolation method, accepted values are\n \"nearest\", \"bilinear\", \"bicubic\", \"area\", \"lanczos\".\n\n Returns:\n tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or\n `resized_img`."}
{"_id": "q_2177", "text": "Resize image to the same size of a given image.\n\n Args:\n img (ndarray): The input image.\n dst_img (ndarray): The target image.\n return_scale (bool): Whether to return `w_scale` and `h_scale`.\n interpolation (str): Same as :func:`resize`.\n\n Returns:\n tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or\n `resized_img`."}
{"_id": "q_2178", "text": "Resize image while keeping the aspect ratio.\n\n Args:\n img (ndarray): The input image.\n scale (float or tuple[int]): The scaling factor or maximum size.\n If it is a float number, then the image will be rescaled by this\n factor, else if it is a tuple of 2 integers, then the image will\n be rescaled as large as possible within the scale.\n return_scale (bool): Whether to return the scaling factor besides the\n rescaled image.\n interpolation (str): Same as :func:`resize`.\n\n Returns:\n ndarray: The rescaled image."}
{"_id": "q_2179", "text": "Register a handler for some file extensions.\n\n Args:\n handler (:obj:`BaseFileHandler`): Handler to be registered.\n file_formats (str or list[str]): File formats to be handled by this\n handler."}
{"_id": "q_2180", "text": "Get priority value.\n\n Args:\n priority (int or str or :obj:`Priority`): Priority.\n\n Returns:\n int: The priority value."}
{"_id": "q_2181", "text": "Draw bboxes on an image.\n\n Args:\n img (str or ndarray): The image to be displayed.\n bboxes (list or ndarray): A list of ndarray of shape (k, 4).\n colors (list[str or tuple or Color]): A list of colors.\n top_k (int): Plot the first k bboxes only if set positive.\n thickness (int): Thickness of lines.\n show (bool): Whether to show the image.\n win_name (str): The window name.\n wait_time (int): Value of waitKey param.\n out_file (str, optional): The filename to write the image."}
{"_id": "q_2182", "text": "Read an optical flow map.\n\n Args:\n flow_or_path (ndarray or str): A flow map or filepath.\n quantize (bool): whether to read quantized pair, if set to True,\n remaining args will be passed to :func:`dequantize_flow`.\n concat_axis (int): The axis that dx and dy are concatenated,\n can be either 0 or 1. Ignored if quantize is False.\n\n Returns:\n ndarray: Optical flow represented as a (h, w, 2) numpy array"}
{"_id": "q_2183", "text": "Recover from quantized flow.\n\n Args:\n dx (ndarray): Quantized dx.\n dy (ndarray): Quantized dy.\n max_val (float): Maximum value used when quantizing.\n denorm (bool): Whether to multiply flow values with width/height.\n\n Returns:\n ndarray: Dequantized flow."}
{"_id": "q_2184", "text": "Load state_dict to a module.\n\n This method is modified from :meth:`torch.nn.Module.load_state_dict`.\n Default value for ``strict`` is set to ``False`` and the message for\n param mismatch will be shown even if strict is False.\n\n Args:\n module (Module): Module that receives the state_dict.\n state_dict (OrderedDict): Weights.\n strict (bool): whether to strictly enforce that the keys\n in :attr:`state_dict` match the keys returned by this module's\n :meth:`~torch.nn.Module.state_dict` function. Default: ``False``.\n logger (:obj:`logging.Logger`, optional): Logger to log the error\n message. If not specified, print function will be used."}
{"_id": "q_2185", "text": "Copy a model state_dict to cpu.\n\n Args:\n state_dict (OrderedDict): Model weights on GPU.\n\n Returns:\n OrderedDict: Model weights on CPU."}
{"_id": "q_2186", "text": "Init the optimizer.\n\n Args:\n optimizer (dict or :obj:`~torch.optim.Optimizer`): Either an\n optimizer object or a dict used for constructing the optimizer.\n\n Returns:\n :obj:`~torch.optim.Optimizer`: An optimizer object.\n\n Examples:\n >>> optimizer = dict(type='SGD', lr=0.01, momentum=0.9)\n >>> type(runner.init_optimizer(optimizer))\n <class 'torch.optim.sgd.SGD'>"}
{"_id": "q_2187", "text": "Get current learning rates.\n\n Returns:\n list: Current learning rate of all param groups."}
{"_id": "q_2188", "text": "Register a hook into the hook list.\n\n Args:\n hook (:obj:`Hook`): The hook to be registered.\n priority (int or str or :obj:`Priority`): Hook priority.\n Lower value means higher priority."}
{"_id": "q_2189", "text": "Register default hooks for training.\n\n Default hooks include:\n\n - LrUpdaterHook\n - OptimizerStepperHook\n - CheckpointSaverHook\n - IterTimerHook\n - LoggerHook(s)"}
{"_id": "q_2190", "text": "Convert a video with ffmpeg.\n\n This provides a general api to ffmpeg, the executed command is::\n\n `ffmpeg -y <pre_options> -i <in_file> <options> <out_file>`\n\n Options(kwargs) are mapped to ffmpeg commands with the following rules:\n\n - key=val: \"-key val\"\n - key=True: \"-key\"\n - key=False: \"\"\n\n Args:\n in_file (str): Input video filename.\n out_file (str): Output video filename.\n pre_options (str): Options that appear before \"-i <in_file>\".\n print_cmd (bool): Whether to print the final ffmpeg command."}
{"_id": "q_2191", "text": "Resize a video.\n\n Args:\n in_file (str): Input video filename.\n out_file (str): Output video filename.\n size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1).\n ratio (tuple or float): Expected resize ratio, (2, 0.5) means\n (w*2, h*0.5).\n keep_ar (bool): Whether to keep original aspect ratio.\n log_level (str): Logging level of ffmpeg.\n print_cmd (bool): Whether to print the final ffmpeg command."}
{"_id": "q_2192", "text": "Load a text file and parse the content as a dict.\n\n Each line of the text file will be two or more columns split by\n whitespaces or tabs. The first column will be parsed as dict keys, and\n the following columns will be parsed as dict values.\n\n Args:\n filename(str): Filename.\n key_type(type): Type of the dict's keys. str is used by default and\n type conversion will be performed if specified.\n\n Returns:\n dict: The parsed contents."}
{"_id": "q_2193", "text": "3x3 convolution with padding"}
{"_id": "q_2194", "text": "Read an image.\n\n Args:\n img_or_path (ndarray or str): Either a numpy array or image path.\n If it is a numpy array (loaded image), then it will be returned\n as is.\n flag (str): Flags specifying the color type of a loaded image,\n candidates are `color`, `grayscale` and `unchanged`.\n\n Returns:\n ndarray: Loaded image array."}
{"_id": "q_2195", "text": "Read an image from bytes.\n\n Args:\n content (bytes): Image bytes got from files or other streams.\n flag (str): Same as :func:`imread`.\n\n Returns:\n ndarray: Loaded image array."}
{"_id": "q_2196", "text": "Write image to file\n\n Args:\n img (ndarray): Image array to be written.\n file_path (str): Image file path.\n params (None or list): Same as opencv's :func:`imwrite` interface.\n auto_mkdir (bool): If the parent folder of `file_path` does not exist,\n whether to create it automatically.\n\n Returns:\n bool: Successful or not."}
{"_id": "q_2197", "text": "Convert a BGR image to grayscale image.\n\n Args:\n img (ndarray): The input image.\n keepdim (bool): If False (by default), then return the grayscale image\n with 2 dims, otherwise 3 dims.\n\n Returns:\n ndarray: The converted grayscale image."}
{"_id": "q_2198", "text": "Convert a grayscale image to BGR image.\n\n Args:\n img (ndarray or str): The input image.\n\n Returns:\n ndarray: The converted BGR image."}
{"_id": "q_2199", "text": "Cast elements of an iterable object into some type.\n\n Args:\n inputs (Iterable): The input object.\n dst_type (type): Destination type.\n return_type (type, optional): If specified, the output object will be\n converted to this type, otherwise an iterator.\n\n Returns:\n iterator or specified type: The converted object."}
{"_id": "q_2200", "text": "Check whether it is a sequence of some type.\n\n Args:\n seq (Sequence): The sequence to be checked.\n expected_type (type): Expected type of sequence items.\n seq_type (type, optional): Expected sequence type.\n\n Returns:\n bool: Whether the sequence is valid."}
{"_id": "q_2201", "text": "Slice a list into several sub-lists of given lengths.\n\n Args:\n in_list (list): The list to be sliced.\n lens(int or list): The expected length of each out list.\n\n Returns:\n list: A list of sliced lists."}
{"_id": "q_2202", "text": "Average latest n values or all values"}
{"_id": "q_2203", "text": "Scatters tensor across multiple GPUs."}
{"_id": "q_2204", "text": "Add check points in a single line.\n\n This method is suitable for running a task on a list of items. A timer will\n be registered when the method is called for the first time.\n\n :Example:\n\n >>> import time\n >>> import mmcv\n >>> for i in range(1, 6):\n >>> # simulate a code block\n >>> time.sleep(i)\n >>> mmcv.check_time('task1')\n 2.000\n 3.000\n 4.000\n 5.000\n\n Args:\n timer_id (str): Timer identifier."}
{"_id": "q_2205", "text": "Time since the last checking.\n\n Either :func:`since_start` or :func:`since_last_check` is a checking\n operation.\n\n Returns (float): Time in seconds."}
{"_id": "q_2206", "text": "Show optical flow.\n\n Args:\n flow (ndarray or str): The optical flow to be displayed.\n win_name (str): The window name.\n wait_time (int): Value of waitKey param."}
{"_id": "q_2207", "text": "Convert flow map to RGB image.\n\n Args:\n flow (ndarray): Array of optical flow.\n color_wheel (ndarray or None): Color wheel used to map flow field to\n RGB colorspace. Default color wheel will be used if not specified.\n unknown_thr (str): Values above this threshold will be marked as\n unknown and thus ignored.\n\n Returns:\n ndarray: RGB image that can be visualized."}
{"_id": "q_2208", "text": "Build a color wheel.\n\n Args:\n bins(list or tuple, optional): Specify the number of bins for each\n color range, corresponding to six ranges: red -> yellow,\n yellow -> green, green -> cyan, cyan -> blue, blue -> magenta,\n magenta -> red. [15, 6, 4, 11, 13, 6] is used for default\n (see Middlebury).\n\n Returns:\n ndarray: Color wheel of shape (total_bins, 3)."}
{"_id": "q_2209", "text": "Scatter inputs to target gpus.\n\n The only difference from original :func:`scatter` is to add support for\n :type:`~mmcv.parallel.DataContainer`."}
{"_id": "q_2210", "text": "Scatter with support for kwargs dictionary"}
{"_id": "q_2211", "text": "Decide whether a particular character needs to be quoted.\n\n The 'quotetabs' flag indicates whether embedded tabs and spaces should be\n quoted. Note that line-ending tabs and spaces are always encoded, as per\n RFC 1521."}
{"_id": "q_2212", "text": "Quote a single character."}
{"_id": "q_2213", "text": "Read 'input', apply quoted-printable encoding, and write to 'output'.\n\n 'input' and 'output' are files with readline() and write() methods.\n The 'quotetabs' flag indicates whether embedded tabs and spaces should be\n quoted. Note that line-ending tabs and spaces are always encoded, as per\n RFC 1521.\n The 'header' flag indicates whether we are encoding spaces as _ as per\n RFC 1522."}
{"_id": "q_2214", "text": "Get the integer value of a hexadecimal number."}
{"_id": "q_2215", "text": "Encode a string using Base64.\n\n s is the string to encode. Optional altchars must be a string of at least\n length 2 (additional characters are ignored) which specifies an\n alternative alphabet for the '+' and '/' characters. This allows an\n application to e.g. generate url or filesystem safe Base64 strings.\n\n The encoded string is returned."}
{"_id": "q_2216", "text": "Decode a Base64 encoded string.\n\n s is the string to decode. Optional altchars must be a string of at least\n length 2 (additional characters are ignored) which specifies the\n alternative alphabet used instead of the '+' and '/' characters.\n\n The decoded string is returned. A TypeError is raised if s is\n incorrectly padded. Characters that are neither in the normal base-64\n alphabet nor the alternative alphabet are discarded prior to the padding\n check."}
{"_id": "q_2217", "text": "Encode a string using Base32.\n\n s is the string to encode. The encoded string is returned."}
{"_id": "q_2218", "text": "Decode a Base32 encoded string.\n\n s is the string to decode. Optional casefold is a flag specifying whether\n a lowercase alphabet is acceptable as input. For security purposes, the\n default is False.\n\n RFC 3548 allows for optional mapping of the digit 0 (zero) to the letter O\n (oh), and for optional mapping of the digit 1 (one) to either the letter I\n (eye) or letter L (el). The optional argument map01 when not None,\n specifies which letter the digit 1 should be mapped to (when map01 is not\n None, the digit 0 is always mapped to the letter O). For security\n purposes the default is None, so that 0 and 1 are not allowed in the\n input.\n\n The decoded string is returned. A TypeError is raised if s is\n incorrectly padded or if there are non-alphabet characters present in the\n string."}
{"_id": "q_2219", "text": "Decode a Base16 encoded string.\n\n s is the string to decode. Optional casefold is a flag specifying whether\n a lowercase alphabet is acceptable as input. For security purposes, the\n default is False.\n\n The decoded string is returned. A TypeError is raised if s is\n incorrectly padded or if there are non-alphabet characters present in the\n string."}
{"_id": "q_2220", "text": "Encode a file."}
{"_id": "q_2221", "text": "Returns a zero-length range located just after the end of this range."}
{"_id": "q_2222", "text": "Returns a zero-based column number of the beginning of this range."}
{"_id": "q_2223", "text": "Returns the line number of the beginning of this range."}
{"_id": "q_2224", "text": "Returns the lines of source code containing the entirety of this range."}
{"_id": "q_2225", "text": "An AST comparison function. Returns ``True`` if all fields in\n ``left`` are equal to fields in ``right``; if ``compare_locs`` is\n true, all locations should match as well."}
{"_id": "q_2226", "text": "Convert a 32-bit or 64-bit integer created\n by float_pack into a Python float."}
{"_id": "q_2227", "text": "Convert a Python float x into a 64-bit unsigned integer\n with the same byte representation."}
{"_id": "q_2228", "text": "A context manager that appends ``note`` to every diagnostic processed by\n this engine."}
{"_id": "q_2229", "text": "Format a list of traceback entry tuples for printing.\n\n Given a list of tuples as returned by extract_tb() or\n extract_stack(), return a list of strings ready for printing.\n Each string in the resulting list corresponds to the item with the\n same index in the argument list. Each string ends in a newline;\n the strings may contain internal newlines as well, for those items\n whose source text line is not None."}
{"_id": "q_2230", "text": "Print up to 'limit' stack trace entries from the traceback 'tb'.\n\n If 'limit' is omitted or None, all entries are printed. If 'file'\n is omitted or None, the output goes to sys.stderr; otherwise\n 'file' should be an open file or file-like object with a write()\n method."}
{"_id": "q_2231", "text": "Print exception up to 'limit' stack trace entries from 'tb' to 'file'.\n\n This differs from print_tb() in the following ways: (1) if\n traceback is not None, it prints a header \"Traceback (most recent\n call last):\"; (2) it prints the exception type and value after the\n stack trace; (3) if type is SyntaxError and value has the\n appropriate format, it prints the line where the syntax error\n occurred with a caret on the next line indicating the approximate\n position of the error."}
{"_id": "q_2232", "text": "Format a stack trace and the exception information.\n\n The arguments have the same meaning as the corresponding arguments\n to print_exception(). The return value is a list of strings, each\n ending in a newline and some containing internal newlines. When\n these lines are concatenated and printed, exactly the same text is\n printed as does print_exception()."}
{"_id": "q_2233", "text": "x, random=random.random -> shuffle list x in place; return None.\n\n Optional arg random is a 0-argument function returning a random\n float in [0.0, 1.0); by default, the standard random.random."}
{"_id": "q_2234", "text": "Return a list of slot names for a given class.\n\n This needs to find slots defined by the class and its bases, so we\n can't simply return the __slots__ attribute. We must walk down\n the Method Resolution Order and concatenate the __slots__ of each\n class found there. (This assumes classes don't modify their\n __slots__ attribute to misrepresent their slots after the class is\n defined.)"}
{"_id": "q_2235", "text": "Convert a cmp= function into a key= function"}
{"_id": "q_2236", "text": "Read header lines.\n\n Read header lines up to the entirely blank line that terminates them.\n The (normally blank) line that ends the headers is skipped, but not\n included in the returned list. If a non-header line ends the headers,\n (which is an error), an attempt is made to backspace over it; it is\n never included in the returned list.\n\n The variable self.status is set to the empty string if all went well,\n otherwise it is an error message. The variable self.headers is a\n completely uninterpreted list of lines contained in the header (so\n printing them will reproduce the header exactly as it appears in the\n file)."}
{"_id": "q_2237", "text": "Determine whether a given line is a legal header.\n\n This method should return the header name, suitably canonicalized.\n You may override this method in order to use Message parsing on tagged\n data in RFC 2822-like formats with special header formats."}
{"_id": "q_2238", "text": "Get the first header line matching name.\n\n This is similar to getallmatchingheaders, but it returns only the\n first matching header (and its continuation lines)."}
{"_id": "q_2239", "text": "Get all values for a header.\n\n This returns a list of values for headers given more than once; each\n value in the result list is stripped in the same way as the result of\n getheader(). If the header is not given, return an empty list."}
{"_id": "q_2240", "text": "Get a list of addresses from a header.\n\n Retrieves a list of addresses from a header, where each address is a\n tuple as returned by getaddr(). Scans all named headers, so it works\n properly with multiple To: or Cc: headers for example."}
{"_id": "q_2241", "text": "Parse up to the start of the next address."}
{"_id": "q_2242", "text": "Get the complete domain name from an address."}
{"_id": "q_2243", "text": "Parse a sequence of RFC 2822 phrases.\n\n A phrase is a sequence of words, which are in turn either RFC 2822\n atoms or quoted-strings. Phrases are canonicalized by squeezing all\n runs of continuous whitespace into one space."}
{"_id": "q_2244", "text": "year, month -> number of days in that month in that year."}
{"_id": "q_2245", "text": "year, month, day -> ordinal, considering 01-Jan-0001 as day 1."}
{"_id": "q_2246", "text": "Return a new date with new values for the specified fields."}
{"_id": "q_2247", "text": "Return a 3-tuple containing ISO year, week number, and weekday.\n\n The first ISO week of the year is the (Mon-Sun) week\n containing the year's first Thursday; everything else derives\n from that.\n\n The first week is 1; Monday is 1 ... Sunday is 7.\n\n ISO calendar algorithm taken from\n http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm"}
{"_id": "q_2248", "text": "Return the timezone name.\n\n Note that the name is 100% informational -- there's no requirement that\n it mean anything in particular. For example, \"GMT\", \"UTC\", \"-500\",\n \"-5:00\", \"EDT\", \"US/Eastern\", \"America/New York\" are all valid replies."}
{"_id": "q_2249", "text": "Return a new time with new values for the specified fields."}
{"_id": "q_2250", "text": "Return the time part, with same tzinfo."}
{"_id": "q_2251", "text": "Return a new datetime with new values for the specified fields."}
{"_id": "q_2252", "text": "Same as a + b, for a and b sequences."}
{"_id": "q_2253", "text": "Return the first index of b in a."}
{"_id": "q_2254", "text": "Same as a += b, for a and b sequences."}
{"_id": "q_2255", "text": "Return the string obtained by replacing the leftmost\n non-overlapping occurrences of the pattern in string by the\n replacement repl. repl can be either a string or a callable;\n if a string, backslash escapes in it are processed. If it is\n a callable, it's passed the match object and must return\n a replacement string to be used."}
{"_id": "q_2256", "text": "Split the source string by the occurrences of the pattern,\n returning a list containing the resulting substrings."}
{"_id": "q_2257", "text": "Return a list of all non-overlapping matches in the string.\n\n If one or more groups are present in the pattern, return a\n list of groups; this will be a list of tuples if the pattern\n has more than one group.\n\n Empty matches are included in the result."}
{"_id": "q_2258", "text": "Decode uuencoded file"}
{"_id": "q_2259", "text": "Return number of `ch` characters at the start of `line`.\n\n Example:\n\n >>> _count_leading(' abc', ' ')\n 3"}
{"_id": "q_2260", "text": "r\"\"\"\n Compare two sequences of lines; generate the delta as a unified diff.\n\n Unified diffs are a compact way of showing line changes and a few\n lines of context. The number of context lines is set by 'n' which\n defaults to three.\n\n By default, the diff control lines (those with ---, +++, or @@) are\n created with a trailing newline. This is helpful so that inputs\n created from file.readlines() result in diffs that are suitable for\n file.writelines() since both the inputs and outputs have trailing\n newlines.\n\n For inputs that do not have trailing newlines, set the lineterm\n argument to \"\" so that the output will be uniformly newline free.\n\n The unidiff format normally has a header for filenames and modification\n times. Any or all of these may be specified using strings for\n 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.\n The modification times are normally expressed in the ISO 8601 format.\n\n Example:\n\n >>> for line in unified_diff('one two three four'.split(),\n ... 'zero one tree four'.split(), 'Original', 'Current',\n ... '2005-01-26 23:30:50', '2010-04-02 10:20:52',\n ... lineterm=''):\n ... print line # doctest: +NORMALIZE_WHITESPACE\n --- Original 2005-01-26 23:30:50\n +++ Current 2010-04-02 10:20:52\n @@ -1,4 +1,4 @@\n +zero\n one\n -two\n -three\n +tree\n four"}
{"_id": "q_2261", "text": "r\"\"\"\n Compare two sequences of lines; generate the delta as a context diff.\n\n Context diffs are a compact way of showing line changes and a few\n lines of context. The number of context lines is set by 'n' which\n defaults to three.\n\n By default, the diff control lines (those with *** or ---) are\n created with a trailing newline. This is helpful so that inputs\n created from file.readlines() result in diffs that are suitable for\n file.writelines() since both the inputs and outputs have trailing\n newlines.\n\n For inputs that do not have trailing newlines, set the lineterm\n argument to \"\" so that the output will be uniformly newline free.\n\n The context diff format normally has a header for filenames and\n modification times. Any or all of these may be specified using\n strings for 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.\n The modification times are normally expressed in the ISO 8601 format.\n If not specified, the strings default to blanks.\n\n Example:\n\n >>> print ''.join(context_diff('one\\ntwo\\nthree\\nfour\\n'.splitlines(1),\n ... 'zero\\none\\ntree\\nfour\\n'.splitlines(1), 'Original', 'Current')),\n *** Original\n --- Current\n ***************\n *** 1,4 ****\n one\n ! two\n ! three\n four\n --- 1,4 ----\n + zero\n one\n ! tree\n four"}
{"_id": "q_2262", "text": "Make a new Match object from a sequence or iterable"}
{"_id": "q_2263", "text": "Return list of triples describing matching subsequences.\n\n Each triple is of the form (i, j, n), and means that\n a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in\n i and in j. New in Python 2.5, it's also guaranteed that if\n (i, j, n) and (i', j', n') are adjacent triples in the list, and\n the second is not the last triple in the list, then i+n != i' or\n j+n != j'. IOW, adjacent triples never describe adjacent equal\n blocks.\n\n The last triple is a dummy, (len(a), len(b), 0), and is the only\n triple with n==0.\n\n >>> s = SequenceMatcher(None, \"abxcd\", \"abcd\")\n >>> s.get_matching_blocks()\n [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)]"}
{"_id": "q_2264", "text": "r\"\"\"\n Compare two sequences of lines; generate the resulting delta.\n\n Each sequence must contain individual single-line strings ending with\n newlines. Such sequences can be obtained from the `readlines()` method\n of file-like objects. The delta generated also consists of newline-\n terminated strings, ready to be printed as-is via the writeline()\n method of a file-like object.\n\n Example:\n\n >>> print ''.join(Differ().compare('one\\ntwo\\nthree\\n'.splitlines(1),\n ... 'ore\\ntree\\nemu\\n'.splitlines(1))),\n - one\n ? ^\n + ore\n ? ^\n - two\n - three\n ? -\n + tree\n + emu"}
{"_id": "q_2265", "text": "r\"\"\"\n Format \"?\" output and deal with leading tabs.\n\n Example:\n\n >>> d = Differ()\n >>> results = d._qformat('\\tabcDefghiJkl\\n', '\\tabcdefGhijkl\\n',\n ... ' ^ ^ ^ ', ' ^ ^ ^ ')\n >>> for line in results: print repr(line)\n ...\n '- \\tabcDefghiJkl\\n'\n '? \\t ^ ^ ^\\n'\n '+ \\tabcdefGhijkl\\n'\n '? \\t ^ ^ ^\\n'"}
{"_id": "q_2266", "text": "Returns HTML file of side by side comparison with change highlights\n\n Arguments:\n fromlines -- list of \"from\" lines\n tolines -- list of \"to\" lines\n fromdesc -- \"from\" file column header string\n todesc -- \"to\" file column header string\n context -- set to True for contextual differences (defaults to False\n which shows full differences).\n numlines -- number of context lines. When context is set True,\n controls number of lines displayed before and after the change.\n When context is False, controls the number of lines to place\n the \"next\" link anchors before the next change (so click of\n \"next\" link jumps to just before the change)."}
{"_id": "q_2267", "text": "Builds list of text lines by splitting text lines at wrap point\n\n This function will determine if the input text line needs to be\n wrapped (split) into separate lines. If so, the first wrap point\n will be determined and the first line appended to the output\n text line list. This function is used recursively to handle\n the second part of the split line to further split it."}
{"_id": "q_2268", "text": "Collects mdiff output into separate lists\n\n Before storing the mdiff from/to data into a list, it is converted\n into a single line of text with HTML markup."}
{"_id": "q_2269", "text": "Create unique anchor prefixes"}
{"_id": "q_2270", "text": "Returns HTML table of side by side comparison with change highlights\n\n Arguments:\n fromlines -- list of \"from\" lines\n tolines -- list of \"to\" lines\n fromdesc -- \"from\" file column header string\n todesc -- \"to\" file column header string\n context -- set to True for contextual differences (defaults to False\n which shows full differences).\n numlines -- number of context lines. When context is set True,\n controls number of lines displayed before and after the change.\n When context is False, controls the number of lines to place\n the \"next\" link anchors before the next change (so click of\n \"next\" link jumps to just before the change)."}
{"_id": "q_2271", "text": "Create and return a benchmark that runs work_func p times in parallel."}
{"_id": "q_2272", "text": "List directory contents, using cache."}
{"_id": "q_2273", "text": "Format a Python o into a pretty-printed representation."}
{"_id": "q_2274", "text": "A decorator returning a function that first runs ``inner_rule`` and then, if its\n return value is not None, maps that value using ``mapper``.\n\n If the value being mapped is a tuple, it is expanded into multiple arguments.\n\n Similar to attaching semantic actions to rules in traditional parser generators."}
{"_id": "q_2275", "text": "A rule that accepts a sequence of tokens satisfying ``rules`` and returns a tuple\n containing their return values, or None if the first rule was not satisfied."}
{"_id": "q_2276", "text": "A rule that accepts token of kind ``newline`` and returns an empty list."}
{"_id": "q_2277", "text": "Join a base URL and a possibly relative URL to form an absolute\n interpretation of the latter."}
{"_id": "q_2278", "text": "Removes any existing fragment from URL.\n\n Returns a tuple of the defragmented URL and the fragment. If\n the URL contained no fragments, the second element is the\n empty string."}
{"_id": "q_2279", "text": "Return a new SplitResult object replacing specified fields with new values"}
{"_id": "q_2280", "text": "Test whether a path is a regular file"}
{"_id": "q_2281", "text": "Return true if the pathname refers to an existing directory."}
{"_id": "q_2282", "text": "Given a list of pathnames, returns the longest common leading component"}
{"_id": "q_2283", "text": "Wrap a single paragraph of text, returning a list of wrapped lines.\n\n Reformat the single paragraph in 'text' so it fits in lines of no\n more than 'width' columns, and return a list of wrapped lines. By\n default, tabs in 'text' are expanded with string.expandtabs(), and\n all other whitespace characters (including newline) are converted to\n space. See TextWrapper class for available keyword args to customize\n wrapping behaviour."}
{"_id": "q_2284", "text": "Fill a single paragraph of text, returning a new string.\n\n Reformat the single paragraph in 'text' to fit in lines of no more\n than 'width' columns, and return a new string containing the entire\n wrapped paragraph. As with wrap(), tabs are expanded and other\n whitespace characters converted to space. See TextWrapper class for\n available keyword args to customize wrapping behaviour."}
{"_id": "q_2285", "text": "Remove any common leading whitespace from every line in `text`.\n\n This can be used to make triple-quoted strings line up with the left\n edge of the display, while still presenting them in the source code\n in indented form.\n\n Note that tabs and spaces are both treated as whitespace, but they\n are not equal: the lines \" hello\" and \"\\\\thello\" are\n considered to have no common leading whitespace. (This behaviour is\n new in Python 2.5; older versions of this module incorrectly\n expanded tabs before searching for common leading whitespace.)"}
{"_id": "q_2286", "text": "Transform a list of characters into a list of longs."}
{"_id": "q_2287", "text": "Initialize the message-digest and set all fields to zero."}
{"_id": "q_2288", "text": "Shallow copy operation on arbitrary Python objects.\n\n See the module's __doc__ string for more info."}
{"_id": "q_2289", "text": "Deep copy operation on arbitrary Python objects.\n\n See the module's __doc__ string for more info."}
{"_id": "q_2290", "text": "Keeps a reference to the object x in the memo.\n\n Because we remember objects by their id, we have\n to assure that possibly temporary objects are kept\n alive by referencing them.\n We store a reference at the id of the memo, which should\n normally not be used unless someone tries to deepcopy\n the memo itself..."}
{"_id": "q_2291", "text": "Issue a deprecation warning for Python 3.x related changes.\n\n Warnings are omitted unless Python is started with the -3 option."}
{"_id": "q_2292", "text": "Hook to write a warning to a file; replace if you like."}
{"_id": "q_2293", "text": "Issue a warning, or maybe ignore it or raise an exception."}
{"_id": "q_2294", "text": "Compute the hash value of a set.\n\n Note that we don't define __hash__: not all sets are hashable.\n But if you define a hashable set type, its __hash__ should\n call this function.\n\n This must be compatible with __eq__.\n\n All sets ought to compare equal if they contain the same\n elements, regardless of how they are implemented, and\n regardless of the order of the elements; so there's not much\n freedom for __eq__ or __hash__. We match the algorithm used\n by the built-in frozenset type."}
{"_id": "q_2295", "text": "Remove an element. If not a member, raise a KeyError."}
{"_id": "q_2296", "text": "Return the popped value. Raise KeyError if empty."}
{"_id": "q_2297", "text": "Release a lock, decrementing the recursion level.\n\n If after the decrement it is zero, reset the lock to unlocked (not owned\n by any thread), and if any other threads are blocked waiting for the\n lock to become unlocked, allow exactly one of them to proceed. If after\n the decrement the recursion level is still nonzero, the lock remains\n locked and owned by the calling thread.\n\n Only call this method when the calling thread owns the lock. A\n RuntimeError is raised if this method is called when the lock is\n unlocked.\n\n There is no return value."}
{"_id": "q_2298", "text": "Wait until notified or until a timeout occurs.\n\n If the calling thread has not acquired the lock when this method is\n called, a RuntimeError is raised.\n\n This method releases the underlying lock, and then blocks until it is\n awakened by a notify() or notifyAll() call for the same condition\n variable in another thread, or until the optional timeout occurs. Once\n awakened or timed out, it re-acquires the lock and returns.\n\n When the timeout argument is present and not None, it should be a\n floating point number specifying a timeout for the operation in seconds\n (or fractions thereof).\n\n When the underlying lock is an RLock, it is not released using its\n release() method, since this may not actually unlock the lock when it\n was acquired multiple times recursively. Instead, an internal interface\n of the RLock class is used, which really unlocks it even when it has\n been recursively acquired several times. Another internal interface is\n then used to restore the recursion level when the lock is reacquired."}
{"_id": "q_2299", "text": "Wake up one or more threads waiting on this condition, if any.\n\n If the calling thread has not acquired the lock when this method is\n called, a RuntimeError is raised.\n\n This method wakes up at most n of the threads waiting for the condition\n variable; it is a no-op if no threads are waiting."}
{"_id": "q_2300", "text": "Acquire a semaphore, decrementing the internal counter by one.\n\n When invoked without arguments: if the internal counter is larger than\n zero on entry, decrement it by one and return immediately. If it is zero\n on entry, block, waiting until some other thread has called release() to\n make it larger than zero. This is done with proper interlocking so that\n if multiple acquire() calls are blocked, release() will wake exactly one\n of them up. The implementation may pick one at random, so the order in\n which blocked threads are awakened should not be relied on. There is no\n return value in this case.\n\n When invoked with blocking set to true, do the same thing as when called\n without arguments, and return true.\n\n When invoked with blocking set to false, do not block. If a call without\n an argument would block, return false immediately; otherwise, do the\n same thing as when called without arguments, and return true."}
{"_id": "q_2301", "text": "Block until the internal flag is true.\n\n If the internal flag is true on entry, return immediately. Otherwise,\n block until another thread calls set() to set the flag to true, or until\n the optional timeout occurs.\n\n When the timeout argument is present and not None, it should be a\n floating point number specifying a timeout for the operation in seconds\n (or fractions thereof).\n\n This method returns the internal flag on exit, so it will always return\n True except if a timeout is given and the operation times out."}
{"_id": "q_2302", "text": "Start the thread's activity.\n\n It must be called at most once per thread object. It arranges for the\n object's run() method to be invoked in a separate thread of control.\n\n This method will raise a RuntimeError if called more than once on the\n same thread object."}
{"_id": "q_2303", "text": "Method representing the thread's activity.\n\n You may override this method in a subclass. The standard run() method\n invokes the callable object passed to the object's constructor as the\n target argument, if any, with sequential and keyword arguments taken\n from the args and kwargs arguments, respectively."}
{"_id": "q_2304", "text": "Wait until the thread terminates.\n\n This blocks the calling thread until the thread whose join() method is\n called terminates -- either normally or through an unhandled exception\n or until the optional timeout occurs.\n\n When the timeout argument is present and not None, it should be a\n floating point number specifying a timeout for the operation in seconds\n (or fractions thereof). As join() always returns None, you must call\n isAlive() after join() to decide whether a timeout happened -- if the\n thread is still alive, the join() call timed out.\n\n When the timeout argument is not present or None, the operation will\n block until the thread terminates.\n\n A thread can be join()ed many times.\n\n join() raises a RuntimeError if an attempt is made to join the current\n thread as that would cause a deadlock. It is also an error to join() a\n thread before it has been started and attempts to do so raises the same\n exception."}
{"_id": "q_2305", "text": "quotetabs=True means that tab and space characters are always\n quoted.\n istext=False means that \\r and \\n are treated as regular characters\n header=True encodes space characters with '_' and requires\n real '_' characters to be quoted."}
{"_id": "q_2306", "text": "Return a comma-separated list of option strings & metavariables."}
{"_id": "q_2307", "text": "Update the option values from an arbitrary dictionary, but only\n use keys from dict that already have a corresponding attribute\n in self. Any keys in dict without a corresponding attribute\n are silently ignored."}
{"_id": "q_2308", "text": "Insert item x in list a, and keep it sorted assuming a is sorted.\n\n If x is already in a, insert it to the right of the rightmost x.\n\n Optional args lo (default 0) and hi (default len(a)) bound the\n slice of a to be searched."}
{"_id": "q_2309", "text": "Lock a mutex, call the function with supplied argument\n when it is acquired. If the mutex is already locked, place\n function and argument in the queue."}
{"_id": "q_2310", "text": "Unlock a mutex. If the queue is not empty, call the next\n function with its argument."}
{"_id": "q_2311", "text": "Return a clone object.\n\n Return a copy ('clone') of the md5 object. This can be used\n to efficiently compute the digests of strings that share\n a common initial substring."}
{"_id": "q_2312", "text": "Return the string obtained by replacing the leftmost non-overlapping\n occurrences of pattern in string by the replacement repl."}
{"_id": "q_2313", "text": "Split string by the occurrences of pattern."}
{"_id": "q_2314", "text": "Creates a tuple of index pairs representing matched groups."}
{"_id": "q_2315", "text": "Skips forward in a string as fast as possible using information from\n an optimization info block."}
{"_id": "q_2316", "text": "Creates a new child context of this context and pushes it on the\n stack. pattern_offset is the offset off the current code position to\n start interpreting from."}
{"_id": "q_2317", "text": "Checks whether a character matches set of arbitrary length. Assumes\n the code pointer is at the first member of the set."}
{"_id": "q_2318", "text": "Remove the exponent by changing intpart and fraction."}
{"_id": "q_2319", "text": "Return the subset of the list NAMES that match PAT"}
{"_id": "q_2320", "text": "Put an item into the queue.\n\n If optional args 'block' is true and 'timeout' is None (the default),\n block if necessary until a free slot is available. If 'timeout' is\n a non-negative number, it blocks at most 'timeout' seconds and raises\n the Full exception if no free slot was available within that time.\n Otherwise ('block' is false), put an item on the queue if a free slot\n is immediately available, else raise the Full exception ('timeout'\n is ignored in that case)."}
{"_id": "q_2321", "text": "Combine multiple context managers into a single nested context manager.\n\n This function has been deprecated in favour of the multiple manager form\n of the with statement.\n\n The one advantage of this function over the multiple manager form of the\n with statement is that argument unpacking allows it to be\n used with a variable number of context managers as follows:\n\n with nested(*managers):\n do_something()"}
{"_id": "q_2322", "text": "Read and decodes JSON response."}
{"_id": "q_2323", "text": "Process coroutine callback function"}
{"_id": "q_2324", "text": "For crawling multiple urls"}
{"_id": "q_2325", "text": "Init a Request class for crawling html"}
{"_id": "q_2326", "text": "Actually start crawling."}
{"_id": "q_2327", "text": "Returns the TensorFlow variables used by the baseline.\n\n Returns:\n List of variables"}
{"_id": "q_2328", "text": "Creates a baseline from a specification dict."}
{"_id": "q_2329", "text": "Iteration loop body of the conjugate gradient algorithm.\n\n Args:\n x: Current solution estimate $x_t$.\n iteration: Current iteration counter $t$.\n conjugate: Current conjugate $c_t$.\n residual: Current residual $r_t$.\n squared_residual: Current squared residual $r_t^2$.\n\n Returns:\n Updated arguments for next iteration."}
{"_id": "q_2330", "text": "Returns the target optimizer arguments including the time, the list of variables to \n optimize, and various functions which the optimizer might require to perform an update \n step.\n\n Returns:\n Target optimizer arguments as dict."}
{"_id": "q_2331", "text": "Creates an environment from a specification dict."}
{"_id": "q_2332", "text": "Pass through rest role."}
{"_id": "q_2333", "text": "Rendering table element. Wrap header and body in it.\n\n :param header: header part of the table.\n :param body: body part of the table."}
{"_id": "q_2334", "text": "Worker Agent generator, receives an Agent class and creates a Worker Agent class that inherits from that Agent."}
{"_id": "q_2335", "text": "Returns x, y from flat_position integer.\n\n Args:\n flat_position: flattened position integer\n\n Returns: x, y"}
{"_id": "q_2336", "text": "Wait until there is a state."}
{"_id": "q_2337", "text": "Creates an optimizer from a specification dict."}
{"_id": "q_2338", "text": "Registers the saver operations to the graph in context."}
{"_id": "q_2339", "text": "Saves this component's managed variables.\n\n Args:\n sess: The session for which to save the managed variables.\n save_path: The path to save data to.\n timestep: Optional, the timestep to append to the file name.\n\n Returns:\n Checkpoint path where the model was saved."}
{"_id": "q_2340", "text": "Restores the values of the managed variables from disk location.\n\n Args:\n sess: The session for which to save the managed variables.\n save_path: The path used to save the data to."}
{"_id": "q_2341", "text": "Process state.\n\n Args:\n tensor: tensor to process\n\n Returns: processed state"}
{"_id": "q_2342", "text": "Shape of preprocessed state given original shape.\n\n Args:\n shape: original state shape\n\n Returns: processed state shape"}
{"_id": "q_2343", "text": "Makes sure our optimizer is wrapped into the global_optimizer meta. This is only relevant for distributed RL."}
{"_id": "q_2344", "text": "Creates and returns the TensorFlow operations for calculating the sequence of discounted cumulative rewards\n for a given sequence of single rewards.\n\n Example:\n single rewards = 2.0 1.0 0.0 0.5 1.0 -1.0\n terminal = False, False, False, False True False\n gamma = 0.95\n final_reward = 100.0 (only matters for last episode (r=-1.0) as this episode has no terminal signal)\n horizon=3\n output = 2.95 1.45 1.38 1.45 1.0 94.0\n\n Args:\n terminal: Tensor (bool) holding the is-terminal sequence. This sequence may contain more than one\n True value. If its very last element is False (not terminating), the given `final_reward` value\n is assumed to follow the last value in the single rewards sequence (see below).\n reward: Tensor (float) holding the sequence of single rewards. If the last element of `terminal` is False,\n an assumed last reward of the value of `final_reward` will be used.\n discount (float): The discount factor (gamma). By default, take the Model's discount factor.\n final_reward (float): Reward value to use if last episode in sequence does not terminate (terminal sequence\n ends with False). This value will be ignored if horizon == 1 or discount == 0.0.\n horizon (int): The length of the horizon (e.g. for n-step cumulative rewards in continuous tasks\n without terminal signals). Use 0 (default) for an infinite horizon. Note that horizon=1 leads to the\n exact same results as a discount factor of 0.0.\n\n Returns:\n Discounted cumulative reward tensor with the same shape as `reward`."}
{"_id": "q_2345", "text": "Creates the TensorFlow operations for calculating the loss per batch instance.\n\n Args:\n states: Dict of state tensors.\n internals: Dict of prior internal state tensors.\n actions: Dict of action tensors.\n terminal: Terminal boolean tensor.\n reward: Reward tensor.\n next_states: Dict of successor state tensors.\n next_internals: List of posterior internal state tensors.\n update: Boolean tensor indicating whether this call happens during an update.\n reference: Optional reference tensor(s), in case of a comparative loss.\n\n Returns:\n Loss per instance tensor."}
{"_id": "q_2346", "text": "Creates the TensorFlow operations for calculating the full loss of a batch.\n\n Args:\n states: Dict of state tensors.\n internals: List of prior internal state tensors.\n actions: Dict of action tensors.\n terminal: Terminal boolean tensor.\n reward: Reward tensor.\n next_states: Dict of successor state tensors.\n next_internals: List of posterior internal state tensors.\n update: Boolean tensor indicating whether this call happens during an update.\n reference: Optional reference tensor(s), in case of a comparative loss.\n\n Returns:\n Loss tensor."}
{"_id": "q_2347", "text": "Creates the TensorFlow operations for performing an optimization update step based\n on the given input states and actions batch.\n\n Args:\n states: Dict of state tensors.\n internals: List of prior internal state tensors.\n actions: Dict of action tensors.\n terminal: Terminal boolean tensor.\n reward: Reward tensor.\n next_states: Dict of successor state tensors.\n next_internals: List of posterior internal state tensors.\n\n Returns:\n The optimization operation."}
{"_id": "q_2348", "text": "Creates a distribution from a specification dict."}
{"_id": "q_2349", "text": "Utility method for unbuffered observing where each tuple is inserted into TensorFlow via\n a single session call, thus avoiding race conditions in multi-threaded mode.\n\n Observe full experience tuple from the environment to learn from. Optionally pre-processes rewards.\n Child classes should call super to get the processed reward\n EX: terminal, reward = super()...\n\n Args:\n states (any): One state (usually a value tuple) or dict of states if multiple states are expected.\n actions (any): One action (usually a value tuple) or dict of states if multiple actions are expected.\n internals (any): Internal list.\n terminal (bool): boolean indicating if the episode terminated after the observation.\n reward (float): scalar reward that resulted from executing the action."}
{"_id": "q_2350", "text": "Returns a named tensor if available.\n\n Returns:\n valid: True if named tensor found, False otherwise\n tensor: If valid, will be a tensor, otherwise None"}
{"_id": "q_2351", "text": "Stores a transition in replay memory.\n\n If the memory is full, the oldest entry is replaced."}
{"_id": "q_2352", "text": "Change the priority of a leaf node"}
{"_id": "q_2353", "text": "Change the priority of a leaf node."}
{"_id": "q_2354", "text": "Similar to position++."}
{"_id": "q_2355", "text": "Sample random element with priority greater than p."}
{"_id": "q_2356", "text": "Sample minibatch of size batch_size."}
{"_id": "q_2357", "text": "Computes priorities according to loss.\n\n Args:\n loss_per_instance:"}
{"_id": "q_2358", "text": "Ends our server tcp connection."}
{"_id": "q_2359", "text": "Determines whether action is available.\n That is, executing it would change the state."}
{"_id": "q_2360", "text": "Determines whether action 'Left' is available."}
{"_id": "q_2361", "text": "Execute action, add a new tile, update the score & return the reward."}
{"_id": "q_2362", "text": "Executes action 'Left'."}
{"_id": "q_2363", "text": "Adds a random tile to the grid. Assumes that it has empty fields."}
{"_id": "q_2364", "text": "Creates the tf.train.Saver object and stores it in self.saver."}
{"_id": "q_2365", "text": "Creates and returns a list of hooks to use in a session. Populates self.saver_directory.\n\n Returns: List of hooks to use in a session."}
{"_id": "q_2366", "text": "Returns the tf op to fetch when unbuffered observations are passed in.\n\n Args:\n states (any): One state (usually a value tuple) or dict of states if multiple states are expected.\n actions (any): One action (usually a value tuple) or dict of states if multiple actions are expected.\n internals (any): Internal list.\n terminal (bool): boolean indicating if the episode terminated after the observation.\n reward (float): scalar reward that resulted from executing the action.\n\n Returns: Tf op to fetch when `observe()` is called."}
{"_id": "q_2367", "text": "Returns the list of all of the components this model consists of that can be individually saved and restored.\n For instance the network or distribution.\n\n Returns:\n List of util.SavableComponent"}
{"_id": "q_2368", "text": "Saves a component of this model to the designated location.\n\n Args:\n component_name: The component to save.\n save_path: The location to save to.\n Returns:\n Checkpoint path where the component was saved."}
{"_id": "q_2369", "text": "Restores a component's parameters from a save location.\n\n Args:\n component_name: The component to restore.\n save_path: The save location."}
{"_id": "q_2370", "text": "Return the state space. Might include subdicts if multiple states are\n available simultaneously.\n\n Returns: dict of state properties (shape and type)."}
{"_id": "q_2371", "text": "Sanity checks an actions dict, used to define the action space for an MDP.\n Throws an error or warns if mismatches are found.\n\n Args:\n actions_spec (Union[None,dict]): The spec-dict to check (or None).\n\n Returns: Tuple of 1) the action space desc and 2) whether there is only one component in the action space."}
{"_id": "q_2372", "text": "Handles the behaviour of visible bolts flying toward Marauders."}
{"_id": "q_2373", "text": "Handles the behaviour of visible bolts flying toward the player."}
{"_id": "q_2374", "text": "Launches a new bolt from a random Marauder."}
{"_id": "q_2375", "text": "Creates and stores Network and Distribution objects.\n Generates and stores all template functions."}
{"_id": "q_2376", "text": "Creates and returns the Distribution objects based on self.distributions_spec.\n\n Returns: Dict of distributions according to self.distributions_spec."}
{"_id": "q_2377", "text": "Creates a memory from a specification dict."}
{"_id": "q_2378", "text": "Initialization step preparing the arguments for the first iteration of the loop body.\n\n Args:\n x_init: Initial solution guess $x_0$.\n base_value: Value $f(x')$ at $x = x'$.\n target_value: Value $f(x_0)$ at $x = x_0$.\n estimated_improvement: Estimated value at $x = x_0$, $f(x')$ if None.\n\n Returns:\n Initial arguments for tf_step."}
{"_id": "q_2379", "text": "Iteration loop body of the line search algorithm.\n\n Args:\n x: Current solution estimate $x_t$.\n iteration: Current iteration counter $t$.\n deltas: Current difference $x_t - x'$.\n improvement: Current improvement $(f(x_t) - f(x')) / v'$.\n last_improvement: Last improvement $(f(x_{t-1}) - f(x')) / v'$.\n estimated_improvement: Current estimated value $v'$.\n\n Returns:\n Updated arguments for next iteration."}
{"_id": "q_2380", "text": "Render markdown formatted text to html.\n\n :param text: markdown formatted text content.\n :param escape: if set to False, all html tags will not be escaped.\n :param use_xhtml: output with xhtml tags.\n :param hard_wrap: if set to True, it will use the GFM line breaks feature.\n :param parse_block_html: parse text only in block level html.\n :param parse_inline_html: parse text only in inline level html."}
{"_id": "q_2381", "text": "Parse setext heading."}
{"_id": "q_2382", "text": "Grammar for hard wrap linebreak. You don't need to add two\n spaces at the end of a line."}
{"_id": "q_2383", "text": "Rendering block level code. ``pre > code``.\n\n :param code: text content of the code block.\n :param lang: language of the given code."}
{"_id": "q_2384", "text": "Rendering block level pure html content.\n\n :param html: text content of the html snippet."}
{"_id": "q_2385", "text": "Rendering the ref anchor of a footnote.\n\n :param key: identity key for the footnote.\n :param index: the index count of current footnote."}
{"_id": "q_2386", "text": "Rendering a footnote item.\n\n :param key: identity key for the footnote.\n :param text: text content of the footnote."}
{"_id": "q_2387", "text": "Convert MetaParams into TF Summary Format and create summary_op.\n\n Returns:\n Merged TF Op for TEXT summary elements, should only be executed once to reduce data duplication."}
{"_id": "q_2388", "text": "Creates the TensorFlow operations for calculating the baseline loss of a batch.\n\n Args:\n states: Dict of state tensors.\n internals: List of prior internal state tensors.\n reward: Reward tensor.\n update: Boolean tensor indicating whether this call happens during an update.\n reference: Optional reference tensor(s), in case of a comparative loss.\n\n Returns:\n Loss tensor."}
{"_id": "q_2389", "text": "Creates the TensorFlow operations for performing an optimization step on the given variables, including\n actually changing the values of the variables.\n\n Args:\n time: Time tensor. Not used for this optimizer.\n variables: List of variables to optimize.\n **kwargs: \n fn_loss : loss function tensor to differentiate.\n\n Returns:\n List of delta tensors corresponding to the updates for each optimized variable."}
{"_id": "q_2390", "text": "Constructs the extra Replay memory."}
{"_id": "q_2391", "text": "Extends the q-model loss via the dqfd large-margin loss."}
{"_id": "q_2392", "text": "Combines Q-loss and demo loss."}
{"_id": "q_2393", "text": "Stores demonstrations in the demo memory."}
{"_id": "q_2394", "text": "Performs a demonstration update by calling the demo optimization operation.\n Note that the batch data does not have to be fetched from the demo memory as this is now part of\n the TensorFlow operation of the demo update."}
{"_id": "q_2395", "text": "Ensures tasks have an action key and strings are converted to python objects"}
{"_id": "q_2396", "text": "Parses yaml as ansible.utils.parse_yaml but with linenumbers.\n\n The line numbers are stored in each node's LINE_NUMBER_KEY key."}
{"_id": "q_2397", "text": "Add additional requirements from setup.cfg to file metadata_path"}
{"_id": "q_2398", "text": "Convert an .egg-info directory into a .dist-info directory"}
{"_id": "q_2399", "text": "Returns a message that includes a set of suggested actions and optional text.\n\n :Example:\n message = MessageFactory.suggested_actions([CardAction(title='a', type=ActionTypes.im_back, value='a'),\n CardAction(title='b', type=ActionTypes.im_back, value='b'),\n CardAction(title='c', type=ActionTypes.im_back, value='c')], 'Choose a color')\n await context.send_activity(message)\n\n :param actions:\n :param text:\n :param speak:\n :param input_hint:\n :return:"}
{"_id": "q_2400", "text": "Returns a message that will display a single image or video to a user.\n\n :Example:\n message = MessageFactory.content_url('https://example.com/hawaii.jpg', 'image/jpeg',\n 'Hawaii Trip', 'A photo from our family vacation.')\n await context.send_activity(message)\n\n :param url:\n :param content_type:\n :param name:\n :param text:\n :param speak:\n :param input_hint:\n :return:"}
{"_id": "q_2401", "text": "Read storeitems from storage.\n\n :param keys:\n :return dict:"}
{"_id": "q_2402", "text": "Save storeitems to storage.\n\n :param changes:\n :return:"}
{"_id": "q_2403", "text": "Return the sanitized key.\n\n Replace characters that are not allowed in keys in Cosmos.\n\n :param key:\n :return str:"}
{"_id": "q_2404", "text": "Call the get or create methods."}
{"_id": "q_2405", "text": "Return the database link.\n\n Check if the database exists or create the db.\n\n :param doc_client:\n :param id:\n :return str:"}
{"_id": "q_2406", "text": "Fills the event properties and metrics for the QnaMessage event for telemetry.\n\n :return: A tuple of event data properties and metrics that will be sent to the BotTelemetryClient.track_event() method for the QnAMessage event. The properties and metrics returned include the standard properties logged, along with any properties passed from the get_answers() method.\n\n :rtype: EventData"}
{"_id": "q_2407", "text": "Returns the conversation reference for an activity. This can be saved as a plain old JSON\n object and then later used to message the user proactively.\n\n Usage Example:\n reference = TurnContext.get_conversation_reference(context.request)\n :param activity:\n :return:"}
{"_id": "q_2408", "text": "Give the waterfall step a unique name"}
{"_id": "q_2409", "text": "Determine if a number of Suggested Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check if Suggested Actions are supported in.\n button_cnt (int, optional): Defaults to 100. The number of Suggested Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not support that number of Suggested Actions."}
{"_id": "q_2410", "text": "Determines if a given Auth header is from the Bot Framework Emulator\n\n :param auth_header: Bearer Token, in the 'Bearer [Long String]' Format.\n :type auth_header: str\n\n :return: True, if the token was issued by the Emulator. Otherwise, false."}
{"_id": "q_2411", "text": "Returns an attachment for a hero card. Will raise a TypeError if 'card' argument is not a HeroCard.\n\n Hero cards tend to have one dominant full width image and the card's text & buttons can\n usually be found below the image.\n :return:"}
{"_id": "q_2412", "text": "Return bool, True if it succeeds otherwise False."}
{"_id": "q_2413", "text": "Reset to the default text color on console window.\n Return bool, True if it succeeds otherwise False."}
{"_id": "q_2414", "text": "WindowFromPoint from Win32.\n Return int, a native window handle."}
{"_id": "q_2415", "text": "keybd_event from Win32."}
{"_id": "q_2416", "text": "PostMessage from Win32.\n Return bool, True if it succeeds otherwise False."}
{"_id": "q_2417", "text": "SendMessage from Win32.\n Return int, the return value specifies the result of the message processing;\n it depends on the message sent."}
{"_id": "q_2418", "text": "GetConsoleTitle from Win32.\n Return str."}
{"_id": "q_2419", "text": "Check if desktop is locked.\n Return bool.\n Desktop is locked if press Win+L, Ctrl+Alt+Del or in remote desktop mode."}
{"_id": "q_2420", "text": "Create Win32 struct `INPUT` for `SendInput`.\n Return `INPUT`."}
{"_id": "q_2421", "text": "Create Win32 struct `KEYBDINPUT` for `SendInput`."}
{"_id": "q_2422", "text": "Create Win32 struct `HARDWAREINPUT` for `SendInput`."}
{"_id": "q_2423", "text": "Call IUIAutomation ElementFromPoint x,y. May return None if mouse is over cmd's title bar icon.\n Return `Control` subclass or None."}
{"_id": "q_2424", "text": "Get a native handle from point x,y and call IUIAutomation.ElementFromHandle.\n Return `Control` subclass."}
{"_id": "q_2425", "text": "Delete log file."}
{"_id": "q_2426", "text": "Return `ctypes.Array`, an iterable array of int values in argb."}
{"_id": "q_2427", "text": "Return list, a list of `Control` subclasses."}
{"_id": "q_2428", "text": "Call native SetWindowText if control has a valid native handle."}
{"_id": "q_2429", "text": "Determine whether current control is top level."}
{"_id": "q_2430", "text": "Get the top level control in which the current control lies.\n If current control is top level, return self.\n If current control is root control, return None.\n Return `PaneControl` or `WindowControl` or None."}
{"_id": "q_2431", "text": "Set top level window maximize."}
{"_id": "q_2432", "text": "Move window to screen center."}
{"_id": "q_2433", "text": "Set top level window active."}
{"_id": "q_2434", "text": "For a composite instruction, reverse the order of sub-gates.\n\n This is done by recursively mirroring all sub-instructions.\n It does not invert any gate.\n\n Returns:\n Instruction: a fresh gate with sub-gates reversed"}
{"_id": "q_2435", "text": "Invert this instruction.\n\n If the instruction is composite (i.e. has a definition),\n then its definition will be recursively inverted.\n\n Special instructions inheriting from Instruction can\n implement their own inverse (e.g. T and Tdg, Barrier, etc.)\n\n Returns:\n Instruction: a fresh instruction for the inverse\n\n Raises:\n QiskitError: if the instruction is not composite\n and an inverse has not been implemented for it."}
{"_id": "q_2436", "text": "Add classical control on register classical and value val."}
{"_id": "q_2437", "text": "Run all the passes on a QuantumCircuit\n\n Args:\n circuit (QuantumCircuit): circuit to transform via all the registered passes\n\n Returns:\n QuantumCircuit: Transformed circuit."}
{"_id": "q_2438", "text": "Do a pass and its \"requires\".\n\n Args:\n pass_ (BasePass): Pass to do.\n dag (DAGCircuit): The dag on which the pass is ran.\n options (dict): PassManager options.\n Returns:\n DAGCircuit: The transformed dag in case of a transformation pass.\n The same input dag in case of an analysis pass.\n Raises:\n TranspilerError: If the pass is not a proper pass instance."}
{"_id": "q_2439", "text": "Returns a list structure of the appended passes and its options.\n\n Returns (list): The appended passes."}
{"_id": "q_2440", "text": "Fetches the passes added to this flow controller.\n\n Returns (dict): {'options': self.options, 'passes': [passes], 'type': type(self)}"}
{"_id": "q_2441", "text": "Constructs a flow controller based on the partially evaluated controller arguments.\n\n Args:\n passes (list[BasePass]): passes to add to the flow controller.\n options (dict): PassManager options.\n **partial_controller (dict): Partially evaluated controller arguments in the form\n `{name:partial}`\n\n Raises:\n TranspilerError: When partial_controller is not well-formed.\n\n Returns:\n FlowController: A FlowController instance."}
{"_id": "q_2442", "text": "Apply a single qubit gate to the qubit.\n\n Args:\n gate(str): the single qubit gate name\n params(list): the operation parameters op['params']\n Returns:\n tuple: a tuple of U gate parameters (theta, phi, lam)\n Raises:\n QiskitError: if the gate name is not valid"}
{"_id": "q_2443", "text": "Get the matrix for a single qubit.\n\n Args:\n gate(str): the single qubit gate name\n params(list): the operation parameters op['params']\n Returns:\n array: A numpy array representing the matrix"}
{"_id": "q_2444", "text": "Return the index string for Numpy.einsum matrix-vector multiplication.\n\n The returned indices are to perform a matrix multiplication A.v where\n the matrix A is an M-qubit matrix, vector v is an N-qubit vector, and\n M <= N, and identity matrices are implied on the subsystems where A has no\n support on v.\n\n Args:\n gate_indices (list[int]): the indices of the right matrix subsystems\n to contract with the left matrix.\n number_of_qubits (int): the total number of qubits for the right matrix.\n\n Returns:\n str: An indices string for the Numpy.einsum function."}
{"_id": "q_2445", "text": "Return the index string for Numpy.einsum matrix multiplication.\n\n The returned indices are to perform a matrix multiplication A.v where\n the matrix A is an M-qubit matrix, matrix v is an N-qubit matrix, and\n M <= N, and identity matrices are implied on the subsystems where A has no\n support on v.\n\n Args:\n gate_indices (list[int]): the indices of the right matrix subsystems\n to contract with the left matrix.\n number_of_qubits (int): the total number of qubits for the right matrix.\n\n Returns:\n tuple: (mat_left, mat_right, tens_in, tens_out) of index strings that\n may be combined into a Numpy.einsum function string.\n\n Raises:\n QiskitError: if the total number of qubits plus the number of\n contracted indices is greater than 26."}
{"_id": "q_2446", "text": "Build a ``DAGCircuit`` object from a ``QuantumCircuit``.\n\n Args:\n circuit (QuantumCircuit): the input circuit.\n\n Return:\n DAGCircuit: the DAG representing the input circuit."}
{"_id": "q_2447", "text": "Function used to fit the decay cosine."}
{"_id": "q_2448", "text": "Plot coherence data.\n\n Args:\n xdata\n ydata\n std_error\n fit\n fit_function\n xunit\n exp_str\n qubit_label\n Raises:\n ImportError: If matplotlib is not installed."}
{"_id": "q_2449", "text": "Take the raw rb data and convert it into averages and std dev\n\n Args:\n raw_rb (numpy.array): m x n x l list where m is the number of seeds, n\n is the number of Clifford sequences and l is the number of qubits\n\n Return:\n numpy_array: 2 x n x l list where index 0 is the mean over seeds, 1 is\n the std dev over seeds"}
{"_id": "q_2450", "text": "Plot randomized benchmarking data.\n\n Args:\n xdata (list): list of subsequence lengths\n ydatas (list): list of lists of survival probabilities for each\n sequence\n yavg (list): mean of the survival probabilities at each sequence\n length\n yerr (list): error of the survival\n fit (list): fit parameters\n survival_prob (callable): function that computes survival probability\n ax (Axes or None): plot axis (if passed in)\n show_plt (bool): display the plot.\n\n Raises:\n ImportError: If matplotlib is not installed."}
{"_id": "q_2451", "text": "Validates the input to state visualization functions.\n\n Args:\n quantum_state (ndarray): Input state / density matrix.\n Returns:\n rho: A 2d numpy array for the density matrix.\n Raises:\n VisualizationError: Invalid input."}
{"_id": "q_2452", "text": "Trim a PIL image and remove white space."}
{"_id": "q_2453", "text": "Get the list of qubits drawing this gate would cover"}
{"_id": "q_2454", "text": "Build an ``Instruction`` object from a ``QuantumCircuit``.\n\n The instruction is anonymous (not tied to a named quantum register),\n and so can be inserted into another circuit. The instruction will\n have the same string name as the circuit.\n\n Args:\n circuit (QuantumCircuit): the input circuit.\n\n Return:\n Instruction: an instruction equivalent to the action of the\n input circuit. Upon decomposition, this instruction will\n yield the components comprising the original circuit."}
{"_id": "q_2455", "text": "Pick a convenient layout depending on the best matching\n qubit connectivity, and set the property `layout`.\n\n Args:\n dag (DAGCircuit): DAG to find layout for.\n\n Raises:\n TranspilerError: if dag wider than self.coupling_map"}
{"_id": "q_2456", "text": "Computes the qubit mapping with the best connectivity.\n\n Args:\n n_qubits (int): Number of subset qubits to consider.\n\n Returns:\n ndarray: Array of qubits to use for best connectivity mapping."}
{"_id": "q_2457", "text": "Apply barrier to circuit.\n If qargs is None, applies to all the qbits.\n Args is a list of QuantumRegister or single qubits.\n For QuantumRegister, applies barrier to all the qubits in that register."}
{"_id": "q_2458", "text": "Process an Id or IndexedId node as a bit or register type.\n\n Return a list of tuples (Register,index)."}
{"_id": "q_2459", "text": "Process a gate node.\n\n If opaque is True, process the node as an opaque gate node."}
{"_id": "q_2460", "text": "Process a CNOT gate node."}
{"_id": "q_2461", "text": "Process a measurement node."}
{"_id": "q_2462", "text": "Process an if node."}
{"_id": "q_2463", "text": "Create a DAG node out of a parsed AST op node.\n\n Args:\n name (str): operation name to apply to the dag.\n params (list): op parameters\n qargs (list(QuantumRegister, int)): qubits to attach to\n\n Raises:\n QiskitError: if encountering a non-basis opaque gate"}
{"_id": "q_2464", "text": "Return duration of supplied channels.\n\n Args:\n *channels: Supplied channels"}
{"_id": "q_2465", "text": "Return minimum start time for supplied channels.\n\n Args:\n *channels: Supplied channels"}
{"_id": "q_2466", "text": "Return maximum start time for supplied channels.\n\n Args:\n *channels: Supplied channels"}
{"_id": "q_2467", "text": "Iterable for flattening Schedule tree.\n\n Args:\n time: Shifted time due to parent\n\n Yields:\n Tuple[int, ScheduleComponent]: Tuple containing time `ScheduleComponent` starts\n at and the flattened `ScheduleComponent`."}
{"_id": "q_2468", "text": "Include unknown fields after load.\n\n Unknown fields are added with no processing at all.\n\n Args:\n valid_data (dict or list): validated data returned by ``load()``.\n many (bool): if True, data and original_data are a list.\n original_data (dict or list): data passed to ``load()`` in the\n first place.\n\n Returns:\n dict: the same ``valid_data`` extended with the unknown attributes.\n\n Inspired by https://github.com/marshmallow-code/marshmallow/pull/595."}
{"_id": "q_2469", "text": "Add validation after instantiation."}
{"_id": "q_2470", "text": "Serialize the model into a Python dict of simple types.\n\n Note that this method requires that the model is bound with\n ``@bind_schema``."}
{"_id": "q_2471", "text": "n-qubit QFT on q in circ."}
{"_id": "q_2472", "text": "Partial trace over subsystems of multi-partite vector.\n\n Args:\n vec (vector_like): complex vector N\n trace_systems (list(int)): a list of subsystems (starting from 0) to\n trace over.\n dimensions (list(int)): a list of the dimensions of the subsystems.\n If this is not set it will assume all\n subsystems are qubits.\n reverse (bool): ordering of systems in operator.\n If True system-0 is the right most system in tensor product.\n If False system-0 is the left most system in tensor product.\n\n Returns:\n ndarray: A density matrix with the appropriate subsystems traced over."}
{"_id": "q_2473", "text": "Devectorize a vectorized square matrix.\n\n Args:\n vectorized_mat (ndarray): a vectorized density matrix.\n method (str): the method of devectorization. Allowed values are\n - 'col' (default): flattens to column-major vector.\n - 'row': flattens to row-major vector.\n - 'pauli': flattens in the n-qubit Pauli basis.\n - 'pauli-weights': flattens in the n-qubit Pauli basis ordered by\n weight.\n\n Returns:\n ndarray: the resulting matrix.\n Raises:\n Exception: if input state is not an n-qubit state"}
{"_id": "q_2474", "text": "Convert a Choi-matrix to a Pauli-basis superoperator.\n\n Note that this function assumes that the Choi-matrix\n is defined in the standard column-stacking convention\n and is normalized to have trace 1. For a channel E this\n is defined as: choi = (I \\\\otimes E)(bell_state).\n\n The resulting 'rauli' R acts on input states as\n |rho_out>_p = R.|rho_in>_p\n where |rho> = vectorize(rho, method='pauli') for order=1\n and |rho> = vectorize(rho, method='pauli_weights') for order=0.\n\n Args:\n choi (matrix): the input Choi-matrix.\n order (int): ordering of the Pauli group vector.\n order=1 (default) is standard lexicographic ordering.\n Eg: [II, IX, IY, IZ, XI, XX, XY,...]\n order=0 is ordered by weights.\n Eg. [II, IX, IY, IZ, XI, XY, XZ, XX, XY,...]\n\n Returns:\n np.array: A superoperator in the Pauli basis."}
{"_id": "q_2475", "text": "Construct the outer product of two vectors.\n\n The second vector argument is optional, if absent the projector\n of the first vector will be returned.\n\n Args:\n vector1 (ndarray): the first vector.\n vector2 (ndarray): the (optional) second vector.\n\n Returns:\n np.array: The matrix |v1><v2|."}
{"_id": "q_2476", "text": "Compute the Shannon entropy of a probability vector.\n\n The Shannon entropy of a probability vector pv is defined as\n $H(pv) = - \\\\sum_j pv[j] log_b (pv[j])$ where $0 log_b 0 = 0$.\n\n Args:\n pvec (array_like): a probability vector.\n base (int): the base of the logarithm\n\n Returns:\n float: The Shannon entropy H(pvec)."}
{"_id": "q_2477", "text": "Compute the von-Neumann entropy of a quantum state.\n\n Args:\n state (array_like): a density matrix or state vector.\n\n Returns:\n float: The von-Neumann entropy S(rho)."}
{"_id": "q_2478", "text": "Compute the mutual information of a bipartite state.\n\n Args:\n state (array_like): a bipartite state-vector or density-matrix.\n d0 (int): dimension of the first subsystem.\n d1 (int or None): dimension of the second subsystem.\n\n Returns:\n float: The mutual information S(rho_A) + S(rho_B) - S(rho_AB)."}
{"_id": "q_2479", "text": "Compute the entanglement of formation of quantum state.\n\n The input quantum state must be either a bipartite state vector, or a\n 2-qubit density matrix.\n\n Args:\n state (array_like): (N) array_like or (4,4) array_like, a\n bipartite quantum state.\n d0 (int): the dimension of the first subsystem.\n d1 (int or None): the dimension of the second subsystem.\n\n Returns:\n float: The entanglement of formation."}
{"_id": "q_2480", "text": "Compute the Entanglement of Formation of a 2-qubit density matrix.\n\n Args:\n rho (array_like): (4,4) array_like, input density matrix.\n\n Returns:\n float: The entanglement of formation."}
{"_id": "q_2481", "text": "Return schedule shifted by `time`.\n\n Args:\n schedule: The schedule to shift\n time: The time to shift by\n name: Name of shifted schedule. Defaults to name of `schedule`"}
{"_id": "q_2482", "text": "Return a new schedule with the `child` schedule inserted into the `parent` at `start_time`.\n\n Args:\n parent: Schedule to be inserted into\n time: Time to be inserted defined with respect to `parent`\n child: Schedule to insert\n name: Name of the new schedule. Defaults to name of parent"}
{"_id": "q_2483", "text": "Apply u3 to q."}
{"_id": "q_2484", "text": "Start the progress bar.\n\n Parameters:\n iterations (int): Number of iterations."}
{"_id": "q_2485", "text": "Disassemble a qobj and return the circuits, run_config, and user header\n\n Args:\n qobj (Qobj): The input qobj object to disassemble\n Returns:\n circuits (list): A list of quantum circuits\n run_config (dict): The dict of the run config\n user_qobj_header (dict): The dict of any user headers in the qobj"}
{"_id": "q_2486", "text": "Calculate the Hamming distance between two bit strings\n\n Args:\n str1 (str): First string.\n str2 (str): Second string.\n Returns:\n int: Distance between strings.\n Raises:\n VisualizationError: Strings not same length"}
{"_id": "q_2487", "text": "Return quaternion for rotation about given axis.\n\n Args:\n angle (float): Angle in radians.\n axis (str): Axis for rotation\n\n Returns:\n Quaternion: Quaternion for axis rotation.\n\n Raises:\n ValueError: Invalid input axis."}
{"_id": "q_2488", "text": "Normalizes a Quaternion to unit length\n so that it represents a valid rotation.\n\n Args:\n inplace (bool): Do an inplace normalization.\n\n Returns:\n Quaternion: Normalized quaternion."}
{"_id": "q_2489", "text": "Converts a unit-length quaternion to a rotation matrix.\n\n Returns:\n ndarray: Rotation matrix."}
{"_id": "q_2490", "text": "Customize check_type for handling containers."}
{"_id": "q_2491", "text": "Check that j is a valid index into self."}
{"_id": "q_2492", "text": "Test if an array is a square matrix."}
{"_id": "q_2493", "text": "Test if an array is a diagonal matrix"}
{"_id": "q_2494", "text": "Test if an array is a symmetric matrix"}
{"_id": "q_2495", "text": "Test if an array is a Hermitian matrix"}
{"_id": "q_2496", "text": "Test if an array is an identity matrix."}
{"_id": "q_2497", "text": "Test if an array is a unitary matrix."}
{"_id": "q_2498", "text": "Transform a QuantumChannel to the Choi representation."}
{"_id": "q_2499", "text": "Transform a QuantumChannel to the PTM representation."}
{"_id": "q_2500", "text": "Transform Operator representation to other representation."}
{"_id": "q_2501", "text": "Transform Stinespring representation to Operator representation."}
{"_id": "q_2502", "text": "Transform SuperOp representation to Choi representation."}
{"_id": "q_2503", "text": "Transform Choi to SuperOp representation."}
{"_id": "q_2504", "text": "Transform Choi representation to Kraus representation."}
{"_id": "q_2505", "text": "Transform Stinespring representation to Choi representation."}
{"_id": "q_2506", "text": "Transform Chi representation to a Choi representation."}
{"_id": "q_2507", "text": "Reravel two bipartite matrices."}
{"_id": "q_2508", "text": "Resets Bloch sphere data sets to empty."}
{"_id": "q_2509", "text": "Add a text or LaTeX annotation to Bloch sphere,\n parametrized by a qubit state or a vector.\n\n Args:\n state_or_vector (array_like):\n Position for the annotation.\n Qobj of a qubit or a vector of 3 elements.\n text (str):\n Annotation text.\n You can use LaTeX, but remember to use raw string\n e.g. r\"$\\\\langle x \\\\rangle$\"\n or escape backslashes\n e.g. \"$\\\\\\\\langle x \\\\\\\\rangle$\".\n **kwargs:\n Options as for mplot3d.axes3d.text, including:\n fontsize, color, horizontalalignment, verticalalignment.\n Raises:\n Exception: If input not array_like or tuple."}
{"_id": "q_2510", "text": "Render the Bloch sphere and its data sets on the given figure and axes."}
{"_id": "q_2511", "text": "front half of sphere"}
{"_id": "q_2512", "text": "Display Bloch sphere and corresponding data sets."}
{"_id": "q_2513", "text": "Constructs the top line of the element"}
{"_id": "q_2514", "text": "Constructs the bottom line of the element"}
{"_id": "q_2515", "text": "Get the params and format them to add them to a label. None if there\n are no params or if the params are numpy.ndarrays."}
{"_id": "q_2516", "text": "Creates the label for a box."}
{"_id": "q_2517", "text": "Apply filters to deprecation warnings.\n\n Force the `DeprecationWarning` warnings to be displayed for the qiskit\n module, overriding the system configuration as they are ignored by default\n [1] for end-users. Additionally, silence the `ChangedInMarshmallow3Warning`\n messages.\n\n TODO: on Python 3.7, this might not be needed due to PEP-0565 [2].\n\n [1] https://docs.python.org/3/library/warnings.html#default-warning-filters\n [2] https://www.python.org/dev/peps/pep-0565/"}
{"_id": "q_2518", "text": "Basic hardware information about the local machine.\n\n Gives actual number of CPU's in the machine, even when hyperthreading is\n turned on. CPU count defaults to 1 when true count can't be determined.\n\n Returns:\n dict: The hardware information."}
{"_id": "q_2519", "text": "Internal function that updates the status\n of a HTML job monitor.\n\n Args:\n job_var (BaseJob): The job to keep track of.\n interval (int): The status check interval\n status (widget): HTML ipywidget for output to screen\n header (str): String representing HTML code for status.\n _interval_set (bool): Was interval set by user?"}
{"_id": "q_2520", "text": "Continuous constant pulse.\n\n Args:\n times: Times to output pulse for.\n amp: Complex pulse amplitude."}
{"_id": "q_2521", "text": "Continuous triangle wave.\n\n Args:\n times: Times to output wave for.\n amp: Pulse amplitude. Wave range is [-amp, amp].\n period: Pulse period, units of dt.\n phase: Pulse phase."}
{"_id": "q_2522", "text": "Continuous cosine wave.\n\n Args:\n times: Times to output wave for.\n amp: Pulse amplitude.\n freq: Pulse frequency, units of 1/dt.\n phase: Pulse phase."}
{"_id": "q_2523", "text": "r\"\"\"Enforce that the supplied gaussian pulse is zeroed at a specific width.\n\n This is achieved by subtracting $\\Omega_g(center \\pm zeroed_width/2)$ from all samples.\n\n amp: Pulse amplitude at `2\\times center+1`.\n center: Center (mean) of pulse.\n sigma: Width (standard deviation) of pulse.\n zeroed_width: Subtract baseline to gaussian pulses to make sure\n $\\Omega_g(center \\pm zeroed_width/2)=0$ is satisfied. This is used to avoid\n large discontinuities at the start of a gaussian pulse. If unsupplied,\n defaults to $2*(center+1)$ such that the samples are zero at $\\Omega_g(-1)$.\n rescale_amp: If `zeroed_width` is not `None` and `rescale_amp=True` the pulse will\n be rescaled so that $\\Omega_g(center)-\\Omega_g(center\\pm zeroed_width/2)=amp$.\n ret_scale_factor: Return amplitude scale factor."}
{"_id": "q_2524", "text": "r\"\"\"Continuous unnormalized gaussian pulse.\n\n Integrated area under curve is $\\Omega_g(amp, sigma) = amp \\times np.sqrt(2\\pi \\sigma^2)$\n\n Args:\n times: Times to output pulse for.\n amp: Pulse amplitude at `center`. If `zeroed_width` is set pulse amplitude at center\n will be $amp-\\Omega_g(center\\pm zeroed_width/2)$ unless `rescale_amp` is set,\n in which case all samples will be rescaled such that the center\n amplitude will be `amp`.\n center: Center (mean) of pulse.\n sigma: Width (standard deviation) of pulse.\n zeroed_width: Subtract baseline to gaussian pulses to make sure\n $\\Omega_g(center \\pm zeroed_width/2)=0$ is satisfied. This is used to avoid\n large discontinuities at the start of a gaussian pulse.\n rescale_amp: If `zeroed_width` is not `None` and `rescale_amp=True` the pulse will\n be rescaled so that $\\Omega_g(center)-\\Omega_g(center\\pm zeroed_width/2)=amp$.\n ret_x: Return centered and standard deviation normalized pulse location.\n $x=(times-center)/sigma$."}
{"_id": "q_2525", "text": "Continuous unnormalized gaussian derivative pulse.\n\n Args:\n times: Times to output pulse for.\n amp: Pulse amplitude at `center`.\n center: Center (mean) of pulse.\n sigma: Width (standard deviation) of pulse.\n ret_gaussian: Return gaussian with which derivative was taken with."}
{"_id": "q_2526", "text": "Test if this circuit has the register r.\n\n Args:\n register (Register): a quantum or classical register.\n\n Returns:\n bool: True if the register is contained in this circuit."}
{"_id": "q_2527", "text": "Mirror the circuit by reversing the instructions.\n\n This is done by recursively mirroring all instructions.\n It does not invert any gate.\n\n Returns:\n QuantumCircuit: the mirrored circuit"}
{"_id": "q_2528", "text": "Invert this circuit.\n\n This is done by recursively inverting all gates.\n\n Returns:\n QuantumCircuit: the inverted circuit\n\n Raises:\n QiskitError: if the circuit cannot be inverted."}
{"_id": "q_2529", "text": "DEPRECATED after 0.8"}
{"_id": "q_2530", "text": "Add registers."}
{"_id": "q_2531", "text": "Raise exception if list of qubits contains duplicates."}
{"_id": "q_2532", "text": "Raise exception if a qarg is not in this circuit or bad format."}
{"_id": "q_2533", "text": "Raise exception if clbit is not in this circuit or bad format."}
{"_id": "q_2534", "text": "Raise exception if the circuits are defined on incompatible registers"}
{"_id": "q_2535", "text": "Return OpenQASM string."}
{"_id": "q_2536", "text": "Count each operation kind in the circuit.\n\n Returns:\n dict: a breakdown of how many operations of each kind."}
{"_id": "q_2537", "text": "How many non-entangled subcircuits the circuit can be factored into.\n\n Args:\n unitary_only (bool): Compute only unitary part of graph.\n\n Returns:\n int: Number of connected components in circuit."}
{"_id": "q_2538", "text": "Assign parameters to values yielding a new circuit.\n\n Args:\n value_dict (dict): {parameter: value, ...}\n\n Raises:\n QiskitError: If value_dict contains parameters not present in the circuit\n\n Returns:\n QuantumCircuit: copy of self with assignment substitution."}
{"_id": "q_2539", "text": "Plot the interpolated envelope of pulse\n\n Args:\n samples (ndarray): Data points of complex pulse envelope.\n duration (int): Pulse length (number of points).\n dt (float): Time interval of samples.\n interp_method (str): Method of interpolation\n (set `None` to turn off the interpolation).\n filename (str): Name required to save pulse image.\n interactive (bool): When set true show the circuit in a new window\n (this depends on the matplotlib backend being used supporting this).\n dpi (int): Resolution of saved image.\n nop (int): Data points for interpolation.\n size (tuple): Size of figure.\n Returns:\n matplotlib.figure: A matplotlib figure object for the pulse envelope.\n Raises:\n ImportError: when the output method requires non-installed libraries.\n QiskitError: when invalid interpolation method is specified."}
{"_id": "q_2540", "text": "Search for SWAPs which allow for application of largest number of gates.\n\n Arguments:\n layout (Layout): Map from virtual qubit index to physical qubit index.\n gates (list): Gates to be mapped.\n coupling_map (CouplingMap): CouplingMap of the target backend.\n depth (int): Number of SWAP layers to search before choosing a result.\n width (int): Number of SWAPs to consider at each layer.\n Returns:\n dict: Describes solution step found.\n layout (Layout): Virtual to physical qubit map after SWAPs.\n gates_remaining (list): Gates that could not be mapped.\n gates_mapped (list): Gates that were mapped, including added SWAPs."}
{"_id": "q_2541", "text": "Map all gates that can be executed with the current layout.\n\n Args:\n layout (Layout): Map from virtual qubit index to physical qubit index.\n gates (list): Gates to be mapped.\n coupling_map (CouplingMap): CouplingMap for target device topology.\n\n Returns:\n tuple:\n mapped_gates (list): ops for gates that can be executed, mapped onto layout.\n remaining_gates (list): gates that cannot be executed on the layout."}
{"_id": "q_2542", "text": "Return the sum of the distances of two-qubit pairs in each CNOT in gates\n according to the layout and the coupling."}
{"_id": "q_2543", "text": "Return a copy of source_dag with metadata but empty.\n Generate only a single qreg in the output DAG, matching the size of the\n coupling_map."}
{"_id": "q_2544", "text": "Return op implementing a virtual gate on given layout."}
{"_id": "q_2545", "text": "Generate list of ops to implement a SWAP gate along a coupling edge."}
{"_id": "q_2546", "text": "Run one pass of the lookahead mapper on the provided DAG.\n\n Args:\n dag (DAGCircuit): the directed acyclic graph to be mapped\n Returns:\n DAGCircuit: A dag mapped to be compatible with the coupling_map in\n the property_set.\n Raises:\n TranspilerError: if the coupling map or the layout are not\n compatible with the DAG"}
{"_id": "q_2547", "text": "Add a physical qubit to the coupling graph as a node.\n\n physical_qubit (int): An integer representing a physical qubit.\n\n Raises:\n CouplingError: if trying to add duplicate qubit"}
{"_id": "q_2548", "text": "Return a CouplingMap object for a subgraph of self.\n\n nodelist (list): list of integer node labels"}
{"_id": "q_2549", "text": "Returns a sorted list of physical_qubits"}
{"_id": "q_2550", "text": "Compute the full distance matrix on pairs of nodes.\n\n The distance map self._dist_matrix is computed from the graph using\n all_pairs_shortest_path_length."}
{"_id": "q_2551", "text": "Returns the undirected distance between physical_qubit1 and physical_qubit2.\n\n Args:\n physical_qubit1 (int): A physical qubit\n physical_qubit2 (int): Another physical qubit\n\n Returns:\n int: The undirected distance\n\n Raises:\n CouplingError: if the qubits do not exist in the CouplingMap"}
{"_id": "q_2552", "text": "Add controls to all instructions."}
{"_id": "q_2553", "text": "Add classical control register to all instructions."}
{"_id": "q_2554", "text": "Subscribes to an event, so when it's emitted all the callbacks subscribed\n will be executed. We are not allowing double registration.\n\n Args\n event (string): The event to subscribe to, in the form of:\n \"terra.<component>.<method>.<action>\"\n callback (callable): The callback that will be executed when an event is\n emitted."}
{"_id": "q_2555", "text": "Emits an event if there are any subscribers.\n\n Args\n event (String): The event to be emitted\n args: Arguments linked with the event\n kwargs: Named arguments linked with the event"}
{"_id": "q_2556", "text": "Unsubscribe the specific callback to the event.\n\n Args\n event (String): The event to unsubscribe\n callback (callable): The callback that won't be executed anymore\n\n Returns\n True: if we have successfully unsubscribed to the event\n False: if there's no callback previously registered"}
{"_id": "q_2557", "text": "Call to create a circuit with gates that take the\n desired vector to zero.\n\n Returns:\n QuantumCircuit: circuit to take self.params vector to |00..0>"}
{"_id": "q_2558", "text": "Checks if value has the format of a virtual qubit"}
{"_id": "q_2559", "text": "Returns a copy of a Layout instance."}
{"_id": "q_2560", "text": "Checks if the attribute name is in the list of attributes to protect. If so, raises\n TranspilerAccessError.\n\n Args:\n name (string): the attribute name to check\n\n Raises:\n TranspilerAccessError: when name is the list of attributes to protect."}
{"_id": "q_2561", "text": "Run the StochasticSwap pass on `dag`.\n\n Args:\n dag (DAGCircuit): DAG to map.\n\n Returns:\n DAGCircuit: A mapped DAG.\n\n Raises:\n TranspilerError: if the coupling map or the layout are not\n compatible with the DAG"}
{"_id": "q_2562", "text": "Provide a DAGCircuit for a new mapped layer.\n\n i (int) = layer number\n first_layer (bool) = True if this is the first layer in the\n circuit with any multi-qubit gates\n best_layout (Layout) = layout returned from _layer_permutation\n best_depth (int) = depth returned from _layer_permutation\n best_circuit (DAGCircuit) = swap circuit returned\n from _layer_permutation\n layer_list (list) = list of DAGCircuit objects for each layer,\n output of DAGCircuit layers() method\n\n Return a DAGCircuit object to append to the output DAGCircuit\n that the _mapper method is building."}
{"_id": "q_2563", "text": "Return the Pauli group with 4^n elements.\n\n The phases have been removed.\n case 'weight' is ordered by Pauli weights and\n case 'tensor' is ordered by I,X,Y,Z counting lowest qubit fastest.\n\n Args:\n number_of_qubits (int): number of qubits\n case (str): determines ordering of group elements ('weight' or 'tensor')\n\n Returns:\n list: list of Pauli objects\n\n Raises:\n QiskitError: case is not 'weight' or 'tensor'\n QiskitError: number_of_qubits is larger than 4"}
{"_id": "q_2564", "text": "Construct pauli from boolean array.\n\n Args:\n z (numpy.ndarray): boolean, z vector\n x (numpy.ndarray): boolean, x vector\n\n Returns:\n Pauli: self\n\n Raises:\n QiskitError: if z or x are None or the length of z and x are different."}
{"_id": "q_2565", "text": "Convert to Operator object."}
{"_id": "q_2566", "text": "Convert to Pauli circuit instruction."}
{"_id": "q_2567", "text": "Update partial or entire x.\n\n Args:\n x (numpy.ndarray or list): to-be-updated x\n indices (numpy.ndarray or list or optional): to-be-updated qubit indices\n\n Returns:\n Pauli: self\n\n Raises:\n QiskitError: when updating whole x, the number of qubits must be the same."}
{"_id": "q_2568", "text": "Append pauli at the end.\n\n Args:\n paulis (Pauli): the to-be-inserted or appended pauli\n pauli_labels (list[str]): the to-be-inserted or appended pauli label\n\n Returns:\n Pauli: self"}
{"_id": "q_2569", "text": "Delete pauli at the indices.\n\n Args:\n indices(list[int]): the indices of to-be-deleted paulis\n\n Returns:\n Pauli: self"}
{"_id": "q_2570", "text": "Generate single qubit pauli at index with pauli_label with length num_qubits.\n\n Args:\n num_qubits (int): the length of pauli\n index (int): the qubit index to insert the single qubit\n pauli_label (str): pauli\n\n Returns:\n Pauli: single qubit pauli"}
{"_id": "q_2571", "text": "Simulate the outcome of measurement of a qubit.\n\n Args:\n qubit (int): the qubit to measure\n\n Return:\n tuple: pair (outcome, probability) where outcome is '0' or '1' and\n probability is the probability of the returned outcome."}
{"_id": "q_2572", "text": "Generate memory samples from current statevector.\n\n Args:\n measure_params (list): List of (qubit, cmembit) values for\n measure instructions to sample.\n num_samples (int): The number of memory samples to generate.\n\n Returns:\n list: A list of memory values in hex format."}
{"_id": "q_2573", "text": "Apply a measure instruction to a qubit.\n\n Args:\n qubit (int): qubit is the qubit measured.\n cmembit (int): is the classical memory bit to store outcome in.\n cregbit (int, optional): is the classical register bit to store outcome in."}
{"_id": "q_2574", "text": "Apply a reset instruction to a qubit.\n\n Args:\n qubit (int): the qubit being reset\n\n This is done by simulating a measurement\n outcome and projecting onto the outcome state while\n renormalizing."}
{"_id": "q_2575", "text": "Validate an initial statevector"}
{"_id": "q_2576", "text": "Set the initial statevector for simulation"}
{"_id": "q_2577", "text": "Determine if measure sampling is allowed for an experiment\n\n Args:\n experiment (QobjExperiment): a qobj experiment."}
{"_id": "q_2578", "text": "Run qobj asynchronously.\n\n Args:\n qobj (Qobj): payload of the experiment\n backend_options (dict): backend options\n\n Returns:\n BasicAerJob: derived from BaseJob\n\n Additional Information:\n backend_options: Is a dict of options for the backend. It may contain\n * \"initial_statevector\": vector_like\n\n The \"initial_statevector\" option specifies a custom\n initial statevector for the simulator to be used instead of the all\n zero state. The size of this vector must be correct for the number\n of qubits in all experiments in the qobj.\n\n Example::\n\n backend_options = {\n \"initial_statevector\": np.array([1, 0, 0, 1j]) / np.sqrt(2),\n }"}
{"_id": "q_2579", "text": "Run experiments in qobj\n\n Args:\n job_id (str): unique id for the job.\n qobj (Qobj): job description\n\n Returns:\n Result: Result object"}
{"_id": "q_2580", "text": "Semantic validations of the qobj which cannot be done via schemas."}
{"_id": "q_2581", "text": "Validate an initial unitary matrix"}
{"_id": "q_2582", "text": "Set the initial unitary for simulation"}
{"_id": "q_2583", "text": "Return the current unitary in JSON Result spec format"}
{"_id": "q_2584", "text": "Semantic validations of the qobj which cannot be done via schemas.\n Some of these may later move to backend schemas.\n 1. No shots\n 2. No measurements in the middle"}
{"_id": "q_2585", "text": "Determine if obj is a bit"}
{"_id": "q_2586", "text": "Pick a layout by assigning n circuit qubits to device qubits 0, .., n-1.\n\n Args:\n dag (DAGCircuit): DAG to find layout for.\n\n Raises:\n TranspilerError: if dag wider than self.coupling_map"}
{"_id": "q_2587", "text": "Return maximum time of timeslots over all channels.\n\n Args:\n *channels: Channels over which to obtain stop time."}
{"_id": "q_2588", "text": "Return a new TimeslotCollection shifted by `time`.\n\n Args:\n time: time to be shifted by"}
{"_id": "q_2589", "text": "Report on GitHub that the specified branch is failing to build at\n the specified commit. The method will open an issue indicating that\n the branch is failing. If there is an issue already open, it will add a\n comment to avoid reporting twice about the same failure.\n\n Args:\n branch (str): branch name to report about.\n commit (str): commit hash at which the build fails.\n infourl (str): URL with extra info about the failure such as the\n build logs."}
{"_id": "q_2590", "text": "Sort rho data"}
{"_id": "q_2591", "text": "Create a paulivec representation.\n\n Graphical representation of the input array.\n\n Args:\n rho (array): State vector or density matrix.\n figsize (tuple): Figure size in pixels.\n slider (bool): activate slider\n show_legend (bool): show legend of graph content"}
{"_id": "q_2592", "text": "Apply RZZ to circuit."}
{"_id": "q_2593", "text": "Apply Fredkin to circuit."}
{"_id": "q_2594", "text": "Extract readout and CNOT errors and compute swap costs."}
{"_id": "q_2595", "text": "Program graph has virtual qubits as nodes.\n Two nodes have an edge if the corresponding virtual qubits\n participate in a 2-qubit gate. The edge is weighted by the\n number of CNOTs between the pair."}
{"_id": "q_2596", "text": "Select the best remaining hardware qubit for the next program qubit."}
{"_id": "q_2597", "text": "Return a list of instructions for this CompositeGate.\n\n If the CompositeGate itself contains composites, call\n this method recursively."}
{"_id": "q_2598", "text": "Invert this gate."}
{"_id": "q_2599", "text": "Add controls to this gate."}
{"_id": "q_2600", "text": "Add classical control register."}
{"_id": "q_2601", "text": "Return True if operator is a unitary matrix."}
{"_id": "q_2602", "text": "Return the conjugate of the operator."}
{"_id": "q_2603", "text": "Return the transpose of the operator."}
{"_id": "q_2604", "text": "Return the matrix power of the operator.\n\n Args:\n n (int): the power to raise the matrix to.\n\n Returns:\n BaseOperator: the n-times composed operator.\n\n Raises:\n QiskitError: if the input and output dimensions of the operator\n are not equal, or the power is not a positive integer."}
{"_id": "q_2605", "text": "Return the tensor shape of the matrix operator"}
{"_id": "q_2606", "text": "Convert a QuantumCircuit or Instruction to an Operator."}
{"_id": "q_2607", "text": "Separate a bitstring according to the registers defined in the result header."}
{"_id": "q_2608", "text": "Format an experiment result memory object for measurement level 1.\n\n Args:\n memory (list): Memory from experiment with `meas_level==1`. `avg` or\n `single` will be inferred from shape of result memory.\n\n Returns:\n np.ndarray: Measurement level 1 complex numpy array\n\n Raises:\n QiskitError: If the returned numpy array does not have 1 (avg) or 2 (single)\n indices."}
{"_id": "q_2609", "text": "Format an experiment result memory object for measurement level 2.\n\n Args:\n memory (list): Memory from experiment with `meas_level==2` and `memory==True`.\n header (dict): the experiment header dictionary containing\n useful information for postprocessing.\n\n Returns:\n list[str]: List of bitstrings"}
{"_id": "q_2610", "text": "Format a single experiment result coming from backend to present\n to the Qiskit user.\n\n Args:\n counts (dict): counts histogram of multiple shots\n header (dict): the experiment header dictionary containing\n useful information for postprocessing.\n\n Returns:\n dict: a formatted counts"}
{"_id": "q_2611", "text": "Format statevector coming from the backend to present to the Qiskit user.\n\n Args:\n vec (list): a list of [re, im] complex numbers.\n decimals (int): the number of decimals in the statevector.\n If None, no rounding is done.\n\n Returns:\n list[complex]: a list of python complex numbers."}
{"_id": "q_2612", "text": "Format unitary coming from the backend to present to the Qiskit user.\n\n Args:\n mat (list[list]): a list of list of [re, im] complex numbers\n decimals (int): the number of decimals in the unitary.\n If None, no rounding is done.\n\n Returns:\n list[list[complex]]: a matrix of complex numbers"}
{"_id": "q_2613", "text": "Submit the job to the backend for execution.\n\n Raises:\n QobjValidationError: if the JSON serialization of the Qobj passed\n during construction does not validate against the Qobj schema.\n\n JobError: if trying to re-submit the job."}
{"_id": "q_2614", "text": "Gets the status of the job by querying the Python future\n\n Returns:\n qiskit.providers.JobStatus: The current JobStatus\n\n Raises:\n JobError: If the future is in an unexpected state\n concurrent.futures.TimeoutError: if timeout occurred."}
{"_id": "q_2615", "text": "Whether `lo_freq` is within the `LoRange`.\n\n Args:\n lo_freq: LO frequency to be checked\n\n Returns:\n bool: True if lo_freq is included in this range, otherwise False"}
{"_id": "q_2616", "text": "Create a bloch sphere representation.\n\n Graphical representation of the input array, using as many bloch\n spheres as qubits are required.\n\n Args:\n rho (array): State vector or density matrix\n figsize (tuple): Figure size in pixels."}
{"_id": "q_2617", "text": "Expand all op nodes to the given basis.\n\n Args:\n dag(DAGCircuit): input dag\n\n Raises:\n QiskitError: if unable to unroll given the basis due to undefined\n decomposition rules (such as a bad basis) or excessive recursion.\n\n Returns:\n DAGCircuit: output unrolled dag"}
{"_id": "q_2618", "text": "Create a Q sphere representation.\n\n Graphical representation of the input array, using a Q sphere for each\n eigenvalue.\n\n Args:\n rho (array): State vector or density matrix.\n figsize (tuple): Figure size in pixels."}
{"_id": "q_2619", "text": "Return the lex index of a combination.\n\n Args:\n n (int): the total number of options.\n k (int): The number of elements.\n lst (list): list\n\n Returns:\n int: returns int index for lex order\n\n Raises:\n VisualizationError: if length of list is not equal to k"}
{"_id": "q_2620", "text": "Returns the Instruction object corresponding to the op for the node else None"}
{"_id": "q_2621", "text": "Generates zero-sampled `SamplePulse`.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n name: Name of pulse."}
{"_id": "q_2622", "text": "Generates square wave `SamplePulse`.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude. Wave range is [-amp, amp].\n period: Pulse period, units of dt. If `None` defaults to single cycle.\n phase: Pulse phase.\n name: Name of pulse."}
{"_id": "q_2623", "text": "Generates sawtooth wave `SamplePulse`.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude. Wave range is [-amp, amp].\n period: Pulse period, units of dt. If `None` defaults to single cycle.\n phase: Pulse phase.\n name: Name of pulse."}
{"_id": "q_2624", "text": "Generates cosine wave `SamplePulse`.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude.\n freq: Pulse frequency, units of 1/dt. If `None` defaults to single cycle.\n phase: Pulse phase.\n name: Name of pulse."}
{"_id": "q_2625", "text": "Generates sine wave `SamplePulse`.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude.\n freq: Pulse frequency, units of 1/dt. If `None` defaults to single cycle.\n phase: Pulse phase.\n name: Name of pulse."}
{"_id": "q_2626", "text": "r\"\"\"Generates unnormalized gaussian `SamplePulse`.\n\n Centered at `duration/2` and zeroed at `t=-1` to prevent large initial discontinuity.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Integrated area under curve is $\\Omega_g(amp, sigma) = amp \\times np.sqrt(2\\pi \\sigma^2)$\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude at `duration/2`.\n sigma: Width (standard deviation) of pulse.\n name: Name of pulse."}
{"_id": "q_2627", "text": "r\"\"\"Generates unnormalized gaussian derivative `SamplePulse`.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude at `center`.\n sigma: Width (standard deviation) of pulse.\n name: Name of pulse."}
{"_id": "q_2628", "text": "Generates gaussian square `SamplePulse`.\n\n Centered at `duration/2` and zeroed at `t=-1` and `t=duration+1` to prevent\n large initial/final discontinuities.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude.\n sigma: Width (standard deviation) of gaussian rise/fall portion of the pulse.\n risefall: Number of samples over which pulse rise and fall happen. Width of\n square portion of pulse will be `duration-2*risefall`.\n name: Name of pulse."}
{"_id": "q_2629", "text": "Compute distance."}
{"_id": "q_2630", "text": "Print the node data, with indent."}
{"_id": "q_2631", "text": "Rename a classical or quantum register throughout the circuit.\n\n regname = existing register name string\n newname = replacement register name string"}
{"_id": "q_2632", "text": "Add all wires in a classical register."}
{"_id": "q_2633", "text": "Add a qubit or bit to the circuit.\n\n Args:\n wire (tuple): (Register,int) containing a register instance and index\n This adds a pair of in and out nodes connected by an edge.\n\n Raises:\n DAGCircuitError: if trying to add duplicate wire"}
{"_id": "q_2634", "text": "Add a new operation node to the graph and assign properties.\n\n Args:\n op (Instruction): the operation associated with the DAG node\n qargs (list): list of quantum wires to attach to.\n cargs (list): list of classical wires to attach to.\n condition (tuple or None): optional condition (ClassicalRegister, int)"}
{"_id": "q_2635", "text": "Check that the wiremap is consistent.\n\n Check that the wiremap refers to valid wires and that\n those wires have consistent types.\n\n Args:\n wire_map (dict): map from (register,idx) in keymap to\n (register,idx) in valmap\n keymap (dict): a map whose keys are wire_map keys\n valmap (dict): a map whose keys are wire_map values\n\n Raises:\n DAGCircuitError: if wire_map not valid"}
{"_id": "q_2636", "text": "Use the wire_map dict to change the condition tuple's creg name.\n\n Args:\n wire_map (dict): a map from wires to wires\n condition (tuple): (ClassicalRegister,int)\n Returns:\n tuple(ClassicalRegister,int): new condition"}
{"_id": "q_2637", "text": "Apply the input circuit to the output of this circuit.\n\n The two bases must be \"compatible\" or an exception occurs.\n A subset of input qubits of the input circuit are mapped\n to a subset of output qubits of this circuit.\n\n Args:\n input_circuit (DAGCircuit): circuit to append\n edge_map (dict): map {(Register, int): (Register, int)}\n from the output wires of input_circuit to input wires\n of self.\n\n Raises:\n DAGCircuitError: if missing, duplicate or incosistent wire"}
{"_id": "q_2638", "text": "Check that a list of wires is compatible with a node to be replaced.\n\n - no duplicate names\n - correct length for operation\n Raise an exception otherwise.\n\n Args:\n wires (list[register, index]): gives an order for (qu)bits\n in the input circuit that is replacing the node.\n node (DAGNode): a node in the dag\n\n Raises:\n DAGCircuitError: if check doesn't pass."}
{"_id": "q_2639", "text": "Return predecessor and successor dictionaries.\n\n Args:\n node (DAGNode): reference to multi_graph node\n\n Returns:\n tuple(dict): tuple(predecessor_map, successor_map)\n These map from wire (Register, int) to predecessor (successor)\n nodes of n."}
{"_id": "q_2640", "text": "Map all wires of the input circuit.\n\n Map all wires of the input circuit to predecessor and\n successor nodes in self, keyed on wires in self.\n\n Args:\n pred_map (dict): comes from _make_pred_succ_maps\n succ_map (dict): comes from _make_pred_succ_maps\n input_circuit (DAGCircuit): the input circuit\n wire_map (dict): the map from wires of input_circuit to wires of self\n\n Returns:\n tuple: full_pred_map, full_succ_map (dict, dict)\n\n Raises:\n DAGCircuitError: if more than one predecessor for output nodes"}
{"_id": "q_2641", "text": "Yield nodes in topological order.\n\n Returns:\n generator(DAGNode): node in topological order"}
{"_id": "q_2642", "text": "Get the list of \"op\" nodes in the dag.\n\n Args:\n op (Type): Instruction subclass op nodes to return. if op=None, return\n all op nodes.\n Returns:\n list[DAGNode]: the list of node ids containing the given op."}
{"_id": "q_2643", "text": "Get the list of gate nodes in the dag.\n\n Returns:\n list: the list of node ids that represent gates."}
{"_id": "q_2644", "text": "Get list of 2-qubit gates. Ignore snapshot, barriers, and the like."}
{"_id": "q_2645", "text": "Returns set of the ancestors of a node as DAGNodes."}
{"_id": "q_2646", "text": "Returns list of the successors of a node that are\n connected by a quantum edge as DAGNodes."}
{"_id": "q_2647", "text": "Remove an operation node n.\n\n Add edges from predecessors to successors."}
{"_id": "q_2648", "text": "Remove all of the descendant operation nodes of node."}
{"_id": "q_2649", "text": "Remove all of the non-ancestors operation nodes of node."}
{"_id": "q_2650", "text": "Yield a shallow view on a layer of this DAGCircuit for all d layers of this circuit.\n\n A layer is a circuit whose gates act on disjoint qubits, i.e.\n a layer has depth 1. The total number of layers equals the\n circuit depth d. The layers are indexed from 0 to d-1 with the\n earliest layer at index 0. The layers are constructed using a\n greedy algorithm. Each returned layer is a dict containing\n {\"graph\": circuit graph, \"partition\": list of qubit lists}.\n\n TODO: Gates that use the same cbits will end up in different\n layers as this is currently implemented. This may not be\n the desired behavior."}
{"_id": "q_2651", "text": "Return a set of non-conditional runs of \"op\" nodes with the given names.\n\n For example, \"... h q[0]; cx q[0],q[1]; cx q[0],q[1]; h q[1]; ..\"\n would produce the tuple of cx nodes as an element of the set returned\n from a call to collect_runs([\"cx\"]). If instead the cx nodes were\n \"cx q[0],q[1]; cx q[1],q[0];\", the method would still return the\n pair in a tuple. The namelist can contain names that are not\n in the circuit's basis.\n\n Nodes must have only one successor to continue the run."}
{"_id": "q_2652", "text": "Iterator for nodes that affect a given wire\n\n Args:\n wire (tuple(Register, index)): the wire to be looked at.\n only_ops (bool): True if only the ops nodes are wanted\n otherwise all nodes are returned.\n Yield:\n DAGNode: the successive ops on the given wire\n\n Raises:\n DAGCircuitError: if the given wire doesn't exist in the DAG"}
{"_id": "q_2653", "text": "Generate a TomographyBasis object.\n\n See TomographyBasis for further details.abs\n\n Args:\n prep_fun (callable) optional: the function which adds preparation\n gates to a circuit.\n meas_fun (callable) optional: the function which adds measurement\n gates to a circuit.\n\n Returns:\n TomographyBasis: A tomography basis."}
{"_id": "q_2654", "text": "Add state measurement gates to a circuit."}
{"_id": "q_2655", "text": "Generate a dictionary of tomography experiment configurations.\n\n This returns a data structure that is used by other tomography functions\n to generate state and process tomography circuits, and extract tomography\n data from results after execution on a backend.\n\n Quantum State Tomography:\n Be default it will return a set for performing Quantum State\n Tomography where individual qubits are measured in the Pauli basis.\n A custom measurement basis may also be used by defining a user\n `tomography_basis` and passing this in for the `meas_basis` argument.\n\n Quantum Process Tomography:\n A quantum process tomography set is created by specifying a preparation\n basis along with a measurement basis. The preparation basis may be a\n user defined `tomography_basis`, or one of the two built in basis 'SIC'\n or 'Pauli'.\n - SIC: Is a minimal symmetric informationally complete preparation\n basis for 4 states for each qubit (4 ^ number of qubits total\n preparation states). These correspond to the |0> state and the 3\n other vertices of a tetrahedron on the Bloch-sphere.\n - Pauli: Is a tomographically overcomplete preparation basis of the six\n eigenstates of the 3 Pauli operators (6 ^ number of qubits\n total preparation states).\n\n Args:\n meas_qubits (list): The qubits being measured.\n meas_basis (tomography_basis or str): The qubit measurement basis.\n The default value is 'Pauli'.\n prep_qubits (list or None): The qubits being prepared. If None then\n meas_qubits will be used for process tomography experiments.\n prep_basis (tomography_basis or None): The optional qubit preparation\n basis. If no basis is specified state tomography will be performed\n instead of process tomography. 
A built in basis may be specified by\n 'SIC' or 'Pauli' (SIC basis recommended for > 2 qubits).\n\n Returns:\n dict: A dict of tomography configurations that can be parsed by\n `create_tomography_circuits` and `tomography_data` functions\n for implementing quantum tomography experiments. This output contains\n fields \"qubits\", \"meas_basis\", \"circuits\". It may also optionally\n contain a field \"prep_basis\" for process tomography experiments.\n ```\n {\n 'qubits': qubits (list[ints]),\n 'meas_basis': meas_basis (tomography_basis),\n 'circuit_labels': (list[string]),\n 'circuits': (list[dict]) # prep and meas configurations\n # optionally for process tomography experiments:\n 'prep_basis': prep_basis (tomography_basis)\n }\n ```\n Raises:\n QiskitError: if the Qubits argument is not a list."}
{"_id": "q_2656", "text": "Generate a dictionary of process tomography experiment configurations.\n\n This returns a data structure that is used by other tomography functions\n to generate state and process tomography circuits, and extract tomography\n data from results after execution on a backend.\n\n A quantum process tomography set is created by specifying a preparation\n basis along with a measurement basis. The preparation basis may be a\n user defined `tomography_basis`, or one of the two built in basis 'SIC'\n or 'Pauli'.\n - SIC: Is a minimal symmetric informationally complete preparation\n basis for 4 states for each qubit (4 ^ number of qubits total\n preparation states). These correspond to the |0> state and the 3\n other vertices of a tetrahedron on the Bloch-sphere.\n - Pauli: Is a tomographically overcomplete preparation basis of the six\n eigenstates of the 3 Pauli operators (6 ^ number of qubits\n total preparation states).\n\n Args:\n meas_qubits (list): The qubits being measured.\n meas_basis (tomography_basis or str): The qubit measurement basis.\n The default value is 'Pauli'.\n prep_qubits (list or None): The qubits being prepared. If None then\n meas_qubits will be used for process tomography experiments.\n prep_basis (tomography_basis or str): The qubit preparation basis.\n The default value is 'SIC'.\n\n Returns:\n dict: A dict of tomography configurations that can be parsed by\n `create_tomography_circuits` and `tomography_data` functions\n for implementing quantum tomography experiments. This output contains\n fields \"qubits\", \"meas_basis\", \"prep_basus\", circuits\".\n ```\n {\n 'qubits': qubits (list[ints]),\n 'meas_basis': meas_basis (tomography_basis),\n 'prep_basis': prep_basis (tomography_basis),\n 'circuit_labels': (list[string]),\n 'circuits': (list[dict]) # prep and meas configurations\n }\n ```"}
{"_id": "q_2657", "text": "Add tomography measurement circuits to a QuantumProgram.\n\n The quantum program must contain a circuit 'name', which is treated as a\n state preparation circuit for state tomography, or as teh circuit being\n measured for process tomography. This function then appends the circuit\n with a set of measurements specified by the input `tomography_set`,\n optionally it also prepends the circuit with state preparation circuits if\n they are specified in the `tomography_set`.\n\n For n-qubit tomography with a tomographically complete set of preparations\n and measurements this results in $4^n 3^n$ circuits being added to the\n quantum program.\n\n Args:\n circuit (QuantumCircuit): The circuit to be appended with tomography\n state preparation and/or measurements.\n qreg (QuantumRegister): the quantum register containing qubits to be\n measured.\n creg (ClassicalRegister): the classical register containing bits to\n store measurement outcomes.\n tomoset (tomography_set): the dict of tomography configurations.\n\n Returns:\n list: A list of quantum tomography circuits for the input circuit.\n\n Raises:\n QiskitError: if circuit is not a valid QuantumCircuit\n\n Example:\n For a tomography set specifying state tomography of qubit-0 prepared\n by a circuit 'circ' this would return:\n ```\n ['circ_meas_X(0)', 'circ_meas_Y(0)', 'circ_meas_Z(0)']\n ```\n For process tomography of the same circuit with preparation in the\n SIC-POVM basis it would return:\n ```\n [\n 'circ_prep_S0(0)_meas_X(0)', 'circ_prep_S0(0)_meas_Y(0)',\n 'circ_prep_S0(0)_meas_Z(0)', 'circ_prep_S1(0)_meas_X(0)',\n 'circ_prep_S1(0)_meas_Y(0)', 'circ_prep_S1(0)_meas_Z(0)',\n 'circ_prep_S2(0)_meas_X(0)', 'circ_prep_S2(0)_meas_Y(0)',\n 'circ_prep_S2(0)_meas_Z(0)', 'circ_prep_S3(0)_meas_X(0)',\n 'circ_prep_S3(0)_meas_Y(0)', 'circ_prep_S3(0)_meas_Z(0)'\n ]\n ```"}
{"_id": "q_2658", "text": "Return a results dict for a state or process tomography experiment.\n\n Args:\n results (Result): Results from execution of a process tomography\n circuits on a backend.\n name (string): The name of the circuit being reconstructed.\n tomoset (tomography_set): the dict of tomography configurations.\n\n Returns:\n list: A list of dicts for the outcome of each process tomography\n measurement circuit."}
{"_id": "q_2659", "text": "Reconstruct a density matrix or process-matrix from tomography data.\n\n If the input data is state_tomography_data the returned operator will\n be a density matrix. If the input data is process_tomography_data the\n returned operator will be a Choi-matrix in the column-vectorization\n convention.\n\n Args:\n tomo_data (dict): process tomography measurement data.\n method (str): the fitting method to use.\n Available methods:\n - 'wizard' (default)\n - 'leastsq'\n options (dict or None): additional options for fitting method.\n\n Returns:\n numpy.array: The fitted operator.\n\n Available methods:\n - 'wizard' (Default): The returned operator will be constrained to be\n positive-semidefinite.\n Options:\n - 'trace': the trace of the returned operator.\n The default value is 1.\n - 'beta': hedging parameter for computing frequencies from\n zero-count data. The default value is 0.50922.\n - 'epsilon: threshold for truncating small eigenvalues to zero.\n The default value is 0\n - 'leastsq': Fitting without positive-semidefinite constraint.\n Options:\n - 'trace': Same as for 'wizard' method.\n - 'beta': Same as for 'wizard' method.\n Raises:\n Exception: if the `method` parameter is not valid."}
{"_id": "q_2660", "text": "Reconstruct a state from unconstrained least-squares fitting.\n\n Args:\n tomo_data (list[dict]): state or process tomography data.\n weights (list or array or None): weights to use for least squares\n fitting. The default is standard deviation from a binomial\n distribution.\n trace (float or None): trace of returned operator. The default is 1.\n beta (float or None): hedge parameter (>=0) for computing frequencies\n from zero-count data. The default value is 0.50922.\n\n Returns:\n numpy.array: A numpy array of the reconstructed operator."}
{"_id": "q_2661", "text": "Returns a projectors."}
{"_id": "q_2662", "text": "Monitor the status of a IBMQJob instance.\n\n Args:\n job (BaseJob): Job to monitor.\n interval (int): Time interval between status queries.\n monitor_async (bool): Monitor asyncronously (in Jupyter only).\n quiet (bool): If True, do not print status messages.\n output (file): The file like object to write status messages to.\n By default this is sys.stdout.\n\n Raises:\n QiskitError: When trying to run async outside of Jupyter\n ImportError: ipywidgets not available for notebook."}
{"_id": "q_2663", "text": "Compute Euler angles for a single-qubit gate.\n\n Find angles (theta, phi, lambda) such that\n unitary_matrix = phase * Rz(phi) * Ry(theta) * Rz(lambda)\n\n Args:\n unitary_matrix (ndarray): 2x2 unitary matrix\n\n Returns:\n tuple: (theta, phi, lambda) Euler angles of SU(2)\n\n Raises:\n QiskitError: if unitary_matrix not 2x2, or failure"}
{"_id": "q_2664", "text": "Extends dag with virtual qubits that are in layout but not in the circuit yet.\n\n Args:\n dag (DAGCircuit): DAG to extend.\n\n Returns:\n DAGCircuit: An extended DAG.\n\n Raises:\n TranspilerError: If there is not layout in the property set or not set at init time."}
{"_id": "q_2665", "text": "The qubits properties widget\n\n Args:\n backend (IBMQbackend): The backend.\n\n Returns:\n VBox: A VBox widget."}
{"_id": "q_2666", "text": "Widget for displaying job history\n\n Args:\n backend (IBMQbackend): The backend.\n\n Returns:\n Tab: A tab widget for history images."}
{"_id": "q_2667", "text": "Plots the job history of the user from the given list of jobs.\n\n Args:\n jobs (list): A list of jobs with type IBMQjob.\n interval (str): Interval over which to examine.\n\n Returns:\n fig: A Matplotlib figure instance."}
{"_id": "q_2668", "text": "transpile one or more circuits, according to some desired\n transpilation targets.\n\n All arguments may be given as either singleton or list. In case of list,\n the length must be equal to the number of circuits being transpiled.\n\n Transpilation is done in parallel using multiprocessing.\n\n Args:\n circuits (QuantumCircuit or list[QuantumCircuit]):\n Circuit(s) to transpile\n\n backend (BaseBackend):\n If set, transpiler options are automatically grabbed from\n backend.configuration() and backend.properties().\n If any other option is explicitly set (e.g. coupling_map), it\n will override the backend's.\n Note: the backend arg is purely for convenience. The resulting\n circuit may be run on any backend as long as it is compatible.\n\n basis_gates (list[str]):\n List of basis gate names to unroll to.\n e.g:\n ['u1', 'u2', 'u3', 'cx']\n If None, do not unroll.\n\n coupling_map (CouplingMap or list):\n Coupling map (perhaps custom) to target in mapping.\n Multiple formats are supported:\n a. CouplingMap instance\n\n b. list\n Must be given as an adjacency matrix, where each entry\n specifies all two-qubit interactions supported by backend\n e.g:\n [[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]\n\n backend_properties (BackendProperties):\n properties returned by a backend, including information on gate\n errors, readout errors, qubit coherence times, etc. For a backend\n that provides this information, it can be obtained with:\n ``backend.properties()``\n\n initial_layout (Layout or dict or list):\n Initial position of virtual qubits on physical qubits.\n If this layout makes the circuit compatible with the coupling_map\n constraints, it will be used.\n The final layout is not guaranteed to be the same, as the transpiler\n may permute qubits through swaps or other means.\n\n Multiple formats are supported:\n a. Layout instance\n\n b. 
dict\n virtual to physical:\n {qr[0]: 0,\n qr[1]: 3,\n qr[2]: 5}\n\n physical to virtual:\n {0: qr[0],\n 3: qr[1],\n 5: qr[2]}\n\n c. list\n virtual to physical:\n [0, 3, 5] # virtual qubits are ordered (in addition to named)\n\n physical to virtual:\n [qr[0], None, None, qr[1], None, qr[2]]\n\n seed_transpiler (int):\n sets random seed for the stochastic parts of the transpiler\n\n optimization_level (int):\n How much optimization to perform on the circuits.\n Higher levels generate more optimized circuits,\n at the expense of longer transpilation time.\n 0: no optimization\n 1: light optimization\n 2: heavy optimization\n\n pass_manager (PassManager):\n The pass manager to use for a custom pipeline of transpiler passes.\n If this arg is present, all other args will be ignored and the\n pass manager will be used directly (Qiskit will not attempt to\n auto-select a pass manager based on transpile options).\n\n seed_mapper (int):\n DEPRECATED in 0.8: use ``seed_transpiler`` kwarg instead\n\n Returns:\n QuantumCircuit or list[QuantumCircuit]: transpiled circuit(s).\n\n Raises:\n TranspilerError: in case of bad inputs to transpiler or errors in passes"}
{"_id": "q_2669", "text": "Execute a list of circuits or pulse schedules on a backend.\n\n The execution is asynchronous, and a handle to a job instance is returned.\n\n Args:\n experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):\n Circuit(s) or pulse schedule(s) to execute\n\n backend (BaseBackend):\n Backend to execute circuits on.\n Transpiler options are automatically grabbed from\n backend.configuration() and backend.properties().\n If any other option is explicitly set (e.g. coupling_map), it\n will override the backend's.\n\n basis_gates (list[str]):\n List of basis gate names to unroll to.\n e.g:\n ['u1', 'u2', 'u3', 'cx']\n If None, do not unroll.\n\n coupling_map (CouplingMap or list):\n Coupling map (perhaps custom) to target in mapping.\n Multiple formats are supported:\n a. CouplingMap instance\n\n b. list\n Must be given as an adjacency matrix, where each entry\n specifies all two-qubit interactions supported by backend\n e.g:\n [[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]\n\n backend_properties (BackendProperties):\n Properties returned by a backend, including information on gate\n errors, readout errors, qubit coherence times, etc. For a backend\n that provides this information, it can be obtained with:\n ``backend.properties()``\n\n initial_layout (Layout or dict or list):\n Initial position of virtual qubits on physical qubits.\n If this layout makes the circuit compatible with the coupling_map\n constraints, it will be used.\n The final layout is not guaranteed to be the same, as the transpiler\n may permute qubits through swaps or other means.\n\n Multiple formats are supported:\n a. Layout instance\n\n b. dict\n virtual to physical:\n {qr[0]: 0,\n qr[1]: 3,\n qr[2]: 5}\n\n physical to virtual:\n {0: qr[0],\n 3: qr[1],\n 5: qr[2]}\n\n c. 
list\n virtual to physical:\n [0, 3, 5] # virtual qubits are ordered (in addition to named)\n\n physical to virtual:\n [qr[0], None, None, qr[1], None, qr[2]]\n\n seed_transpiler (int):\n Sets random seed for the stochastic parts of the transpiler\n\n optimization_level (int):\n How much optimization to perform on the circuits.\n Higher levels generate more optimized circuits,\n at the expense of longer transpilation time.\n 0: no optimization\n 1: light optimization\n 2: heavy optimization\n\n pass_manager (PassManager):\n The pass manager to use during transpilation. If this arg is present,\n auto-selection of pass manager based on the transpile options will be\n turned off and this pass manager will be used directly.\n\n qobj_id (str):\n String identifier to annotate the Qobj\n\n qobj_header (QobjHeader or dict):\n User input that will be inserted in Qobj header, and will also be\n copied to the corresponding Result header. Headers do not affect the run.\n\n shots (int):\n Number of repetitions of each circuit, for sampling. Default: 2014\n\n memory (bool):\n If True, per-shot measurement bitstrings are returned as well\n (provided the backend supports it). For OpenPulse jobs, only\n measurement level 2 supports this option. Default: False\n\n max_credits (int):\n Maximum credits to spend on job. 
Default: 10\n\n seed_simulator (int):\n Random seed to control sampling, for when backend is a simulator\n\n default_qubit_los (list):\n List of default qubit lo frequencies\n\n default_meas_los (list):\n List of default meas lo frequencies\n\n schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or\n Union[Dict[PulseChannel, float], LoConfig]):\n Experiment LO configurations\n\n meas_level (int):\n Set the appropriate level of the measurement output for pulse experiments.\n\n meas_return (str):\n Level of measurement data for the backend to return\n For `meas_level` 0 and 1:\n \"single\" returns information from every shot.\n \"avg\" returns average measurement output (averaged over number of shots).\n\n memory_slots (int):\n Number of classical memory slots used in this job.\n\n memory_slot_size (int):\n Size of each memory slot if the output is Level 0.\n\n rep_time (int): repetition time of the experiment in \u03bcs.\n The delay between experiments will be rep_time.\n Must be from the list provided by the device.\n\n parameter_binds (list[dict{Parameter: Value}]):\n List of Parameter bindings over which the set of experiments will be\n executed. Each list element (bind) should be of the form\n {Parameter1: value1, Parameter2: value2, ...}. All binds will be\n executed across all experiments, e.g. if parameter_binds is a\n length-n list, and there are m experiments, a total of m x n\n experiments will be run (one for each experiment/bind pair).\n\n seed (int):\n DEPRECATED in 0.8: use ``seed_simulator`` kwarg instead\n\n seed_mapper (int):\n DEPRECATED in 0.8: use ``seed_transpiler`` kwarg instead\n\n config (dict):\n DEPRECATED in 0.8: use run_config instead\n\n circuits (QuantumCircuit or list[QuantumCircuit]):\n DEPRECATED in 0.8: use ``experiments`` kwarg instead.\n\n run_config (dict):\n Extra arguments used to configure the run (e.g. 
for Aer configurable backends)\n Refer to the backend documentation for details on these arguments\n Note: for now, these keyword arguments will both be copied to the\n Qobj config, and passed to backend.run()\n\n Returns:\n BaseJob: returns job instance derived from BaseJob\n\n Raises:\n QiskitError: if the execution cannot be interpreted as either circuits or schedules"}
{"_id": "q_2670", "text": "Return the primary drive channel of this qubit."}
{"_id": "q_2671", "text": "Return the primary measure channel of this qubit."}
{"_id": "q_2672", "text": "Return the primary acquire channel of this qubit."}
{"_id": "q_2673", "text": "Remove the handlers for the 'qiskit' logger."}
{"_id": "q_2674", "text": "Create a hinton representation.\n\n Graphical representation of the input array using a 2D city style\n graph (hinton).\n\n Args:\n rho (array): Density matrix\n figsize (tuple): Figure size in pixels."}
{"_id": "q_2675", "text": "Return the process fidelity between two quantum channels.\n\n This is given by\n\n F_p(E1, E2) = Tr[S2^dagger.S1])/dim^2\n\n where S1 and S2 are the SuperOp matrices for channels E1 and E2,\n and dim is the dimension of the input output statespace.\n\n Args:\n channel1 (QuantumChannel or matrix): a quantum channel or unitary matrix.\n channel2 (QuantumChannel or matrix): a quantum channel or unitary matrix.\n require_cptp (bool): require input channels to be CPTP [Default: True].\n\n Returns:\n array_like: The state fidelity F(state1, state2).\n\n Raises:\n QiskitError: if inputs channels do not have the same dimensions,\n have different input and output dimensions, or are not CPTP with\n `require_cptp=True`."}
{"_id": "q_2676", "text": "Set the input text data."}
{"_id": "q_2677", "text": "Pop a PLY lexer off the stack."}
{"_id": "q_2678", "text": "iterate over each block and replace it with an equivalent Unitary\n on the same wires."}
{"_id": "q_2679", "text": "Return converted `FrameChangeInstruction`.\n\n Args:\n shift(int): Offset time.\n instruction (FrameChangeInstruction): frame change instruction.\n Returns:\n dict: Dictionary of required parameters."}
{"_id": "q_2680", "text": "Return converted `PulseInstruction`.\n\n Args:\n shift(int): Offset time.\n instruction (PulseInstruction): drive instruction.\n Returns:\n dict: Dictionary of required parameters."}
{"_id": "q_2681", "text": "Return converted `Snapshot`.\n\n Args:\n shift(int): Offset time.\n instruction (Snapshot): snapshot instruction.\n Returns:\n dict: Dictionary of required parameters."}
{"_id": "q_2682", "text": "Sampler decorator base method.\n\n Samplers are used for converting an continuous function to a discretized pulse.\n\n They operate on a function with the signature:\n `def f(times: np.ndarray, *args, **kwargs) -> np.ndarray`\n Where `times` is a numpy array of floats with length n_times and the output array\n is a complex numpy array with length n_times. The output of the decorator is an\n instance of `FunctionalPulse` with signature:\n `def g(duration: int, *args, **kwargs) -> SamplePulse`\n\n Note if your continuous pulse function outputs a `complex` scalar rather than a\n `np.ndarray`, you should first vectorize it before applying a sampler.\n\n\n This class implements the sampler boilerplate for the sampler.\n\n Args:\n sample_function: A sampler function to be decorated."}
{"_id": "q_2683", "text": "Return the backends matching the specified filtering.\n\n Filter the `backends` list by their `configuration` or `status`\n attributes, or from a boolean callable. The criteria for filtering can\n be specified via `**kwargs` or as a callable via `filters`, and the\n backends must fulfill all specified conditions.\n\n Args:\n backends (list[BaseBackend]): list of backends.\n filters (callable): filtering conditions as a callable.\n **kwargs (dict): dict of criteria.\n\n Returns:\n list[BaseBackend]: a list of backend instances matching the\n conditions."}
{"_id": "q_2684", "text": "Resolve backend name from a deprecated name or an alias.\n\n A group will be resolved in order of member priorities, depending on\n availability.\n\n Args:\n name (str): name of backend to resolve\n backends (list[BaseBackend]): list of available backends.\n deprecated (dict[str: str]): dict of deprecated names.\n aliased (dict[str: list[str]]): dict of aliased names.\n\n Returns:\n str: resolved name (name of an available backend)\n\n Raises:\n LookupError: if name cannot be resolved through regular available\n names, nor deprecated, nor alias names."}
{"_id": "q_2685", "text": "Convert an observable in matrix form to dictionary form.\n\n Takes in a diagonal observable as a matrix and converts it to a dictionary\n form. Can also handle a list sorted of the diagonal elements.\n\n Args:\n matrix_observable (list): The observable to be converted to dictionary\n form. Can be a matrix or just an ordered list of observed values\n\n Returns:\n Dict: A dictionary with all observable states as keys, and corresponding\n values being the observed value for that state"}
{"_id": "q_2686", "text": "Verify each expression in a list."}
{"_id": "q_2687", "text": "Verify a user defined gate call."}
{"_id": "q_2688", "text": "Parse some data."}
{"_id": "q_2689", "text": "Parser runner.\n\n To use this module stand-alone."}
{"_id": "q_2690", "text": "Return a basis state ndarray.\n\n Args:\n str_state (string): a string representing the state.\n num (int): the number of qubits\n Returns:\n ndarray: state(2**num) a quantum state with basis basis state.\n Raises:\n QiskitError: if the dimensions is wrong"}
{"_id": "q_2691", "text": "maps a pure state to a state matrix\n\n Args:\n state (ndarray): the number of qubits\n flatten (bool): determine if state matrix of column work\n Returns:\n ndarray: state_mat(2**num, 2**num) if flatten is false\n ndarray: state_mat(4**num) if flatten is true stacked on by the column"}
{"_id": "q_2692", "text": "Calculate the purity of a quantum state.\n\n Args:\n state (ndarray): a quantum state\n Returns:\n float: purity."}
{"_id": "q_2693", "text": "Run the pass on the DAG, and write the discovered commutation relations\n into the property_set."}
{"_id": "q_2694", "text": "Creates a backend widget."}
{"_id": "q_2695", "text": "Updates the monitor info\n Called from another thread."}
{"_id": "q_2696", "text": "Get the number and size of unique registers from bit_labels list.\n\n Args:\n bit_labels (list): this list is of the form::\n\n [['reg1', 0], ['reg1', 1], ['reg2', 0]]\n\n which indicates a register named \"reg1\" of size 2\n and a register named \"reg2\" of size 1. This is the\n format of classic and quantum bit labels in qobj\n header.\n\n Yields:\n tuple: iterator of register_name:size pairs."}
{"_id": "q_2697", "text": "Get depth information for the circuit.\n\n Returns:\n int: number of columns in the circuit\n int: total size of columns in the circuit"}
{"_id": "q_2698", "text": "Get height, width & scale attributes for the beamer page.\n\n Returns:\n tuple: (height, width, scale) desirable page attributes"}
{"_id": "q_2699", "text": "Loads the QObj schema for use in future validations.\n\n Caches schema in _SCHEMAS module attribute.\n\n Args:\n file_path(str): Path to schema.\n name(str): Given name for schema. Defaults to file_path filename\n without schema.\n Return:\n schema(dict): Loaded schema."}
{"_id": "q_2700", "text": "Generate validator for JSON schema.\n\n Args:\n name (str): Name for validator. Will be validator key in\n `_VALIDATORS` dict.\n schema (dict): JSON schema `dict`. If not provided searches for schema\n in `_SCHEMAS`.\n check_schema (bool): Verify schema is valid.\n validator_class (jsonschema.IValidator): jsonschema IValidator instance.\n Default behavior is to determine this from the schema `$schema`\n field.\n **validator_kwargs (dict): Additional keyword arguments for validator.\n\n Return:\n jsonschema.IValidator: Validator for JSON schema.\n\n Raises:\n SchemaValidationError: Raised if validation fails."}
{"_id": "q_2701", "text": "Load all default schemas into `_SCHEMAS`."}
{"_id": "q_2702", "text": "Return a cascading explanation of the validation error.\n\n Returns a cascading explanation of the validation error in the form of::\n\n <validator> failed @ <subfield_path> because of:\n <validator> failed @ <subfield_path> because of:\n ...\n <validator> failed @ <subfield_path> because of:\n ...\n ...\n\n For example::\n\n 'oneOf' failed @ '<root>' because of:\n 'required' failed @ '<root>.config' because of:\n 'meas_level' is a required property\n\n Meaning the validator 'oneOf' failed while validating the whole object\n because of the validator 'required' failing while validating the property\n 'config' because its 'meas_level' field is missing.\n\n The cascade repeats the format \"<validator> failed @ <path> because of\"\n until there are no deeper causes. In this case, the string representation\n of the error is shown.\n\n Args:\n err (jsonschema.ValidationError): the instance to explain.\n level (int): starting level of indentation for the cascade of\n explanations.\n\n Return:\n str: a formatted string with the explanation of the error."}
{"_id": "q_2703", "text": "Majority gate."}
{"_id": "q_2704", "text": "Unmajority gate."}
{"_id": "q_2705", "text": "Convert QuantumCircuit to LaTeX string.\n\n Args:\n circuit (QuantumCircuit): input circuit\n scale (float): image scaling\n filename (str): optional filename to write latex\n style (dict or str): dictionary of style or file name of style file\n reverse_bits (bool): When set to True reverse the bit order inside\n registers for the output visualization.\n plot_barriers (bool): Enable/disable drawing barriers in the output\n circuit. Defaults to True.\n justify (str) : `left`, `right` or `none`. Defaults to `left`. Says how\n the circuit should be justified.\n\n Returns:\n str: Latex string appropriate for writing to file."}
{"_id": "q_2706", "text": "Draw a quantum circuit based on matplotlib.\n If `%matplotlib inline` is invoked in a Jupyter notebook, it visualizes a circuit inline.\n We recommend `%config InlineBackend.figure_format = 'svg'` for the inline visualization.\n\n Args:\n circuit (QuantumCircuit): a quantum circuit\n scale (float): scaling factor\n filename (str): file path to save image to\n style (dict or str): dictionary of style or file name of style file\n reverse_bits (bool): When set to True reverse the bit order inside\n registers for the output visualization.\n plot_barriers (bool): Enable/disable drawing barriers in the output\n circuit. Defaults to True.\n justify (str) : `left`, `right` or `none`. Defaults to `left`. Says how\n the circuit should be justified.\n\n\n Returns:\n matplotlib.figure: a matplotlib figure object for the circuit diagram"}
{"_id": "q_2707", "text": "Return a random dim x dim unitary Operator from the Haar measure.\n\n Args:\n dim (int): the dim of the state space.\n seed (int): Optional. To set a random seed.\n\n Returns:\n Operator: (dim, dim) unitary operator.\n\n Raises:\n QiskitError: if dim is not a positive power of 2."}
{"_id": "q_2708", "text": "Generate a random density matrix from the Hilbert-Schmidt metric.\n\n Args:\n N (int): the length of the density matrix.\n rank (int or None): the rank of the density matrix. The default\n value is full-rank.\n seed (int): Optional. To set a random seed.\n Returns:\n ndarray: rho (N,N) a density matrix."}
{"_id": "q_2709", "text": "Generate a random density matrix from the Bures metric.\n\n Args:\n N (int): the length of the density matrix.\n rank (int or None): the rank of the density matrix. The default\n value is full-rank.\n seed (int): Optional. To set a random seed.\n Returns:\n ndarray: rho (N,N) a density matrix."}
{"_id": "q_2710", "text": "Return a list of custom gate names in this gate body."}
{"_id": "q_2711", "text": "Return the compose of a QuantumChannel with itself n times.\n\n Args:\n n (int): compute the matrix power of the superoperator matrix.\n\n Returns:\n SuperOp: the n-times composition channel as a SuperOp object.\n\n Raises:\n QiskitError: if the input and output dimensions of the\n QuantumChannel are not equal, or the power is not an integer."}
{"_id": "q_2712", "text": "Convert a list of circuits into a qobj.\n\n Args:\n circuits (list[QuantumCircuits] or QuantumCircuit): circuits to compile\n qobj_header (QobjHeader): header to pass to the results\n qobj_id (int): TODO: delete after qiskit-terra 0.8\n backend_name (str): TODO: delete after qiskit-terra 0.8\n config (dict): TODO: delete after qiskit-terra 0.8\n shots (int): TODO: delete after qiskit-terra 0.8\n max_credits (int): TODO: delete after qiskit-terra 0.8\n basis_gates (str): TODO: delete after qiskit-terra 0.8\n coupling_map (list): TODO: delete after qiskit-terra 0.8\n seed (int): TODO: delete after qiskit-terra 0.8\n memory (bool): TODO: delete after qiskit-terra 0.8\n\n Returns:\n Qobj: the Qobj to be run on the backends"}
{"_id": "q_2713", "text": "Expand 3+ qubit gates using their decomposition rules.\n\n Args:\n dag(DAGCircuit): input dag\n Returns:\n DAGCircuit: output dag with maximum node degrees of 2\n Raises:\n QiskitError: if a 3q+ gate is not decomposable"}
{"_id": "q_2714", "text": "Calculate a subcircuit that implements this unitary."}
{"_id": "q_2715", "text": "Validate if the value is of the type of the schema's model.\n\n Assumes the nested schema is a ``BaseSchema``."}
{"_id": "q_2716", "text": "Validate if it's a list of valid item-field values.\n\n Check if each element in the list can be validated by the item-field\n passed during construction."}
{"_id": "q_2717", "text": "Set the absolute tolerance parameter for float comparisons."}
{"_id": "q_2718", "text": "Set the relative tolerance parameter for float comparisons."}
{"_id": "q_2719", "text": "Return tuple of input dimension for specified subsystems."}
{"_id": "q_2720", "text": "Make a copy of current operator."}
{"_id": "q_2721", "text": "Return the compose of an operator with itself n times.\n\n Args:\n n (int): the number of times to compose with self (n>0).\n\n Returns:\n BaseOperator: the n-times composed operator.\n\n Raises:\n QiskitError: if the input and output dimensions of the operator\n are not equal, or the power is not a positive integer."}
{"_id": "q_2722", "text": "Perform a contraction using Numpy.einsum\n\n Args:\n tensor (np.array): a vector or matrix reshaped to a rank-N tensor.\n mat (np.array): a matrix reshaped to a rank-2M tensor.\n indices (list): tensor indices to contract with mat.\n shift (int): shift for indices of tensor to contract [Default: 0].\n right_mul (bool): if True right multiply tensor by mat\n (else left multiply) [Default: False].\n\n Returns:\n Numpy.ndarray: the matrix multiplied rank-N tensor.\n\n Raises:\n QiskitError: if mat is not an even rank tensor."}
{"_id": "q_2723", "text": "Override ``_deserialize`` for customizing the exception raised."}
{"_id": "q_2724", "text": "Check if at least one of the possible choices validates the value.\n\n Possible choices are assumed to be ``ModelTypeValidator`` fields."}
{"_id": "q_2725", "text": "Return the state fidelity between two quantum states.\n\n Either input may be a state vector, or a density matrix. The state\n fidelity (F) for two density matrices is defined as::\n\n F(rho1, rho2) = Tr[sqrt(sqrt(rho1).rho2.sqrt(rho1))] ^ 2\n\n For a pure state and mixed state the fidelity is given by::\n\n F(|psi1>, rho2) = <psi1|rho2|psi1>\n\n For two pure states the fidelity is given by::\n\n F(|psi1>, |psi2>) = |<psi1|psi2>|^2\n\n Args:\n state1 (array_like): a quantum state vector or density matrix.\n state2 (array_like): a quantum state vector or density matrix.\n\n Returns:\n array_like: The state fidelity F(state1, state2)."}
{"_id": "q_2726", "text": "Apply real scalar function to singular values of a matrix.\n\n Args:\n a (array_like): (N, N) Matrix at which to evaluate the function.\n func (callable): Callable object that evaluates a scalar function f.\n\n Returns:\n ndarray: funm (N, N) Value of the matrix function specified by func\n evaluated at `A`."}
{"_id": "q_2727", "text": "Special case. Return self."}
{"_id": "q_2728", "text": "Set snapshot label to name\n\n Args:\n name (str or None): label to assign unitary\n\n Raises:\n TypeError: name is not string or None."}
{"_id": "q_2729", "text": "Convert to a Kraus or UnitaryGate circuit instruction.\n\n If the channel is unitary it will be added as a unitary gate,\n otherwise it will be added as a kraus simulator instruction.\n\n Returns:\n Instruction: A kraus instruction for the channel.\n\n Raises:\n QiskitError: if input data is not an N-qubit CPTP quantum channel."}
{"_id": "q_2730", "text": "Convert input into a QuantumChannel subclass object or Operator object"}
{"_id": "q_2731", "text": "Alternative constructor for a TensorFlowModel that\n accepts a `tf.keras.Model` instance.\n\n Parameters\n ----------\n model : `tensorflow.keras.Model`\n A `tensorflow.keras.Model` that accepts a single input tensor\n and returns a single output tensor representing logits.\n bounds : tuple\n Tuple of lower and upper bound for the pixel values, usually\n (0, 1) or (0, 255).\n input_shape : tuple\n The shape of a single input, e.g. (28, 28, 1) for MNIST.\n If None, tries to get the shape from the model's\n input_shape attribute.\n channel_axis : int\n The index of the axis that represents color channels.\n preprocessing: 2-element tuple with floats or numpy arrays\n Elementwise preprocessing of input; we first subtract the first\n element of preprocessing from the input and then divide the input\n by the second element."}
{"_id": "q_2732", "text": "Interface to model.channel_axis for attacks.\n\n Parameters\n ----------\n batch : bool\n Controls whether the index of the axis for a batch of images\n (4 dimensions) or a single image (3 dimensions) should be returned."}
{"_id": "q_2733", "text": "Returns true if _backward and _forward_backward can be called\n by an attack, False otherwise."}
{"_id": "q_2734", "text": "Interface to model.predictions for attacks.\n\n Parameters\n ----------\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n strict : bool\n Controls if the bounds for the pixel values should be checked."}
{"_id": "q_2735", "text": "Interface to model.batch_predictions for attacks.\n\n Parameters\n ----------\n images : `numpy.ndarray`\n Batch of inputs with shape as expected by the model.\n greedy : bool\n Whether the first adversarial should be returned.\n strict : bool\n Controls if the bounds for the pixel values should be checked."}
{"_id": "q_2736", "text": "Interface to model.gradient for attacks.\n\n Parameters\n ----------\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n Defaults to the original image.\n label : int\n Label used to calculate the loss that is differentiated.\n Defaults to the original label.\n strict : bool\n Controls if the bounds for the pixel values should be checked."}
{"_id": "q_2737", "text": "Interface to model.predictions_and_gradient for attacks.\n\n Parameters\n ----------\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n Defaults to the original image.\n label : int\n Label used to calculate the loss that is differentiated.\n Defaults to the original label.\n strict : bool\n Controls if the bounds for the pixel values should be checked."}
{"_id": "q_2738", "text": "Interface to model.backward for attacks.\n\n Parameters\n ----------\n gradient : `numpy.ndarray`\n Gradient of some loss w.r.t. the logits.\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n\n Returns\n -------\n gradient : `numpy.ndarray`\n The gradient w.r.t the image.\n\n See Also\n --------\n :meth:`gradient`"}
{"_id": "q_2739", "text": "Returns the index of the largest logit, ignoring the class that\n is passed as `exclude`."}
{"_id": "q_2740", "text": "Concatenates the names of the given criteria in alphabetical order.\n\n If a sub-criterion is itself a combined criterion, its name is\n first split into the individual names and the names of the\n sub-sub criteria is used instead of the name of the sub-criterion.\n This is done recursively to ensure that the order and the hierarchy\n of the criteria does not influence the name.\n\n Returns\n -------\n str\n The alphabetically sorted names of the sub-criteria concatenated\n using double underscores between them."}
{"_id": "q_2741", "text": "Calculates the cross-entropy.\n\n Parameters\n ----------\n logits : array_like\n The logits predicted by the model.\n label : int\n The label describing the target distribution.\n\n Returns\n -------\n float\n The cross-entropy between softmax(logits) and onehot(label)."}
{"_id": "q_2742", "text": "Convenience method that calculates predictions for a single image.\n\n Parameters\n ----------\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n\n Returns\n -------\n `numpy.ndarray`\n Vector of predictions (logits, i.e. before the softmax) with\n shape (number of classes,).\n\n See Also\n --------\n :meth:`batch_predictions`"}
{"_id": "q_2743", "text": "Clone a remote git repository to a local path.\n\n :param git_uri: the URI to the git repository to be cloned\n :return: the generated local path where the repository has been cloned to"}
{"_id": "q_2744", "text": "Create Graphene Enum for sorting a SQLAlchemy class query\n\n Parameters\n - cls : Sqlalchemy model class\n Model used to create the sort enumerator\n - name : str, optional, default None\n Name to use for the enumerator. If not provided it will be set to `cls.__name__ + 'SortEnum'`\n - symbol_name : function, optional, default `_symbol_name`\n Function which takes the column name and a boolean indicating if the sort direction is ascending,\n and returns the symbol name for the current column and sort direction.\n The default function will create, for a column named 'foo', the symbols 'foo_asc' and 'foo_desc'\n\n Returns\n - Enum\n The Graphene enumerator"}
{"_id": "q_2745", "text": "Monkey patching _strptime to avoid problems related with non-english\n locale changes on the system.\n\n For example, if system's locale is set to fr_FR. Parser won't recognize\n any date since all languages are translated to english dates."}
{"_id": "q_2746", "text": "Get an ordered mapping with locale codes as keys\n and corresponding locale instances as values.\n\n :param languages:\n A list of language codes, e.g. ['en', 'es', 'zh-Hant'].\n If locales are not given, languages and region are\n used to construct locales to load.\n :type languages: list\n\n :param locales:\n A list of codes of locales which are to be loaded,\n e.g. ['fr-PF', 'qu-EC', 'af-NA']\n :type locales: list\n\n :param region:\n A region code, e.g. 'IN', '001', 'NE'.\n If locales are not given, languages and region are\n used to construct locales to load.\n :type region: str|unicode\n\n :param use_given_order:\n If True, the returned mapping is ordered in the order locales are given.\n :type use_given_order: bool\n\n :param allow_conflicting_locales:\n if True, locales with same language and different region can be loaded.\n :type allow_conflicting_locales: bool\n\n :return: ordered locale code to locale instance mapping"}
{"_id": "q_2747", "text": "Yield locale instances.\n\n :param languages:\n A list of language codes, e.g. ['en', 'es', 'zh-Hant'].\n If locales are not given, languages and region are\n used to construct locales to load.\n :type languages: list\n\n :param locales:\n A list of codes of locales which are to be loaded,\n e.g. ['fr-PF', 'qu-EC', 'af-NA']\n :type locales: list\n\n :param region:\n A region code, e.g. 'IN', '001', 'NE'.\n If locales are not given, languages and region are\n used to construct locales to load.\n :type region: str|unicode\n\n :param use_given_order:\n If True, the returned mapping is ordered in the order locales are given.\n :type use_given_order: bool\n\n :param allow_conflicting_locales:\n if True, locales with same language and different region can be loaded.\n :type allow_conflicting_locales: bool\n\n :yield: locale instances"}
{"_id": "q_2748", "text": "Check if tokens are valid tokens for the locale.\n\n :param tokens:\n a list of string or unicode tokens.\n :type tokens: list\n\n :return: True if tokens are valid, False otherwise."}
{"_id": "q_2749", "text": "Attempts to parse time part of date strings like '1 day ago, 2 PM'"}
{"_id": "q_2750", "text": "Check if the locale is applicable to translate date string.\n\n :param date_string:\n A string representing date and/or time in a recognizably valid format.\n :type date_string: str|unicode\n\n :param strip_timezone:\n If True, timezone is stripped from date string.\n :type strip_timezone: bool\n\n :return: boolean value representing if the locale is applicable for the date string or not."}
{"_id": "q_2751", "text": "Parse with formats and return a dictionary with 'period' and 'obj_date'.\n\n :returns: :class:`datetime.datetime`, dict or None"}
{"_id": "q_2752", "text": "return ammo generator"}
{"_id": "q_2753", "text": "translate http code to net code. if assertion failed, set net code to 314"}
{"_id": "q_2754", "text": "Generate phantom tool run config"}
{"_id": "q_2755", "text": "get merged info about phantom conf"}
{"_id": "q_2756", "text": "compose benchmark block"}
{"_id": "q_2757", "text": "This function polls stdout and stderr streams and writes their contents\n to log"}
{"_id": "q_2758", "text": "helper for above functions"}
{"_id": "q_2759", "text": "Read stepper info from json"}
{"_id": "q_2760", "text": "Write stepper info to json"}
{"_id": "q_2761", "text": "Create Load Plan as defined in schedule. Publish info about its duration."}
{"_id": "q_2762", "text": "Return rps for second t"}
{"_id": "q_2763", "text": "Execute and check exit code"}
{"_id": "q_2764", "text": "The reason why we have two separate methods for monitoring\n and aggregates is a strong difference in incoming data."}
{"_id": "q_2765", "text": "Make a set of points for `this` label\n\n overall_quantiles, overall_meta, net_codes, proto_codes, histograms"}
{"_id": "q_2766", "text": "A feeder that runs in distinct thread in main process."}
{"_id": "q_2767", "text": "Set up logging"}
{"_id": "q_2768", "text": "override config options with user specified options"}
{"_id": "q_2769", "text": "call shutdown routines"}
{"_id": "q_2770", "text": "Collect data, cache it and send to listeners"}
{"_id": "q_2771", "text": "Returns a marker function of the requested marker_type\n\n >>> marker = get_marker('uniq')(__test_missile)\n >>> type(marker)\n <type 'str'>\n >>> len(marker)\n 32\n\n >>> get_marker('uri')(__test_missile)\n '_example_search_hello_help_us'\n\n >>> marker = get_marker('non-existent')(__test_missile)\n Traceback (most recent call last):\n ...\n NotImplementedError: No such marker: \"non-existent\"\n\n >>> get_marker('3')(__test_missile)\n '_example_search_hello'\n\n >>> marker = get_marker('3', True)\n >>> marker(__test_missile)\n '_example_search_hello#0'\n >>> marker(__test_missile)\n '_example_search_hello#1'"}
{"_id": "q_2772", "text": "Parse duration string, such as '3h2m3s' into milliseconds\n\n >>> parse_duration('3h2m3s')\n 10923000\n\n >>> parse_duration('0.3s')\n 300\n\n >>> parse_duration('5')\n 5000"}
{"_id": "q_2773", "text": "Start remote agent"}
{"_id": "q_2774", "text": "Searching for line in jmeter.log such as\n Waiting for possible shutdown message on port 4445"}
{"_id": "q_2775", "text": "Graceful termination of running process"}
{"_id": "q_2776", "text": "Parse lines and return stats"}
{"_id": "q_2777", "text": "instantiate criterion from config string"}
{"_id": "q_2778", "text": "Prepare config data."}
{"_id": "q_2779", "text": "raise exception on disk space exceeded"}
{"_id": "q_2780", "text": "raise exception on RAM exceeded"}
{"_id": "q_2781", "text": "Gets next line for right panel"}
{"_id": "q_2782", "text": "Cut tuple of line chunks according to its visible length"}
{"_id": "q_2783", "text": "Right-pad lines of block to equal width"}
{"_id": "q_2784", "text": "Calculate visible length of string"}
{"_id": "q_2785", "text": "Creates load plan timestamps generator\n\n >>> from util import take\n\n >>> take(7, LoadPlanBuilder().ramp(5, 4000).create())\n [0, 1000, 2000, 3000, 4000, 0, 0]\n\n >>> take(7, create(['ramp(5, 4s)']))\n [0, 1000, 2000, 3000, 4000, 0, 0]\n\n >>> take(12, create(['ramp(5, 4s)', 'wait(5s)', 'ramp(5,4s)']))\n [0, 1000, 2000, 3000, 4000, 9000, 10000, 11000, 12000, 13000, 0, 0]\n\n >>> take(7, create(['wait(5s)', 'ramp(5, 0)']))\n [5000, 5000, 5000, 5000, 5000, 0, 0]\n\n >>> take(7, create([]))\n [0, 0, 0, 0, 0, 0, 0]\n\n >>> take(12, create(['line(1, 9, 4s)']))\n [0, 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 0, 0, 0]\n\n >>> take(12, create(['const(3, 5s)', 'line(7, 11, 2s)']))\n [0, 0, 0, 5000, 5000, 5000, 5000, 5500, 6000, 6500, 7000, 0]\n\n >>> take(12, create(['step(2, 10, 2, 3s)']))\n [0, 0, 3000, 3000, 6000, 6000, 9000, 9000, 12000, 12000, 0, 0]\n\n >>> take(12, LoadPlanBuilder().const(3, 1000).line(5, 10, 5000).steps)\n [(3, 1), (5, 1), (6, 1), (7, 1), (8, 1), (9, 1), (10, 1)]\n\n >>> take(12, LoadPlanBuilder().stairway(100, 950, 100, 30000).steps)\n [(100, 30), (200, 30), (300, 30), (400, 30), (500, 30), (600, 30), (700, 30), (800, 30), (900, 30), (950, 30)]\n\n >>> LoadPlanBuilder().stairway(100, 950, 100, 30000).instances\n 950\n\n >>> LoadPlanBuilder().const(3, 1000).line(5, 10, 5000).instances\n 10\n\n >>> LoadPlanBuilder().line(1, 100, 60000).instances\n 100"}
{"_id": "q_2786", "text": "format level str"}
{"_id": "q_2787", "text": "add right panel widget"}
{"_id": "q_2788", "text": "Send request to writer service."}
{"_id": "q_2789", "text": "Tells core to take plugin options and instantiate plugin classes"}
{"_id": "q_2790", "text": "Retrieve a plugin of desired class, KeyError raised otherwise"}
{"_id": "q_2791", "text": "Retrieve a list of plugins of desired class, KeyError raised otherwise"}
{"_id": "q_2792", "text": "Move or copy single file to artifacts dir"}
{"_id": "q_2793", "text": "Add file to be stored as result artifact on post-process phase"}
{"_id": "q_2794", "text": "Generate temp file name in artifacts base dir\n and close temp file handle"}
{"_id": "q_2795", "text": "Read configs set into storage"}
{"_id": "q_2796", "text": "Flush current stat to file"}
{"_id": "q_2797", "text": "return sections with specified prefix"}
{"_id": "q_2798", "text": "Return all items found in this chunk"}
{"_id": "q_2799", "text": "returns info object"}
{"_id": "q_2800", "text": "Prepare for monitoring - install agents etc"}
{"_id": "q_2801", "text": "Poll agents for data"}
{"_id": "q_2802", "text": "sends pending data set to listeners"}
{"_id": "q_2803", "text": "decode agents jsons, count diffs"}
{"_id": "q_2804", "text": "Perform one request, possibly raising RetryException in the case\n the response is 429. Otherwise, if error text contains \"code\" string,\n then it decodes to json object and returns APIError.\n Returns the body json in the 200 status."}
{"_id": "q_2805", "text": "Request a new order"}
{"_id": "q_2806", "text": "Get an order"}
{"_id": "q_2807", "text": "Result may be a python dictionary, array or a primitive type\n that can be converted to JSON for writing back the result."}
{"_id": "q_2808", "text": "Writes the message as part of the response and sets 404 status."}
{"_id": "q_2809", "text": "Makes the base dict for the response.\n The status is the string value for\n the key \"status\" of the response. This\n should be \"success\" or \"failure\"."}
{"_id": "q_2810", "text": "Makes the python dict corresponding to the\n JSON that needs to be sent for a successful\n response. Result is the actual payload\n that gets sent."}
{"_id": "q_2811", "text": "Helper function to get request argument.\n Raises exception if argument is missing.\n Returns the cluster argument."}
{"_id": "q_2812", "text": "Helper function to get request argument.\n Raises exception if argument is missing.\n Returns the role argument."}
{"_id": "q_2813", "text": "Helper function to get request argument.\n Raises exception if argument is missing.\n Returns the environ argument."}
{"_id": "q_2814", "text": "Helper function to get topology argument.\n Raises exception if argument is missing.\n Returns the topology argument."}
{"_id": "q_2815", "text": "Helper function to get starttime argument.\n Raises exception if argument is missing.\n Returns the starttime argument."}
{"_id": "q_2816", "text": "Helper function to get endtime argument.\n Raises exception if argument is missing.\n Returns the endtime argument."}
{"_id": "q_2817", "text": "Helper function to get length argument.\n Raises exception if argument is missing.\n Returns the length argument."}
{"_id": "q_2818", "text": "Helper function to get metricname arguments.\n Notice that it is get_argument\"s\" variation, which means that this can be repeated.\n Raises exception if argument is missing.\n Returns a list of metricname arguments"}
{"_id": "q_2819", "text": "Tries to connect to the Heron Server\n\n ``loop()`` method needs to be called after this."}
{"_id": "q_2820", "text": "Registers protobuf message builders that this client wants to receive\n\n :param msg_builder: callable to create a protobuf message that this client wants to receive"}
{"_id": "q_2821", "text": "This will extract heron directory from .pex file.\n\n For example,\n when __file__ is '/Users/heron-user/bin/heron/heron/tools/common/src/python/utils/config.pyc', and\n its real path is '/Users/heron-user/.heron/bin/heron/tools/common/src/python/utils/config.pyc',\n the internal variable ``path`` would be '/Users/heron-user/.heron', which is the heron directory\n\n This means the variable `go_above_dirs` below is 9.\n\n :return: root location of the .pex file"}
{"_id": "q_2822", "text": "if role is not provided, supply userid\n if environ is not provided, supply 'default'"}
{"_id": "q_2823", "text": "Parse the command line for overriding the defaults and\n create an override file."}
{"_id": "q_2824", "text": "Get the path of java executable"}
{"_id": "q_2825", "text": "Check if the release.yaml file exists"}
{"_id": "q_2826", "text": "Print version from release.yaml\n\n :param zipped_pex: True if the PEX file is built with flag `zip_safe=False'."}
{"_id": "q_2827", "text": "Returns the UUID with which the watch is\n registered. This UUID can be used to unregister\n the watch.\n Returns None if watch could not be registered.\n\n The argument 'callback' must be a function that takes\n exactly one argument, the topology on which\n the watch was triggered.\n Note that the watch will be unregistered in case\n it raises any Exception the first time.\n\n This callback is also called at the time\n of registration."}
{"_id": "q_2828", "text": "Unregister the watch with the given UUID."}
{"_id": "q_2829", "text": "Call all the callbacks.\n If any callback raises an Exception,\n unregister the corresponding watch."}
{"_id": "q_2830", "text": "set physical plan"}
{"_id": "q_2831", "text": "set packing plan"}
{"_id": "q_2832", "text": "set execution state"}
{"_id": "q_2833", "text": "Number of spouts + bolts"}
{"_id": "q_2834", "text": "Get the current state of this topology.\n The state values are from the topology.proto\n RUNNING = 1, PAUSED = 2, KILLED = 3\n if the state is None \"Unknown\" is returned."}
{"_id": "q_2835", "text": "Sync the topologies with the statemgrs."}
{"_id": "q_2836", "text": "Returns all the topologies for a given state manager."}
{"_id": "q_2837", "text": "Returns the representation of execution state that will\n be returned from Tracker."}
{"_id": "q_2838", "text": "Returns the representation of scheduler location that will\n be returned from Tracker."}
{"_id": "q_2839", "text": "Returns the representation of tmaster that will\n be returned from Tracker."}
{"_id": "q_2840", "text": "validate extra link"}
{"_id": "q_2841", "text": "Emits a new tuple from this Spout\n\n It is compatible with StreamParse API.\n\n :type tup: list or tuple\n :param tup: the new output Tuple to send from this spout,\n should contain only serializable data.\n :type tup_id: str or object\n :param tup_id: the ID for the Tuple. Leave this blank for an unreliable emit.\n (Same as messageId in Java)\n :type stream: str\n :param stream: the ID of the stream this Tuple should be emitted to.\n Leave empty to emit to the default stream.\n :type direct_task: int\n :param direct_task: the task to send the Tuple to if performing a direct emit.\n :type need_task_ids: bool\n :param need_task_ids: indicate whether or not you would like the task IDs to which the Tuple was emitted."}
{"_id": "q_2842", "text": "normalize raw logical plan info to table"}
{"_id": "q_2843", "text": "filter to keep bolts"}
{"_id": "q_2844", "text": "get physical plan"}
{"_id": "q_2845", "text": "create physical plan"}
{"_id": "q_2846", "text": "get execution state"}
{"_id": "q_2847", "text": "Helper function to get execution state with\n a callback. The future watch is placed\n only if isWatching is True."}
{"_id": "q_2848", "text": "Deserializes Java primitive data and objects serialized by ObjectOutputStream\n from a file-like object."}
{"_id": "q_2849", "text": "copy an object"}
{"_id": "q_2850", "text": "Fetches Instance jstack from heron-shell."}
{"_id": "q_2851", "text": "Create the parse for the update command"}
{"_id": "q_2852", "text": "flatten extra args"}
{"_id": "q_2853", "text": "Checks if a given gtype is sane"}
{"_id": "q_2854", "text": "Custom grouping from a given implementation of ICustomGrouping\n\n :param customgrouper: The ICustomGrouping implementation to use"}
{"_id": "q_2855", "text": "Update the value of CountMetric or MultiCountMetric\n\n :type name: str\n :param name: name of the registered metric to be updated.\n :type incr_by: int\n :param incr_by: specifies how much to increment. Default is 1.\n :type key: str or None\n :param key: specifies a key for MultiCountMetric. Needs to be `None` for updating CountMetric."}
{"_id": "q_2856", "text": "Update the value of ReducedMetric or MultiReducedMetric\n\n :type name: str\n :param name: name of the registered metric to be updated.\n :param value: specifies a value to be reduced.\n :type key: str or None\n :param key: specifies a key for MultiReducedMetric. Needs to be `None` for updating\n ReducedMetric."}
{"_id": "q_2857", "text": "Apply updates to the execute metrics"}
{"_id": "q_2858", "text": "Apply updates to the deserialization metrics"}
{"_id": "q_2859", "text": "Registers a given metric\n\n :param name: name of the metric\n :param metric: IMetric object to be registered\n :param time_bucket_in_sec: time interval for update to the metrics manager"}
{"_id": "q_2860", "text": "Offer to the buffer\n\n It is a non-blocking operation, and when the buffer is full, it raises Queue.Full exception"}
{"_id": "q_2861", "text": "Parse version to major, minor, patch, pre-release, build parts."}
{"_id": "q_2862", "text": "Returns all the file state_managers."}
{"_id": "q_2863", "text": "Increments the value of a given key by ``to_add``"}
{"_id": "q_2864", "text": "Adds a new key to this metric"}
{"_id": "q_2865", "text": "Add a new data tuple to the currently buffered set of tuples"}
{"_id": "q_2866", "text": "Add the checkpoint state message to be sent back the stmgr\n\n :param ckpt_id: The id of the checkpoint\n :ckpt_state: The checkpoint state"}
{"_id": "q_2867", "text": "Check if an entry in the class path exists as either a directory or a file"}
{"_id": "q_2868", "text": "Given a java classpath, check whether the path entries are valid or not"}
{"_id": "q_2869", "text": "Get a list of paths to included dependencies in the specified pex file\n\n Note that dependencies are located under `.deps` directory"}
{"_id": "q_2870", "text": "Loads pex file and its dependencies to the current python path"}
{"_id": "q_2871", "text": "Resolves duplicate package suffix problems\n\n When dynamically loading a pex file and a corresponding python class (bolt/spout/topology),\n if the top level package in which to-be-loaded classes reside is named 'heron', the path conflicts\n with this Heron Instance pex package (heron.instance.src.python...), making the Python\n interpreter unable to find the target class in a given pex file.\n This function resolves this issue by individually loading packages with suffix `heron` to\n avoid this issue.\n\n However, if a dependent module/class that is not directly specified under ``class_path``\n and has conflicts with other native heron packages, there is a possibility that\n such a class/module might not be imported correctly. For example, if a given ``class_path`` was\n ``heron.common.src.module.Class``, but it has a dependent module (such as by import statement),\n ``heron.common.src.python.dep_module.DepClass`` for example, pex_loader does not guarantee that\n ``DepClass` is imported correctly. This is because ``heron.common.src.python.dep_module`` is not\n explicitly added to sys.path, while ``heron.common.src.python`` module exists as the native heron\n package, from which ``dep_module`` cannot be found, so Python interpreter may raise ImportError.\n\n The best way to avoid this issue is NOT to dynamically load a pex file whose top level package\n name is ``heron``. Note that this method is included because some of the example topologies and\n tests have to have a pex with its top level package name of ``heron``."}
{"_id": "q_2872", "text": "Builds the topology and returns the builder"}
{"_id": "q_2873", "text": "For each kvp in config, do wildcard substitution on the values"}
{"_id": "q_2874", "text": "set default time"}
{"_id": "q_2875", "text": "Process a single tuple of input\n\n We add the (time, tuple) pair into our current_tuples. And then look for expiring\n elemnents"}
{"_id": "q_2876", "text": "Called every slide_interval"}
{"_id": "q_2877", "text": "Called every window_duration"}
{"_id": "q_2878", "text": "Get summary of stream managers registration summary"}
{"_id": "q_2879", "text": "Set up log, process and signal handlers"}
{"_id": "q_2880", "text": "Register exit handlers, initialize the executor and run it."}
{"_id": "q_2881", "text": "Returns the processes to handle streams, including the stream-mgr and the user code containing\n the stream logic of the topology"}
{"_id": "q_2882", "text": "For the given packing_plan, return the container plan with the given container_id. If protobufs\n supported maps, we could just get the plan by id, but it doesn't so we have a collection of\n containers to iterate over."}
{"_id": "q_2883", "text": "Get a map from all daemon services' name to the command to start them"}
{"_id": "q_2884", "text": "Start all commands and add them to the dict of processes to be monitored"}
{"_id": "q_2885", "text": "Monitor all processes in processes_to_monitor dict,\n restarting any if they fail, up to max_runs times."}
{"_id": "q_2886", "text": "Determines the commands to be run and compares them with the existing running commands.\n Then starts new ones required and kills old ones no longer required."}
{"_id": "q_2887", "text": "Builds the topology and submits it"}
{"_id": "q_2888", "text": "Force every module in modList to be placed into main"}
{"_id": "q_2889", "text": "Loads additional properties into class `cls`."}
{"_id": "q_2890", "text": "Returns last n lines from the filename. No exception handling"}
{"_id": "q_2891", "text": "Returns a serializer for a given context"}
{"_id": "q_2892", "text": "Registers a new timer task\n\n :param task: function to be run at a specified second from now\n :param second: how many seconds to wait before the timer is triggered"}
{"_id": "q_2893", "text": "Get the next timeout from now\n\n This should be used from do_wait().\n :returns (float) next_timeout, or 10.0 if there are no timer events"}
{"_id": "q_2894", "text": "Returns a parse tree for the query, each of the node is a\n subclass of Operator. This is both a lexical as well as syntax analyzer step."}
{"_id": "q_2895", "text": "Indicate that processing of a Tuple has failed\n\n It is compatible with StreamParse API."}
{"_id": "q_2896", "text": "Template slave config file"}
{"_id": "q_2897", "text": "Template scheduler.yaml"}
{"_id": "q_2898", "text": "Tempate uploader.yaml"}
{"_id": "q_2899", "text": "Template statemgr.yaml"}
{"_id": "q_2900", "text": "template heron tools"}
{"_id": "q_2901", "text": "get cluster info for standalone cluster"}
{"_id": "q_2902", "text": "Start a Heron standalone cluster"}
{"_id": "q_2903", "text": "Start Heron tracker and UI"}
{"_id": "q_2904", "text": "Wait for a nomad master to start"}
{"_id": "q_2905", "text": "Tar a directory"}
{"_id": "q_2906", "text": "Start master nodes"}
{"_id": "q_2907", "text": "read config files to get roles"}
{"_id": "q_2908", "text": "check if this host is this addr"}
{"_id": "q_2909", "text": "Resolve all symbolic references that `src` points to. Note that this\n is different than `os.path.realpath` as path components leading up to\n the final location may still be symbolic links."}
{"_id": "q_2910", "text": "normalize raw result to table"}
{"_id": "q_2911", "text": "Monitor the rootpath and call the callback\n corresponding to the change.\n This monitoring happens periodically. This function\n is called in a seperate thread from the main thread,\n because it sleeps for the intervals between each poll."}
{"_id": "q_2912", "text": "Get physical plan of a topology"}
{"_id": "q_2913", "text": "Get execution state"}
{"_id": "q_2914", "text": "Get scheduler location"}
{"_id": "q_2915", "text": "Creates SocketOptions object from a given sys_config dict"}
{"_id": "q_2916", "text": "Retrieves heron options from the `HERON_OPTIONS` environment variable.\n\n Heron options have the following format:\n\n cmdline.topologydefn.tmpdirectory=/var/folders/tmpdir\n cmdline.topology.initial.state=PAUSED\n\n In this case, the returned map will contain:\n\n #!json\n {\n \"cmdline.topologydefn.tmpdirectory\": \"/var/folders/tmpdir\",\n \"cmdline.topology.initial.state\": \"PAUSED\"\n }\n\n Currently supports the following options natively:\n\n - `cmdline.topologydefn.tmpdirectory`: (required) the directory to which this\n topology's defn file is written\n - `cmdline.topology.initial.state`: (default: \"RUNNING\") the initial state of the topology\n - `cmdline.topology.name`: (default: class name) topology name on deployment\n\n Returns: map mapping from key to value"}
{"_id": "q_2917", "text": "Add specs to the topology\n\n :type specs: HeronComponentSpec\n :param specs: specs to add to the topology"}
{"_id": "q_2918", "text": "Set topology-wide configuration to the topology\n\n :type config: dict\n :param config: topology-wide config"}
{"_id": "q_2919", "text": "Builds the topology and submits to the destination"}
{"_id": "q_2920", "text": "map from query parameter to query name"}
{"_id": "q_2921", "text": "Synced API call to get logical plans"}
{"_id": "q_2922", "text": "Synced API call to get topology information"}
{"_id": "q_2923", "text": "Synced API call to get component metrics"}
{"_id": "q_2924", "text": "Configure logger which dumps log on terminal\n\n :param level: logging level: info, warning, verbose...\n :type level: logging level\n :param logfile: log file name, default to None\n :type logfile: string\n :return: None\n :rtype: None"}
{"_id": "q_2925", "text": "Initializes a rotating logger\n\n It also makes sure that any StreamHandler is removed, so as to avoid stdout/stderr\n constipation issues"}
{"_id": "q_2926", "text": "simply set verbose level based on command-line args\n\n :param cl_args: CLI arguments\n :type cl_args: dict\n :return: None\n :rtype: None"}
{"_id": "q_2927", "text": "Returns Spout protobuf message"}
{"_id": "q_2928", "text": "Returns Component protobuf message"}
{"_id": "q_2929", "text": "Returns component-specific Config protobuf message\n\n It first adds ``topology.component.parallelism``, and is overriden by\n a user-defined component-specific configuration, specified by spec()."}
{"_id": "q_2930", "text": "Adds outputs to a given protobuf Bolt or Spout message"}
{"_id": "q_2931", "text": "Returns a set of output stream ids registered for this component"}
{"_id": "q_2932", "text": "Returns a StreamId protobuf message"}
{"_id": "q_2933", "text": "Returns a StreamSchema protobuf message"}
{"_id": "q_2934", "text": "Returns component_id of this GlobalStreamId\n\n Note that if HeronComponentSpec is specified as componentId and its name is not yet\n available (i.e. when ``name`` argument was not given in ``spec()`` method in Bolt or Spout),\n this property returns a message with uuid. However, this is provided only for safety\n with __eq__(), __str__(), and __hash__() methods, and not meant to be called explicitly\n before TopologyType class finally sets the name attribute of HeronComponentSpec."}
{"_id": "q_2935", "text": "Registers a new metric to this context"}
{"_id": "q_2936", "text": "Returns the declared inputs to specified component\n\n :return: map <streamId namedtuple (same structure as protobuf msg) -> gtype>, or\n None if not found"}
{"_id": "q_2937", "text": "invoke task hooks for every time spout acks a tuple\n\n :type message_id: str\n :param message_id: message id to which an acked tuple was anchored\n :type complete_latency_ns: float\n :param complete_latency_ns: complete latency in nano seconds"}
{"_id": "q_2938", "text": "invoke task hooks for every time spout fails a tuple\n\n :type message_id: str\n :param message_id: message id to which a failed tuple was anchored\n :type fail_latency_ns: float\n :param fail_latency_ns: fail latency in nano seconds"}
{"_id": "q_2939", "text": "invoke task hooks for every time bolt processes a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is executed\n :type execute_latency_ns: float\n :param execute_latency_ns: execute latency in nano seconds"}
{"_id": "q_2940", "text": "invoke task hooks for every time bolt fails a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is failed\n :type fail_latency_ns: float\n :param fail_latency_ns: fail latency in nano seconds"}
{"_id": "q_2941", "text": "Extract and execute the java files inside the tar and then add topology\n definition file created by running submitTopology\n\n We use the packer to make a package for the tar and dump it\n to a well-known location. We then run the main method of class\n with the specified arguments. We pass arguments as an environment variable HERON_OPTIONS.\n This will run the jar file with the topology class name.\n\n The submitter inside will write out the topology defn file to a location\n that we specify. Then we write the topology defn file to a well known\n packer location. We then write to appropriate places in zookeeper\n and launch the aurora jobs\n :param cl_args:\n :param unknown_args:\n :param tmp_dir:\n :return:"}
{"_id": "q_2942", "text": "Makes the http endpoint for the heron shell\n if shell port is present, otherwise returns None."}
{"_id": "q_2943", "text": "Make the url for log-file data in heron-shell\n from the info stored in stmgr."}
{"_id": "q_2944", "text": "Sends this outgoing packet to dispatcher's socket"}
{"_id": "q_2945", "text": "Creates an IncomingPacket object from header and data\n\n This method is for testing purposes"}
{"_id": "q_2946", "text": "Reads incoming data from asyncore.dispatcher"}
{"_id": "q_2947", "text": "Reads yaml config file and returns auto-typed config_dict"}
{"_id": "q_2948", "text": "Send messages in out_stream to the Stream Manager"}
{"_id": "q_2949", "text": "Called when state change is commanded by stream manager"}
{"_id": "q_2950", "text": "Checks if a given stream_id and tuple matches with the output schema\n\n :type stream_id: str\n :param stream_id: stream id into which tuple is sent\n :type tup: list\n :param tup: tuple that is going to be sent"}
{"_id": "q_2951", "text": "Adds the target component\n\n :type stream_id: str\n :param stream_id: stream id into which tuples are emitted\n :type task_ids: list of str\n :param task_ids: list of task ids to which tuples are emitted\n :type grouping: ICustomStreamGrouping object\n :param grouping: custom grouping to use\n :type source_comp_name: str\n :param source_comp_name: source component name"}
{"_id": "q_2952", "text": "Prepares the custom grouping for this component"}
{"_id": "q_2953", "text": "Format a line in the directory list based on the file's type and other attributes."}
{"_id": "q_2954", "text": "Format the date associated with a file to be displayed in directory listing."}
{"_id": "q_2955", "text": "Prefix to a filename in the directory listing. This is to make the\n listing similar to an output of \"ls -alh\"."}
{"_id": "q_2956", "text": "Read a chunk of a file from an offset upto the length."}
{"_id": "q_2957", "text": "Runs the command and returns its stdout and stderr."}
{"_id": "q_2958", "text": "Feed output of one command to the next and return final output\n Returns string output of chained application of commands."}
{"_id": "q_2959", "text": "normalize raw metrics API result to table"}
{"_id": "q_2960", "text": "run metrics subcommand"}
{"_id": "q_2961", "text": "run containers subcommand"}
{"_id": "q_2962", "text": "Creates a HeronTuple\n\n :param stream: protobuf message ``StreamId``\n :param tuple_key: tuple id\n :param values: a list of values\n :param roots: a list of protobuf message ``RootId``"}
{"_id": "q_2963", "text": "Creates a RootTupleInfo"}
{"_id": "q_2964", "text": "Updates the list of global error suppressions.\n\n Parses any lint directives in the file that have global effect.\n\n Args:\n lines: An array of strings, each representing a line of the file, with the\n last element being empty if the file is terminated with a newline."}
{"_id": "q_2965", "text": "Searches the string for the pattern, caching the compiled regexp."}
{"_id": "q_2966", "text": "Removes C++11 raw strings from lines.\n\n Before:\n static const char kData[] = R\"(\n multi-line string\n )\";\n\n After:\n static const char kData[] = \"\"\n (replaced by blank line)\n \"\";\n\n Args:\n raw_lines: list of raw lines.\n\n Returns:\n list of lines with C++11 raw strings replaced by empty strings."}
{"_id": "q_2967", "text": "We are inside a comment, find the end marker."}
{"_id": "q_2968", "text": "Clears a range of lines for multi-line comments."}
{"_id": "q_2969", "text": "Find the position just after the end of current parenthesized expression.\n\n Args:\n line: a CleansedLines line.\n startpos: start searching at this position.\n stack: nesting stack at startpos.\n\n Returns:\n On finding matching end: (index just after matching end, None)\n On finding an unclosed expression: (-1, None)\n Otherwise: (-1, new stack at end of this line)"}
{"_id": "q_2970", "text": "Find position at the matching start of current expression.\n\n This is almost the reverse of FindEndOfExpressionInLine, but note\n that the input position and returned position differs by 1.\n\n Args:\n line: a CleansedLines line.\n endpos: start searching at this position.\n stack: nesting stack at endpos.\n\n Returns:\n On finding matching start: (index at matching start, None)\n On finding an unclosed expression: (-1, None)\n Otherwise: (-1, new stack at beginning of this line)"}
{"_id": "q_2971", "text": "If input points to ) or } or ] or >, finds the position that opens it.\n\n If lines[linenum][pos] points to a ')' or '}' or ']' or '>', finds the\n linenum/pos that correspond to the opening of the expression.\n\n Args:\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n pos: A position on the line.\n\n Returns:\n A tuple (line, linenum, pos) pointer *at* the opening brace, or\n (line, 0, -1) if we never find the matching opening brace. Note\n we ignore strings and comments when matching; and the line we\n return is the 'cleansed' line at linenum."}
{"_id": "q_2972", "text": "Logs an error if no Copyright message appears at the top of the file."}
{"_id": "q_2973", "text": "Returns the CPP variable that should be used as a header guard.\n\n Args:\n filename: The name of a C++ header file.\n\n Returns:\n The CPP variable that should be used as a header guard in the\n named file."}
{"_id": "q_2974", "text": "Checks that the file contains a header guard.\n\n Logs an error if no #ifndef header guard is present. For other\n headers, checks that the full pathname is used.\n\n Args:\n filename: The name of the C++ header file.\n clean_lines: A CleansedLines instance containing the file.\n error: The function to call with any errors found."}
{"_id": "q_2975", "text": "Logs an error if a source file does not include its header."}
{"_id": "q_2976", "text": "Logs an error for each line containing bad characters.\n\n Two kinds of bad characters:\n\n 1. Unicode replacement characters: These indicate that either the file\n contained invalid UTF-8 (likely) or Unicode replacement characters (which\n it shouldn't). Note that it's possible for this to throw off line\n numbering if the invalid UTF-8 occurred adjacent to a newline.\n\n 2. NUL bytes. These are problematic for some tools.\n\n Args:\n filename: The name of the current file.\n lines: An array of strings, each representing a line of the file.\n error: The function to call with any errors found."}
{"_id": "q_2977", "text": "Checks for calls to thread-unsafe functions.\n\n Much code has been originally written without consideration of\n multi-threading. Also, engineers are relying on their old experience;\n they have learned posix before threading extensions were added. These\n tests guide the engineers to use thread-safe functions (when using\n posix directly).\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2978", "text": "Checks for the correctness of various spacing issues in the code.\n\n Things we check for: spaces around operators, spaces after\n if/for/while/switch, no spaces around parens in function calls, two\n spaces between code and comment, don't start a block with a blank\n line, don't end a function with a blank line, don't add a blank line\n after public/protected/private, don't have too many blank lines in a row.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n nesting_state: A NestingState instance which maintains information about\n the current stack of nested blocks being parsed.\n error: The function to call with any errors found."}
{"_id": "q_2979", "text": "Checks for horizontal spacing around parentheses.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2980", "text": "Check if expression looks like a type name, returns true if so.\n\n Args:\n clean_lines: A CleansedLines instance containing the file.\n nesting_state: A NestingState instance which maintains information about\n the current stack of nested blocks being parsed.\n expr: The expression to check.\n Returns:\n True, if token looks like a type."}
{"_id": "q_2981", "text": "Checks for additional blank line issues related to sections.\n\n Currently the only thing checked here is blank line before protected/private.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n class_info: A _ClassInfo objects.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2982", "text": "Return the most recent non-blank line and its line number.\n\n Args:\n clean_lines: A CleansedLines instance containing the file contents.\n linenum: The number of the line to check.\n\n Returns:\n A tuple with two elements. The first element is the contents of the last\n non-blank line before the current line, or the empty string if this is the\n first non-blank line. The second is the line number of that line, or -1\n if this is the first non-blank line."}
{"_id": "q_2983", "text": "Looks for redundant trailing semicolon.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2984", "text": "Find a replaceable CHECK-like macro.\n\n Args:\n line: line to search on.\n Returns:\n (macro name, start position), or (None, -1) if no replaceable\n macro is found."}
{"_id": "q_2985", "text": "Checks the use of CHECK and EXPECT macros.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2986", "text": "Check alternative keywords being used in boolean expressions.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2987", "text": "Determines the width of the line in column positions.\n\n Args:\n line: A string, which may be a Unicode string.\n\n Returns:\n The width of the line in column positions, accounting for Unicode\n combining characters and wide characters."}
{"_id": "q_2988", "text": "Drops common suffixes like _test.cc or -inl.h from filename.\n\n For example:\n >>> _DropCommonSuffixes('foo/foo-inl.h')\n 'foo/foo'\n >>> _DropCommonSuffixes('foo/bar/foo.cc')\n 'foo/bar/foo'\n >>> _DropCommonSuffixes('foo/foo_internal.h')\n 'foo/foo'\n >>> _DropCommonSuffixes('foo/foo_unusualinternal.h')\n 'foo/foo_unusualinternal'\n\n Args:\n filename: The input filename.\n\n Returns:\n The filename with the common suffix removed."}
{"_id": "q_2989", "text": "Figures out what kind of header 'include' is.\n\n Args:\n fileinfo: The current file cpplint is running over. A FileInfo instance.\n include: The path to a #included file.\n is_system: True if the #include used <> rather than \"\".\n\n Returns:\n One of the _XXX_HEADER constants.\n\n For example:\n >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'stdio.h', True)\n _C_SYS_HEADER\n >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'string', True)\n _CPP_SYS_HEADER\n >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/foo.h', False)\n _LIKELY_MY_HEADER\n >>> _ClassifyInclude(FileInfo('foo/foo_unknown_extension.cc'),\n ... 'bar/foo_other_ext.h', False)\n _POSSIBLE_MY_HEADER\n >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/bar.h', False)\n _OTHER_HEADER"}
{"_id": "q_2990", "text": "Check for unsafe global or static objects.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2991", "text": "Check for printf related issues.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2992", "text": "Check if current line is inside constructor initializer list.\n\n Args:\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n Returns:\n True if current line appears to be inside constructor initializer\n list, False otherwise."}
{"_id": "q_2993", "text": "Check for non-const references.\n\n Separate from CheckLanguage since it scans backwards from current\n line, instead of scanning forward.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n nesting_state: A NestingState instance which maintains information about\n the current stack of nested blocks being parsed.\n error: The function to call with any errors found."}
{"_id": "q_2994", "text": "Check if these two filenames belong to the same module.\n\n The concept of a 'module' here is a as follows:\n foo.h, foo-inl.h, foo.cc, foo_test.cc and foo_unittest.cc belong to the\n same 'module' if they are in the same directory.\n some/path/public/xyzzy and some/path/internal/xyzzy are also considered\n to belong to the same module here.\n\n If the filename_cc contains a longer path than the filename_h, for example,\n '/absolute/path/to/base/sysinfo.cc', and this file would include\n 'base/sysinfo.h', this function also produces the prefix needed to open the\n header. This is used by the caller of this function to more robustly open the\n header file. We don't have access to the real include paths in this context,\n so we need this guesswork here.\n\n Known bugs: tools/base/bar.cc and base/bar.h belong to the same module\n according to this implementation. Because of this, this function gives\n some false positives. This should be sufficiently rare in practice.\n\n Args:\n filename_cc: is the path for the source (e.g. .cc) file\n filename_h: is the path for the header path\n\n Returns:\n Tuple with a bool and a string:\n bool: True if filename_cc and filename_h belong to the same module.\n string: the additional prefix needed to open the header file."}
{"_id": "q_2995", "text": "Check that make_pair's template arguments are deduced.\n\n G++ 4.6 in C++11 mode fails badly if make_pair's template arguments are\n specified explicitly, and such use isn't intended in any case.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2996", "text": "Check if line contains a redundant \"virtual\" function-specifier.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2997", "text": "Flag those C++14 features that we restrict.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."}
{"_id": "q_2998", "text": "Performs lint checks and reports any errors to the given error function.\n\n Args:\n filename: Filename of the file that is being processed.\n file_extension: The extension (dot not included) of the file.\n lines: An array of strings, each representing a line of the file, with the\n last element being empty if the file is terminated with a newline.\n error: A callable to which errors are reported, which takes 4 arguments:\n filename, line number, error level, and message\n extra_check_functions: An array of additional check functions that will be\n run on each source line. Each function takes 4\n arguments: filename, clean_lines, line, error"}
{"_id": "q_2999", "text": "Loads the configuration files and processes the config overrides.\n\n Args:\n filename: The name of the file being processed by the linter.\n\n Returns:\n False if the current |filename| should not be processed further."}
{"_id": "q_3000", "text": "Parses the command line arguments.\n\n This may set the output format and verbosity level as side-effects.\n\n Args:\n args: The command line arguments:\n\n Returns:\n The list of filenames to lint."}
{"_id": "q_3001", "text": "Searches a list of filenames and replaces directories in the list with\n all files descending from those directories. Files with extensions not in\n the valid extensions list are excluded.\n\n Args:\n filenames: A list of files or directories\n\n Returns:\n A list of all files that are members of filenames or descended from a\n directory in filenames"}
{"_id": "q_3002", "text": "Check if a header has already been included.\n\n Args:\n header: header to check.\n Returns:\n Line number of previous occurrence, or -1 if the header has not\n been seen before."}
{"_id": "q_3003", "text": "Returns a non-empty error message if the next header is out of order.\n\n This function also updates the internal state to be ready to check\n the next include.\n\n Args:\n header_type: One of the _XXX_HEADER constants defined above.\n\n Returns:\n The empty string if the header is in the right order, or an\n error message describing what's wrong."}
{"_id": "q_3004", "text": "Bumps the module's error statistic."}
{"_id": "q_3005", "text": "Print a summary of errors by category, and the total."}
{"_id": "q_3006", "text": "Collapses strings and chars on a line to simple \"\" or '' blocks.\n\n We nix strings first so we're not fooled by text like '\"http://\"'\n\n Args:\n elided: The line being processed.\n\n Returns:\n The line with collapsed strings."}
{"_id": "q_3007", "text": "Check end of namespace comments."}
{"_id": "q_3008", "text": "Update preprocessor stack.\n\n We need to handle preprocessors due to classes like this:\n #ifdef SWIG\n struct ResultDetailsPageElementExtensionPoint {\n #else\n struct ResultDetailsPageElementExtensionPoint : public Extension {\n #endif\n\n We make the following assumptions (good enough for most files):\n - Preprocessor condition evaluates to true from #if up to first\n #else/#elif/#endif.\n\n - Preprocessor condition evaluates to false from #else/#elif up\n to #endif. We still perform lint checks on these lines, but\n these do not affect nesting stack.\n\n Args:\n line: current line to check."}
{"_id": "q_3009", "text": "Get class info on the top of the stack.\n\n Returns:\n A _ClassInfo object if we are inside a class, or None otherwise."}
{"_id": "q_3010", "text": "Checks that all classes and namespaces have been completely parsed.\n\n Call this when all lines in a file have been processed.\n Args:\n filename: The name of the current file.\n error: The function to call with any errors found."}
{"_id": "q_3011", "text": "Return a new Streamlet by applying map_function to each element of this Streamlet\n and flattening the result"}
{"_id": "q_3012", "text": "Return a new Streamlet containing only the elements that satisfy filter_function"}
{"_id": "q_3013", "text": "Return num_clones number of streamlets each containing all elements\n of the current streamlet"}
{"_id": "q_3014", "text": "Return a new Streamlet in which each element of this Streamlet is collected\n over a window defined by window_config and then reduced using the reduce_function.\n reduce_function takes two elements at a time and reduces them to one element that\n is used in the subsequent operations."}
{"_id": "q_3015", "text": "Returns a new Streamlet that consists of elements of both this and other_streamlet"}
{"_id": "q_3016", "text": "Return a new Streamlet by joining join_streamlet with this streamlet"}
{"_id": "q_3017", "text": "Return a new Streamlet by left join_streamlet with this streamlet"}
{"_id": "q_3018", "text": "extract common args"}
{"_id": "q_3019", "text": "Ensures argument obj is a native Python dictionary, raises an exception if not, and otherwise\n returns obj."}
{"_id": "q_3020", "text": "Ensures argument obj is either a dictionary or None; if the latter, instantiates an empty\n dictionary."}
{"_id": "q_3021", "text": "Record a stream of event records to json"}
{"_id": "q_3022", "text": "Read a config file and instantiate the RCParser.\n\n Create new :class:`configparser.ConfigParser` for the given **path**\n and instantiate the :class:`RCParser` with the ConfigParser as\n :attr:`config` attribute.\n\n If the **path** doesn't exist, raise :exc:`ConfigFileError`.\n Otherwise return a new :class:`RCParser` instance.\n\n :param path:\n Optional path to the config file to parse.\n If not given, use ``'~/.pypirc'``."}
{"_id": "q_3023", "text": "This recursive descent thing formats a config dict for GraphQL."}
{"_id": "q_3024", "text": "Get a pipeline by name. Only constructs that pipeline and caches it.\n\n Args:\n name (str): Name of the pipeline to retrieve\n\n Returns:\n PipelineDefinition: Instance of PipelineDefinition with that name."}
{"_id": "q_3025", "text": "Return all pipelines as a list\n\n Returns:\n List[PipelineDefinition]:"}
{"_id": "q_3026", "text": "This function polls the process until it returns a valid\n item or returns PROCESS_DEAD_AND_QUEUE_EMPTY if it is in\n a state where the process has terminated and the queue is empty.\n\n Warning: if the child process is in an infinite loop, this will\n also loop infinitely."}
{"_id": "q_3027", "text": "Waits until there are no processes enqueued."}
{"_id": "q_3028", "text": "The schema for configuration data that describes the type, optionality, defaults, and description.\n\n Args:\n dagster_type (DagsterType):\n A ``DagsterType`` describing the schema of this field, ie `Dict({'example': Field(String)})`\n default_value (Any):\n A default value to use that respects the schema provided via dagster_type\n is_optional (bool): Whether the presence of this field is optional\n description (str):"}
{"_id": "q_3029", "text": "Builds the execution plan."}
{"_id": "q_3030", "text": "Here we build a new ExecutionPlan from a pipeline definition and the environment config.\n\n To do this, we iterate through the pipeline's solids in topological order, and hand off the\n execution steps for each solid to a companion _PlanBuilder object.\n\n Once we've processed the entire pipeline, we invoke _PlanBuilder.build() to construct the\n ExecutionPlan object."}
{"_id": "q_3031", "text": "Get the shell commands we'll use to actually build and publish a package to PyPI."}
{"_id": "q_3032", "text": "Tags all submodules for a new release.\n\n Ensures that git tags, as well as the version.py files in each submodule, agree and that the\n new version is strictly greater than the current version. Will fail if the new version\n is not an increment (following PEP 440). Creates a new git tag and commit."}
{"_id": "q_3033", "text": "Create a context definition from a pre-existing context. This can be useful\n in testing contexts where you may want to create a context manually and then\n pass it into a one-off PipelineDefinition\n\n Args:\n context (ExecutionContext): The context that will be provided to the pipeline.\n Returns:\n PipelineContextDefinition: The passthrough context definition."}
{"_id": "q_3034", "text": "A decorator for annotating a function that can take the selected properties\n of a ``config_value`` and an instance of a custom type and materialize it.\n\n Args:\n config_cls (Selector):"}
{"_id": "q_3035", "text": "Automagically wrap a block of text."}
{"_id": "q_3036", "text": "Download an object from s3.\n\n Args:\n info (ExpectationExecutionInfo): Must expose a boto3 S3 client as its `s3` resource.\n\n Returns:\n str:\n The path to the downloaded object."}
{"_id": "q_3037", "text": "Wraps the execution of user-space code in an error boundary. This places a uniform\n policy around user code invoked by the framework. This ensures that all user\n errors are wrapped in the DagsterUserCodeExecutionError, and that the original stack\n trace of the user error is preserved, so that it can be reported without confusing\n framework code in the stack trace, if a tool author wishes to do so. This has\n been especially helpful in a notebooking context."}
{"_id": "q_3038", "text": "The missing mkdir -p functionality in os."}
{"_id": "q_3039", "text": "In the event of pipeline initialization failure, we want to be able to log the failure\n without a dependency on the ExecutionContext to initialize DagsterLog"}
{"_id": "q_3040", "text": "Whether the solid execution was successful"}
{"_id": "q_3041", "text": "Whether the solid execution was skipped"}
{"_id": "q_3042", "text": "Returns transformed value either for DEFAULT_OUTPUT or for the output\n given as output_name. Returns None if execution result isn't a success.\n\n Reconstructs the pipeline context to materialize value."}
{"_id": "q_3043", "text": "Returns the failing step's data that happened during this solid's execution, if any"}
{"_id": "q_3044", "text": "A permissive dict will permit the user to partially specify the permitted fields. Any fields\n that are specified and passed in will be type checked. Other fields will be allowed, but\n will be ignored by the type checker."}
{"_id": "q_3045", "text": "Execute the user-specified transform for the solid. Wrap in an error boundary and do\n all relevant logging and metrics tracking"}
{"_id": "q_3046", "text": "Takes a python cls and creates a type for it in the Dagster domain.\n\n Args:\n existing_type (cls)\n The python type you want to project into the Dagster type system.\n name (Optional[str]):\n description (Optional[str]):\n input_schema (Optional[InputSchema]):\n An instance of a class that inherits from :py:class:`InputSchema` that\n can map config data to a value of this type.\n\n output_schema (Optional[OutputSchema]):\n An instance of a class that inherits from :py:class:`OutputSchema` that\n can map config data to persisting values of this type.\n\n serialization_strategy (Optional[SerializationStrategy]):\n The default behavior for how to serialize this value for\n persisting between execution steps.\n\n storage_plugins (Optional[Dict[RunStorageMode, TypeStoragePlugin]]):\n Storage type specific overrides for the serialization strategy.\n This allows for storage specific optimizations such as efficient\n distributed storage on S3."}
{"_id": "q_3047", "text": "A decorator for creating a resource. The decorated function will be used as the \n resource_fn in a ResourceDefinition."}
{"_id": "q_3048", "text": "Events API v2 enables you to add PagerDuty's advanced event and incident management\n functionality to any system that can make an outbound HTTP connection.\n\n Arguments:\n summary {string} -- A high-level, text summary message of the event. Will be used to\n construct an alert's description.\n\n Example: \"PING OK - Packet loss = 0%, RTA = 1.41 ms\" \"Host\n 'acme-andromeda-sv1-c40 :: 179.21.24.50' is DOWN\"\n\n source {string} -- Specific human-readable unique identifier, such as a hostname, for\n the system having the problem.\n\n Examples:\n \"prod05.theseus.acme-widgets.com\"\n \"171.26.23.22\"\n \"aws:elasticache:us-east-1:852511987:cluster/api-stats-prod-003\"\n \"9c09acd49a25\"\n\n severity {string} -- How impacted the affected system is. Displayed to users in lists\n and influences the priority of any created incidents. Must be one\n of {info, warning, error, critical}\n\n Keyword Arguments:\n event_action {str} -- There are three types of events that PagerDuty recognizes, and\n are used to represent different types of activity in your\n monitored systems. (default: 'trigger')\n * trigger: When PagerDuty receives a trigger event, it will either open a new alert,\n or add a new trigger log entry to an existing alert, depending on the\n provided dedup_key. Your monitoring tools should send PagerDuty a trigger\n when a new problem has been detected. You may send additional triggers\n when a previously detected problem has occurred again.\n\n * acknowledge: acknowledge events cause the referenced incident to enter the\n acknowledged state. While an incident is acknowledged, it won't\n generate any additional notifications, even if it receives new\n trigger events. Your monitoring tools should send PagerDuty an\n acknowledge event when they know someone is presently working on the\n problem.\n\n * resolve: resolve events cause the referenced incident to enter the resolved state.\n Once an incident is resolved, it won't generate any additional\n notifications. New trigger events with the same dedup_key as a resolved\n incident won't re-open the incident. Instead, a new incident will be\n created. Your monitoring tools should send PagerDuty a resolve event when\n the problem that caused the initial trigger event has been fixed.\n\n dedup_key {string} -- Deduplication key for correlating triggers and resolves. The\n maximum permitted length of this property is 255 characters.\n\n timestamp {string} -- Timestamp (ISO 8601). When the upstream system detected / created\n the event. This is useful if a system batches or holds events\n before sending them to PagerDuty.\n\n Optional - Will be auto-generated by PagerDuty if not provided.\n\n Example:\n 2015-07-17T08:42:58.315+0000\n\n component {string} -- The part or component of the affected system that is broken.\n\n Examples:\n \"keepalive\"\n \"webping\"\n \"mysql\"\n \"wqueue\"\n\n group {string} -- A cluster or grouping of sources. For example, sources\n \u201cprod-datapipe-02\u201d and \u201cprod-datapipe-03\u201d might both be part of\n \u201cprod-datapipe\u201d\n\n Examples:\n \"prod-datapipe\"\n \"www\"\n \"web_stack\"\n\n event_class {string} -- The class/type of the event.\n\n Examples:\n \"High CPU\"\n \"Latency\"\n \"500 Error\"\n\n custom_details {Dict[str, str]} -- Additional details about the event and affected\n system.\n\n Example:\n {\"ping time\": \"1500ms\", \"load avg\": 0.75 }"}
{"_id": "q_3049", "text": "Default method to acquire database connection parameters.\n\n Sets connection parameters to match settings.py, and sets\n default values to blank fields."}
{"_id": "q_3050", "text": "Closes the client connection to the database."}
{"_id": "q_3051", "text": "Overrides standard to_python method from django models to allow\r\n correct translation of Mongo array to a python list."}
{"_id": "q_3052", "text": "Filter the queryset for the instance this manager is bound to."}
{"_id": "q_3053", "text": "Computes the matrix of expected false positives for all possible\n sub-intervals of the complete domain of set sizes, assuming uniform\n distribution of set_sizes within each sub-interval.\n\n Args:\n cum_counts: the complete cumulative distribution of set sizes.\n sizes: the complete domain of set sizes.\n\n Return (np.array): the 2-D array of expected number of false positives\n for every pair of [l, u] interval, where l is axis-0 and u is\n axis-1."}
{"_id": "q_3054", "text": "Computes the matrix of expected false positives for all possible\n sub-intervals of the complete domain of set sizes.\n\n Args:\n counts: the complete distribution of set sizes.\n sizes: the complete domain of set sizes.\n\n Return (np.array): the 2-D array of expected number of false positives\n for every pair of [l, u] interval, where l is axis-0 and u is\n axis-1."}
{"_id": "q_3055", "text": "Compute the optimal partitions given a distribution of set sizes.\n\n Args:\n sizes (numpy.array): The complete domain of set sizes in ascending\n order.\n counts (numpy.array): The frequencies of all set sizes in the same\n order as `sizes`.\n num_part (int): The number of partitions to create.\n\n Returns:\n list: A list of partitions in the form of `(lower, upper)` tuples,\n where `lower` and `upper` are lower and upper bound (inclusive)\n set sizes of each partition."}
{"_id": "q_3056", "text": "Compute the byte size after serialization.\n\n Args:\n byteorder (str, optional): This is byte order of the serialized data. Use one\n of the `byte order characters\n <https://docs.python.org/3/library/struct.html#byte-order-size-and-alignment>`_:\n ``@``, ``=``, ``<``, ``>``, and ``!``.\n Default is ``@`` -- the native order.\n\n Returns:\n int: Size in number of bytes after serialization."}
{"_id": "q_3057", "text": "Serialize this lean MinHash and store the result in an allocated buffer.\n\n Args:\n buf (buffer): `buf` must implement the `buffer`_ interface.\n One such example is the built-in `bytearray`_ class.\n byteorder (str, optional): This is byte order of the serialized data. Use one\n of the `byte order characters\n <https://docs.python.org/3/library/struct.html#byte-order-size-and-alignment>`_:\n ``@``, ``=``, ``<``, ``>``, and ``!``.\n Default is ``@`` -- the native order.\n\n This is preferred over using `pickle`_ if the serialized lean MinHash needs\n to be used by another program in a different programming language.\n\n The serialization schema:\n 1. The first 8 bytes is the seed integer\n 2. The next 4 bytes is the number of hash values\n 3. The rest is the serialized hash values, each uses 4 bytes\n\n Example:\n To serialize a single lean MinHash into a `bytearray`_ buffer.\n\n .. code-block:: python\n\n buf = bytearray(lean_minhash.bytesize())\n lean_minhash.serialize(buf)\n\n To serialize multiple lean MinHash into a `bytearray`_ buffer.\n\n .. code-block:: python\n\n # assuming lean_minhashs is a list of LeanMinHash with the same size\n size = lean_minhashs[0].bytesize()\n buf = bytearray(size*len(lean_minhashs))\n for i, lean_minhash in enumerate(lean_minhashs):\n lean_minhash.serialize(buf[i*size:])\n\n .. _`buffer`: https://docs.python.org/3/c-api/buffer.html\n .. _`bytearray`: https://docs.python.org/3.6/library/functions.html#bytearray\n .. _`byteorder`: https://docs.python.org/3/library/struct.html"}
{"_id": "q_3058", "text": "Deserialize a lean MinHash from a buffer.\n\n Args:\n buf (buffer): `buf` must implement the `buffer`_ interface.\n One such example is the built-in `bytearray`_ class.\n byteorder (str, optional): This is byte order of the serialized data. Use one\n of the `byte order characters\n <https://docs.python.org/3/library/struct.html#byte-order-size-and-alignment>`_:\n ``@``, ``=``, ``<``, ``>``, and ``!``.\n Default is ``@`` -- the native order.\n\n Return:\n datasketch.LeanMinHash: The deserialized lean MinHash\n\n Example:\n To deserialize a lean MinHash from a buffer.\n\n .. code-block:: python\n\n lean_minhash = LeanMinHash.deserialize(buf)"}
{"_id": "q_3059", "text": "Update this MinHash with a new value.\n The value will be hashed using the hash function specified by\n the `hashfunc` argument in the constructor.\n\n Args:\n b: The value to be hashed using the hash function specified.\n\n Example:\n To update with a new string value (using the default SHA1 hash\n function, which requires bytes as input):\n\n .. code-block:: python\n\n minhash = MinHash()\n minhash.update(\"new value\".encode('utf-8'))\n\n We can also use a different hash function, for example, `pyfarmhash`:\n\n .. code-block:: python\n\n import farmhash\n def _hash_32(b):\n return farmhash.hash32(b)\n minhash = MinHash(hashfunc=_hash_32)\n minhash.update(\"new value\")"}
{"_id": "q_3060", "text": "Merge the other MinHash with this one, making this one the union\n of both.\n\n Args:\n other (datasketch.MinHash): The other MinHash."}
{"_id": "q_3061", "text": "Create a MinHash which is the union of the MinHash objects passed as arguments.\n\n Args:\n *mhs: The MinHash objects to be united. The argument list length is variable,\n but must be at least 2.\n\n Returns:\n datasketch.MinHash: A new union MinHash."}
{"_id": "q_3062", "text": "Index all sets given their keys, MinHashes, and sizes.\n It can be called only once after the index is created.\n\n Args:\n entries (`iterable` of `tuple`): An iterable of tuples, each must be\n in the form of `(key, minhash, size)`, where `key` is the unique\n identifier of a set, `minhash` is the MinHash of the set,\n and `size` is the size or number of unique items in the set.\n\n Note:\n `size` must be positive."}
{"_id": "q_3063", "text": "Create a new weighted MinHash given a weighted Jaccard vector.\n Each dimension is an integer \n frequency of the corresponding element in the multi-set represented\n by the vector.\n\n Args:\n v (numpy.array): The Jaccard vector."}
{"_id": "q_3064", "text": "Estimate the cardinality of the data values seen so far.\n\n Returns:\n int: The estimated cardinality."}
{"_id": "q_3065", "text": "Merge the other HyperLogLog with this one, making this the union of the\n two.\n\n Args:\n other (datasketch.HyperLogLog):"}
{"_id": "q_3066", "text": "Computes the average precision at k.\n\n This function computes the average precision at k between two lists of\n items.\n\n Parameters\n ----------\n actual : list\n A list of elements that are to be predicted (order doesn't matter)\n predicted : list\n A list of predicted elements (order does matter)\n k : int, optional\n The maximum number of predicted elements\n\n Returns\n -------\n score : double\n The average precision at k over the input lists"}
{"_id": "q_3067", "text": "Computes the mean average precision at k.\n\n This function computes the mean average precision at k between two lists\n of lists of items.\n\n Parameters\n ----------\n actual : list\n A list of lists of elements that are to be predicted\n (order doesn't matter in the lists)\n predicted : list\n A list of lists of predicted elements\n (order matters in the lists)\n k : int, optional\n The maximum number of predicted elements\n\n Returns\n -------\n score : double\n The mean average precision at k over the input lists"}
{"_id": "q_3068", "text": "Return the approximate top-k keys that have the highest\n Jaccard similarities to the query set.\n\n Args:\n minhash (datasketch.MinHash): The MinHash of the query set.\n k (int): The maximum number of keys to return.\n\n Returns:\n `list` of at most k keys."}
{"_id": "q_3069", "text": "Cleanup client resources and disconnect from AsyncMinHashLSH storage."}
{"_id": "q_3070", "text": "Return ordered storage system based on the specified config.\n\n The canonical example of such a storage container is\n ``defaultdict(list)``. Thus, the return value of this method contains\n keys and values. The values are ordered lists with the last added\n item at the end.\n\n Args:\n config (dict): Defines the configurations for the storage.\n For in-memory storage, the config ``{'type': 'dict'}`` will\n suffice. For Redis storage, the type should be ``'redis'`` and\n the configurations for the Redis database should be supplied\n under the key ``'redis'``. These parameters should be in a form\n suitable for `redis.Redis`. The parameters may alternatively\n contain references to environment variables, in which case\n literal configuration values should be replaced by dicts of\n the form::\n\n {'env': 'REDIS_HOSTNAME',\n 'default': 'localhost'}\n\n For a full example, see :ref:`minhash_lsh_at_scale`\n\n name (bytes, optional): A reference name for this storage container.\n For dict-type containers, this is ignored. For Redis containers,\n this name is used to prefix keys pertaining to this storage\n container within the database."}
{"_id": "q_3071", "text": "Return an unordered storage system based on the specified config.\n\n The canonical example of such a storage container is\n ``defaultdict(set)``. Thus, the return value of this method contains\n keys and values. The values are unordered sets.\n\n Args:\n config (dict): Defines the configurations for the storage.\n For in-memory storage, the config ``{'type': 'dict'}`` will\n suffice. For Redis storage, the type should be ``'redis'`` and\n the configurations for the Redis database should be supplied\n under the key ``'redis'``. These parameters should be in a form\n suitable for `redis.Redis`. The parameters may alternatively\n contain references to environment variables, in which case\n literal configuration values should be replaced by dicts of\n the form::\n\n {'env': 'REDIS_HOSTNAME',\n 'default': 'localhost'}\n\n For a full example, see :ref:`minhash_lsh_at_scale`\n\n name (bytes, optional): A reference name for this storage container.\n For dict-type containers, this is ignored. For Redis containers,\n this name is used to prefix keys pertaining to this storage\n container within the database."}
{"_id": "q_3072", "text": "Parses command strings and returns a Popen-ready list."}
{"_id": "q_3073", "text": "Executes a given command and returns Response.\n\n Blocks until process is complete, or timeout is reached."}
{"_id": "q_3074", "text": "Spawns a new process from the given command."}
{"_id": "q_3075", "text": "Sends a line to std_in."}
{"_id": "q_3076", "text": "Converts Py type to PyJs type"}
{"_id": "q_3077", "text": "note py_arr elems are NOT converted to PyJs types!"}
{"_id": "q_3078", "text": "note py_obj items are NOT converted to PyJs types!"}
{"_id": "q_3079", "text": "Adds op_code with specified args to tape"}
{"_id": "q_3080", "text": "Records locations of labels and compiles the code"}
{"_id": "q_3081", "text": "returns n digit string representation of the num"}
{"_id": "q_3082", "text": "Takes the replacement template and some info about the match and returns filled template"}
{"_id": "q_3083", "text": "what can be either name of the op, or node, or a list of statements."}
{"_id": "q_3084", "text": "Translates esprima syntax tree to python by delegating to appropriate translating node"}
{"_id": "q_3085", "text": "Decorator limiting resulting line length in order to avoid python parser stack overflow -\n If an expression is longer than LINE_LEN_LIMIT characters then it will be moved to the upper line\n USE ONLY ON EXPRESSIONS!!!"}
{"_id": "q_3086", "text": "Does not check whether t is not restricted or internal"}
{"_id": "q_3087", "text": "Translates input JS file to python and saves it to the output path.\n It appends some convenience code at the end so that it is easy to import JS objects.\n\n For example we have a file 'example.js' with: var a = function(x) {return x}\n translate_file('example.js', 'example.py')\n\n Now example.py can be easily imported and used:\n >>> from example import example\n >>> example.a(30)\n 30"}
{"_id": "q_3088", "text": "executes javascript js in current context\n\n During initial execute() the converted js is cached for re-use. That means next time you\n run the same javascript snippet you save many instructions needed to parse and convert the\n js code to python code.\n\n This cache causes minor overhead (a cache dict is updated) but the Js=>Py conversion process\n is typically expensive compared to actually running the generated python code.\n\n Note that the cache is just a dict, it has no expiration or cleanup so when running this\n in automated situations with vast amounts of snippets it might increase memory usage."}
{"_id": "q_3089", "text": "evaluates expression in current context and returns its value"}
{"_id": "q_3090", "text": "Don't use this method from inside bytecode to call other bytecode."}
{"_id": "q_3091", "text": "n may be the inside of block or object"}
{"_id": "q_3092", "text": "n may be the inside of block or object.\n last is the code before object"}
{"_id": "q_3093", "text": "returns True if regexp starts at n else returns False\n checks whether it is not a division"}
{"_id": "q_3094", "text": "Returns the first index >= start of a char not in charset"}
{"_id": "q_3095", "text": "checks if self is in other"}
{"_id": "q_3096", "text": "Set the social login process state to connect rather than login\n Refer to the implementation of get_social_login in base class and to the\n allauth.socialaccount.helpers module complete_social_login function."}
{"_id": "q_3097", "text": "Select the correct text from the Japanese number, reading and\n alternatives"}
{"_id": "q_3098", "text": "Download and extract processed data and embeddings."}
{"_id": "q_3099", "text": "Make a grid of images, via numpy.\n\n Args:\n tensor (Tensor or list): 4D mini-batch Tensor of shape (B x C x H x W)\n or a list of images all of the same size.\n nrow (int, optional): Number of images displayed in each row of the grid.\n The Final grid size is (B / nrow, nrow). Default is 8.\n padding (int, optional): amount of padding. Default is 2.\n pad_value (float, optional): Value for the padded pixels."}
{"_id": "q_3100", "text": "Save a given Tensor into an image file.\n\n Args:\n tensor (Tensor or list): Image to be saved. If given a mini-batch tensor,\n saves the tensor as a grid of images by calling ``make_grid``.\n **kwargs: Other arguments are documented in ``make_grid``."}
{"_id": "q_3101", "text": "Remove types from function arguments in cython"}
{"_id": "q_3102", "text": "Parse scoped selector."}
{"_id": "q_3103", "text": "Parse a single literal value.\n\n Returns:\n The parsed value."}
{"_id": "q_3104", "text": "Advances to next line."}
{"_id": "q_3105", "text": "Try to parse a configurable reference (@[scope/name/]fn_name[()])."}
{"_id": "q_3106", "text": "Convert an operative config string to markdown format."}
{"_id": "q_3107", "text": "Make sure `fn` can be wrapped cleanly by functools.wraps."}
{"_id": "q_3108", "text": "Decorate a function or class with the given decorator.\n\n When `fn_or_cls` is a function, applies `decorator` to the function and\n returns the (decorated) result.\n\n When `fn_or_cls` is a class and the `subclass` parameter is `False`, this will\n replace `fn_or_cls.__init__` with the result of applying `decorator` to it.\n\n When `fn_or_cls` is a class and `subclass` is `True`, this will subclass the\n class, but with `__init__` defined to be the result of applying `decorator` to\n `fn_or_cls.__init__`. The decorated class has metadata (docstring, name, and\n module information) copied over from `fn_or_cls`. The goal is to provide a\n decorated class the behaves as much like the original as possible, without\n modifying it (for example, inspection operations using `isinstance` or\n `issubclass` should behave the same way as on the original class).\n\n Args:\n decorator: The decorator to use.\n fn_or_cls: The function or class to decorate.\n subclass: Whether to decorate classes by subclassing. This argument is\n ignored if `fn_or_cls` is not a class.\n\n Returns:\n The decorated function or class."}
{"_id": "q_3109", "text": "Binds the parameter value specified by `binding_key` to `value`.\n\n The `binding_key` argument should either be a string of the form\n `maybe/scope/optional.module.names.configurable_name.parameter_name`, or a\n list or tuple of `(scope, selector, parameter_name)`, where `selector`\n corresponds to `optional.module.names.configurable_name`. Once this function\n has been called, subsequent calls (in the specified scope) to the specified\n configurable function will have `value` supplied to their `parameter_name`\n parameter.\n\n Example:\n\n @configurable('fully_connected_network')\n def network_fn(num_layers=5, units_per_layer=1024):\n ...\n\n def main(_):\n config.bind_parameter('fully_connected_network.num_layers', 3)\n network_fn() # Called with num_layers == 3, not the default of 5.\n\n Args:\n binding_key: The parameter whose value should be set. This can either be a\n string, or a tuple of the form `(scope, selector, parameter)`.\n value: The desired value.\n\n Raises:\n RuntimeError: If the config is locked.\n ValueError: If no function can be found matching the configurable name\n specified by `binding_key`, or if the specified parameter name is\n blacklisted or not in the function's whitelist (if present)."}
{"_id": "q_3110", "text": "Gets cached argspec for `fn`."}
{"_id": "q_3111", "text": "Returns the names of the supplied arguments to the given function."}
{"_id": "q_3112", "text": "Retrieve all default values for configurable parameters of a function.\n\n Any parameters included in the supplied blacklist, or not included in the\n supplied whitelist, are excluded.\n\n Args:\n fn: The function whose parameter values should be retrieved.\n whitelist: The whitelist (or `None`) associated with the function.\n blacklist: The blacklist (or `None`) associated with the function.\n\n Returns:\n A dictionary mapping configurable parameter names to their default values."}
{"_id": "q_3113", "text": "Decorator to make a function or class configurable.\n\n This decorator registers the decorated function/class as configurable, which\n allows its parameters to be supplied from the global configuration (i.e., set\n through `bind_parameter` or `parse_config`). The decorated function is\n associated with a name in the global configuration, which by default is simply\n the name of the function or class, but can be specified explicitly to avoid\n naming collisions or improve clarity.\n\n If some parameters should not be configurable, they can be specified in\n `blacklist`. If only a restricted set of parameters should be configurable,\n they can be specified in `whitelist`.\n\n The decorator can be used without any parameters as follows:\n\n @config.configurable\n def some_configurable_function(param1, param2='a default value'):\n ...\n\n In this case, the function is associated with the name\n `'some_configurable_function'` in the global configuration, and both `param1`\n and `param2` are configurable.\n\n The decorator can be supplied with parameters to specify the configurable name\n or supply a whitelist/blacklist:\n\n @config.configurable('explicit_configurable_name', whitelist='param2')\n def some_configurable_function(param1, param2='a default value'):\n ...\n\n In this case, the configurable is associated with the name\n `'explicit_configurable_name'` in the global configuration, and only `param2`\n is configurable.\n\n Classes can be decorated as well, in which case parameters of their\n constructors are made configurable:\n\n @config.configurable\n class SomeClass(object):\n def __init__(self, param1, param2='a default value'):\n ...\n\n In this case, the name of the configurable is `'SomeClass'`, and both `param1`\n and `param2` are configurable.\n\n Args:\n name_or_fn: A name for this configurable, or a function to decorate (in\n which case the name will be taken from that function). If not set,\n defaults to the name of the function/class that is being made\n configurable. If a name is provided, it may also include module components\n to be used for disambiguation (these will be appended to any components\n explicitly specified by `module`).\n module: The module to associate with the configurable, to help handle naming\n collisions. By default, the module of the function or class being made\n configurable will be used (if no module is specified as part of the name).\n whitelist: A whitelisted set of kwargs that should be configurable. All\n other kwargs will not be configurable. Only one of `whitelist` or\n `blacklist` should be specified.\n blacklist: A blacklisted set of kwargs that should not be configurable. All\n other kwargs will be configurable. Only one of `whitelist` or `blacklist`\n should be specified.\n\n Returns:\n When used with no parameters (or with a function/class supplied as the first\n parameter), it returns the decorated function or class. When used with\n parameters, it returns a function that can be applied to decorate the target\n function or class."}
{"_id": "q_3114", "text": "Retrieve the \"operative\" configuration as a config string.\n\n The operative configuration consists of all parameter values used by\n configurable functions that are actually called during execution of the\n current program. Parameters associated with configurable functions that are\n not called (and so can have no effect on program execution) won't be included.\n\n The goal of the function is to return a config that captures the full set of\n relevant configurable \"hyperparameters\" used by a program. As such, the\n returned configuration will include the default values of arguments from\n configurable functions (as long as the arguments aren't blacklisted or missing\n from a supplied whitelist), as well as any parameter values overridden via\n `bind_parameter` or through `parse_config`.\n\n Any parameters that can't be represented as literals (capable of being parsed\n by `parse_config`) are excluded. The resulting config string is sorted\n lexicographically and grouped by configurable name.\n\n Args:\n max_line_length: A (soft) constraint on the maximum length of a line in the\n formatted string. Large nested structures will be split across lines, but\n e.g. long strings won't be split into a concatenation of shorter strings.\n continuation_indent: The indentation for continued lines.\n\n Returns:\n A config string capturing all parameter values used by the current program."}
{"_id": "q_3115", "text": "Parse a list of config files followed by extra Gin bindings.\n\n This function is equivalent to:\n\n for config_file in config_files:\n gin.parse_config_file(config_file, skip_configurables)\n gin.parse_config(bindings, skip_configurables)\n if finalize_config:\n gin.finalize()\n\n Args:\n config_files: A list of paths to the Gin config files.\n bindings: A list of individual parameter binding strings.\n finalize_config: Whether to finalize the config after parsing and binding\n (defaults to True).\n skip_unknown: A boolean indicating whether unknown configurables and imports\n should be skipped instead of causing errors (alternatively a list of\n configurable names to skip if unknown). See `parse_config` for additional\n details."}
{"_id": "q_3116", "text": "Parse and return a single Gin value."}
{"_id": "q_3117", "text": "A function that should be called after parsing all Gin config files.\n\n Calling this function allows registered \"finalize hooks\" to inspect (and\n potentially modify) the Gin config, to provide additional functionality. Hooks\n should not modify the configuration object they receive directly; instead,\n they should return a dictionary mapping Gin binding keys to (new or updated)\n values. This way, all hooks see the config as originally parsed.\n\n Raises:\n RuntimeError: If the config is already locked.\n ValueError: If two or more hooks attempt to modify or introduce bindings for\n the same key. Since it is difficult to control the order in which hooks\n are registered, allowing this could yield unpredictable behavior."}
{"_id": "q_3118", "text": "Provides an iterator over references in the given config.\n\n Args:\n config: A dictionary mapping scoped configurable names to argument bindings.\n to: If supplied, only yield references whose `configurable_fn` matches `to`.\n\n Yields:\n `ConfigurableReference` instances within `config`, maybe restricted to those\n matching the `to` parameter if it is supplied."}
{"_id": "q_3119", "text": "Creates a constant that can be referenced from gin config files.\n\n After calling this function in Python, the constant can be referenced from\n within a Gin config file using the macro syntax. For example, in Python:\n\n gin.constant('THE_ANSWER', 42)\n\n Then, in a Gin config file:\n\n meaning.of_life = %THE_ANSWER\n\n Note that any Python object can be used as the value of a constant (including\n objects not representable as Gin literals). Values will be stored until\n program termination in a Gin-internal dictionary, so avoid creating constants\n with values that should have a limited lifetime.\n\n Optionally, a disambiguating module may be prefixed onto the constant\n name. For instance:\n\n gin.constant('some.modules.PI', 3.14159)\n\n Args:\n name: The name of the constant, possibly prepended by one or more\n disambiguating module components separated by periods. An macro with this\n name (including the modules) will be created.\n value: The value of the constant. This can be anything (including objects\n not representable as Gin literals). The value will be stored and returned\n whenever the constant is referenced.\n\n Raises:\n ValueError: If the constant's selector is invalid, or a constant with the\n given selector already exists."}
{"_id": "q_3120", "text": "Decorator for an enum class that generates Gin constants from values.\n\n Generated constants have format `module.ClassName.ENUM_VALUE`. The module\n name is optional when using the constant.\n\n Args:\n cls: Class type.\n module: The module to associate with the constants, to help handle naming\n collisions. If `None`, `cls.__module__` will be used.\n\n Returns:\n Class type (identity function).\n\n Raises:\n TypeError: When applied to a non-enum class."}
{"_id": "q_3121", "text": "Retrieves all selectors matching `partial_selector`.\n\n For instance, if \"one.a.b\" and \"two.a.b\" are stored in a `SelectorMap`, both\n `matching_selectors('b')` and `matching_selectors('a.b')` will return them.\n\n In the event that `partial_selector` exactly matches an existing complete\n selector, only that complete selector is returned. For instance, if\n \"a.b.c.d\" and \"c.d\" are stored, `matching_selectors('c.d')` will return only\n `['c.d']`, while `matching_selectors('d')` will return both.\n\n Args:\n partial_selector: The partial selector to find matches for.\n\n Returns:\n A list of selectors matching `partial_selector`."}
{"_id": "q_3122", "text": "Returns all values matching `partial_selector` as a list."}
{"_id": "q_3123", "text": "Returns the minimal selector that uniquely matches `complete_selector`.\n\n Args:\n complete_selector: A complete selector stored in the map.\n\n Returns:\n A partial selector that unambiguously matches `complete_selector`.\n\n Raises:\n KeyError: If `complete_selector` is not in the map."}
{"_id": "q_3124", "text": "Sets the access permissions of the map.\n\n :param perms: the new permissions."}
{"_id": "q_3125", "text": "Check if there is enough permissions for access"}
{"_id": "q_3126", "text": "Creates a new mapping in the memory address space.\n\n :param addr: the starting address (took as hint). If C{addr} is C{0} the first big enough\n chunk of memory will be selected as starting address.\n :param size: the length of the mapping.\n :param perms: the access permissions to this memory.\n :param data_init: optional data to initialize this memory.\n :param name: optional name to give to this mapping\n :return: the starting address where the memory was mapped.\n :raises error:\n - 'Address shall be concrete' if C{addr} is not an integer number.\n - 'Address too big' if C{addr} goes beyond the limit of the memory.\n - 'Map already used' if the piece of memory starting in C{addr} and with length C{size} isn't free.\n :rtype: int"}
{"_id": "q_3127", "text": "Translates a register ID from the disassembler object into the\n register name based on manticore's alias in the register file\n\n :param int reg_id: Register ID"}
{"_id": "q_3128", "text": "Dynamic interface for writing cpu registers\n\n :param str register: register name (as listed in `self.all_registers`)\n :param value: register value\n :type value: int or long or Expression"}
{"_id": "q_3129", "text": "Dynamic interface for reading cpu registers\n\n :param str register: register name (as listed in `self.all_registers`)\n :return: register value\n :rtype: int or long or Expression"}
{"_id": "q_3130", "text": "Selects bytes from memory. Attempts to do so faster than via read_bytes.\n\n :param where: address to read from\n :param size: number of bytes to read\n :return: the bytes in memory"}
{"_id": "q_3131", "text": "Reads int from memory\n\n :param int where: address to read from\n :param size: number of bits to read\n :return: the value read\n :rtype: int or BitVec\n :param force: whether to ignore memory permissions"}
{"_id": "q_3132", "text": "Read a NUL-terminated concrete buffer from memory. Stops reading at first symbolic byte.\n\n :param int where: Address to read string from\n :param int max_length:\n The size in bytes to cap the string at, or None [default] for no\n limit.\n :param force: whether to ignore memory permissions\n :return: string read\n :rtype: str"}
{"_id": "q_3133", "text": "Write `data` to the stack and decrement the stack pointer accordingly.\n\n :param str data: Data to write\n :param force: whether to ignore memory permissions"}
{"_id": "q_3134", "text": "Read `nbytes` from the stack, increment the stack pointer, and return\n data.\n\n :param int nbytes: How many bytes to read\n :param force: whether to ignore memory permissions\n :return: Data read from the stack"}
{"_id": "q_3135", "text": "Read a value from the stack and increment the stack pointer.\n\n :param force: whether to ignore memory permissions\n :return: Value read"}
{"_id": "q_3136", "text": "Decode, and execute one instruction pointed by register PC"}
{"_id": "q_3137", "text": "Notify listeners that an instruction has been executed."}
{"_id": "q_3138", "text": "If we could not handle emulating an instruction, use Unicorn to emulate\n it.\n\n :param capstone.CsInsn instruction: The instruction object to emulate"}
{"_id": "q_3139", "text": "remove decoded instruction from instruction cache"}
{"_id": "q_3140", "text": "CPUID instruction.\n\n The ID flag (bit 21) in the EFLAGS register indicates support for the\n CPUID instruction. If a software procedure can set and clear this\n flag, the processor executing the procedure supports the CPUID\n instruction. This instruction operates the same in non-64-bit modes and\n 64-bit mode. CPUID returns processor identification and feature\n information in the EAX, EBX, ECX, and EDX registers.\n\n The instruction's output is dependent on the contents of the EAX\n register upon execution.\n\n :param cpu: current CPU."}
{"_id": "q_3141", "text": "Logical inclusive OR.\n\n Performs a bitwise inclusive OR operation between the destination (first)\n and source (second) operands and stores the result in the destination operand location.\n\n Each bit of the result of the OR instruction is set to 0 if both corresponding\n bits of the first and second operands are 0; otherwise, each bit is set\n to 1.\n\n The OF and CF flags are cleared; the SF, ZF, and PF flags are set according to the result::\n\n DEST = DEST OR SRC;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3142", "text": "ASCII adjust AX after multiply.\n\n Adjusts the result of the multiplication of two unpacked BCD values\n to create a pair of unpacked (base 10) BCD values. The AX register is\n the implied source and destination operand for this instruction. The AAM\n instruction is only useful when it follows a MUL instruction that multiplies\n (binary multiplication) two unpacked BCD values and stores a word result\n in the AX register. The AAM instruction then adjusts the contents of the\n AX register to contain the correct 2-digit unpacked (base 10) BCD result.\n\n The SF, ZF, and PF flags are set according to the resulting binary value in the AL register.\n\n This instruction executes as described in compatibility mode and legacy mode.\n It is not valid in 64-bit mode.::\n\n tempAL = AL;\n AH = tempAL / 10;\n AL = tempAL MOD 10;\n\n :param cpu: current CPU."}
{"_id": "q_3143", "text": "ASCII Adjust AL after subtraction.\n\n Adjusts the result of the subtraction of two unpacked BCD values to create a unpacked\n BCD result. The AL register is the implied source and destination operand for this instruction.\n The AAS instruction is only useful when it follows a SUB instruction that subtracts\n (binary subtraction) one unpacked BCD value from another and stores a byte result in the AL\n register. The AAA instruction then adjusts the contents of the AL register to contain the\n correct 1-digit unpacked BCD result. If the subtraction produced a decimal carry, the AH register\n is decremented by 1, and the CF and AF flags are set. If no decimal carry occurred, the CF and AF\n flags are cleared, and the AH register is unchanged. In either case, the AL register is left with\n its top nibble set to 0.\n\n The AF and CF flags are set to 1 if there is a decimal borrow; otherwise, they are cleared to 0.\n\n This instruction executes as described in compatibility mode and legacy mode.\n It is not valid in 64-bit mode.::\n\n\n IF ((AL AND 0FH) > 9) Operators.OR(AF = 1)\n THEN\n AX = AX - 6;\n AH = AH - 1;\n AF = 1;\n CF = 1;\n ELSE\n CF = 0;\n AF = 0;\n FI;\n AL = AL AND 0FH;\n\n :param cpu: current CPU."}
{"_id": "q_3144", "text": "Adds with carry.\n\n Adds the destination operand (first operand), the source operand (second operand),\n and the carry (CF) flag and stores the result in the destination operand. The state\n of the CF flag represents a carry from a previous addition. When an immediate value\n is used as an operand, it is sign-extended to the length of the destination operand\n format. The ADC instruction does not distinguish between signed or unsigned operands.\n Instead, the processor evaluates the result for both data types and sets the OF and CF\n flags to indicate a carry in the signed or unsigned result, respectively. The SF flag\n indicates the sign of the signed result. The ADC instruction is usually executed as\n part of a multibyte or multiword addition in which an ADD instruction is followed by an\n ADC instruction::\n\n DEST = DEST + SRC + CF;\n\n The OF, SF, ZF, AF, CF, and PF flags are set according to the result.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3145", "text": "Compares and exchanges bytes.\n\n Compares the 64-bit value in EDX:EAX (or 128-bit value in RDX:RAX if\n operand size is 128 bits) with the operand (destination operand). If\n the values are equal, the 64-bit value in ECX:EBX (or 128-bit value in\n RCX:RBX) is stored in the destination operand. Otherwise, the value in\n the destination operand is loaded into EDX:EAX (or RDX:RAX)::\n\n IF (64-Bit Mode and OperandSize = 64)\n THEN\n IF (RDX:RAX = DEST)\n THEN\n ZF = 1;\n DEST = RCX:RBX;\n ELSE\n ZF = 0;\n RDX:RAX = DEST;\n FI\n ELSE\n IF (EDX:EAX = DEST)\n THEN\n ZF = 1;\n DEST = ECX:EBX;\n ELSE\n ZF = 0;\n EDX:EAX = DEST;\n FI;\n FI;\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3146", "text": "Decimal adjusts AL after addition.\n\n Adjusts the sum of two packed BCD values to create a packed BCD result. The AL register\n is the implied source and destination operand. If a decimal carry is detected, the CF\n and AF flags are set accordingly.\n The CF and AF flags are set if the adjustment of the value results in a decimal carry in\n either digit of the result. The SF, ZF, and PF flags are set according to the result.\n\n This instruction is not valid in 64-bit mode.::\n\n IF (((AL AND 0FH) > 9) or AF = 1)\n THEN\n AL = AL + 6;\n CF = CF OR CarryFromLastAddition; (* CF OR carry from AL = AL + 6 *)\n AF = 1;\n ELSE\n AF = 0;\n FI;\n IF (((AL AND F0H) > 90H) or CF = 1)\n THEN\n AL = AL + 60H;\n CF = 1;\n ELSE\n CF = 0;\n FI;\n\n :param cpu: current CPU."}
{"_id": "q_3147", "text": "Signed divide.\n\n Divides (signed) the value in the AL, AX, or EAX register by the source\n operand and stores the result in the AX, DX:AX, or EDX:EAX registers.\n The source operand can be a general-purpose register or a memory\n location. The action of this instruction depends on the operand size.::\n\n IF SRC = 0\n THEN #DE; (* divide error *)\n FI;\n IF OperandSize = 8 (* word/byte operation *)\n THEN\n temp = AX / SRC; (* signed division *)\n IF (temp > 7FH) Operators.OR(temp < 80H)\n (* if a positive result is greater than 7FH or a negative result is\n less than 80H *)\n THEN #DE; (* divide error *) ;\n ELSE\n AL = temp;\n AH = AX SignedModulus SRC;\n FI;\n ELSE\n IF OperandSize = 16 (* doubleword/word operation *)\n THEN\n temp = DX:AX / SRC; (* signed division *)\n IF (temp > 7FFFH) Operators.OR(temp < 8000H)\n (* if a positive result is greater than 7FFFH *)\n (* or a negative result is less than 8000H *)\n THEN #DE; (* divide error *) ;\n ELSE\n AX = temp;\n DX = DX:AX SignedModulus SRC;\n FI;\n ELSE (* quadword/doubleword operation *)\n temp = EDX:EAX / SRC; (* signed division *)\n IF (temp > 7FFFFFFFH) Operators.OR(temp < 80000000H)\n (* if a positive result is greater than 7FFFFFFFH *)\n (* or a negative result is less than 80000000H *)\n THEN #DE; (* divide error *) ;\n ELSE\n EAX = temp;\n EDX = EDX:EAX SignedModulus SRC;\n FI;\n FI;\n FI;\n\n :param cpu: current CPU.\n :param src: source operand."}
{"_id": "q_3148", "text": "Signed multiply.\n\n Performs a signed multiplication of two operands. This instruction has\n three forms, depending on the number of operands.\n - One-operand form. This form is identical to that used by the MUL\n instruction. Here, the source operand (in a general-purpose\n register or memory location) is multiplied by the value in the AL,\n AX, or EAX register (depending on the operand size) and the product\n is stored in the AX, DX:AX, or EDX:EAX registers, respectively.\n - Two-operand form. With this form the destination operand (the\n first operand) is multiplied by the source operand (second\n operand). The destination operand is a general-purpose register and\n the source operand is an immediate value, a general-purpose\n register, or a memory location. The product is then stored in the\n destination operand location.\n - Three-operand form. This form requires a destination operand (the\n first operand) and two source operands (the second and the third\n operands). Here, the first source operand (which can be a\n general-purpose register or a memory location) is multiplied by the\n second source operand (an immediate value). The product is then\n stored in the destination operand (a general-purpose register).\n\n When an immediate value is used as an operand, it is sign-extended to\n the length of the destination operand format. The CF and OF flags are\n set when significant bits are carried into the upper half of the\n result. The CF and OF flags are cleared when the result fits exactly in\n the lower half of the result. The three forms of the IMUL instruction\n are similar in that the length of the product is calculated to twice\n the length of the operands. With the one-operand form, the product is\n stored exactly in the destination. With the two- and three- operand\n forms, however, result is truncated to the length of the destination\n before it is stored in the destination register. Because of this\n truncation, the CF or OF flag should be tested to ensure that no\n significant bits are lost. The two- and three-operand forms may also be\n used with unsigned operands because the lower half of the product is\n the same regardless if the operands are signed or unsigned. The CF and\n OF flags, however, cannot be used to determine if the upper half of the\n result is non-zero::\n\n IF (NumberOfOperands == 1)\n THEN\n IF (OperandSize == 8)\n THEN\n AX = AL * SRC (* Signed multiplication *)\n IF AL == AX\n THEN\n CF = 0; OF = 0;\n ELSE\n CF = 1; OF = 1;\n FI;\n ELSE\n IF OperandSize == 16\n THEN\n DX:AX = AX * SRC (* Signed multiplication *)\n IF sign_extend_to_32 (AX) == DX:AX\n THEN\n CF = 0; OF = 0;\n ELSE\n CF = 1; OF = 1;\n FI;\n ELSE\n IF OperandSize == 32\n THEN\n EDX:EAX = EAX * SRC (* Signed multiplication *)\n IF EAX == EDX:EAX\n THEN\n CF = 0; OF = 0;\n ELSE\n CF = 1; OF = 1;\n FI;\n ELSE (* OperandSize = 64 *)\n RDX:RAX = RAX * SRC (* Signed multiplication *)\n IF RAX == RDX:RAX\n THEN\n CF = 0; OF = 0;\n ELSE\n CF = 1; OF = 1;\n FI;\n FI;\n FI;\n ELSE\n IF (NumberOfOperands = 2)\n THEN\n temp = DEST * SRC (* Signed multiplication; temp is double DEST size *)\n DEST = DEST * SRC (* Signed multiplication *)\n IF temp != DEST\n THEN\n CF = 1; OF = 1;\n ELSE\n CF = 0; OF = 0;\n FI;\n ELSE (* NumberOfOperands = 3 *)\n DEST = SRC1 * SRC2 (* Signed multiplication *)\n temp = SRC1 * SRC2 (* Signed multiplication; temp is double SRC1 size *)\n IF temp != DEST\n THEN\n CF = 1; OF = 1;\n ELSE\n CF = 0; OF = 0;\n FI;\n FI;\n FI;\n\n :param cpu: current CPU.\n :param operands: variable list of operands."}
{"_id": "q_3149", "text": "Unsigned multiply.\n\n Performs an unsigned multiplication of the first operand (destination\n operand) and the second operand (source operand) and stores the result\n in the destination operand. The destination operand is an implied operand\n located in register AL, AX or EAX (depending on the size of the operand);\n the source operand is located in a general-purpose register or a memory location.\n\n The result is stored in register AX, register pair DX:AX, or register\n pair EDX:EAX (depending on the operand size), with the high-order bits\n of the product contained in register AH, DX, or EDX, respectively. If\n the high-order bits of the product are 0, the CF and OF flags are cleared;\n otherwise, the flags are set::\n\n IF byte operation\n THEN\n AX = AL * SRC\n ELSE (* word or doubleword operation *)\n IF OperandSize = 16\n THEN\n DX:AX = AX * SRC\n ELSE (* OperandSize = 32 *)\n EDX:EAX = EAX * SRC\n FI;\n FI;\n\n :param cpu: current CPU.\n :param src: source operand."}
{"_id": "q_3150", "text": "Two's complement negation.\n\n Replaces the value of operand (the destination operand) with its two's complement.\n (This operation is equivalent to subtracting the operand from 0.) The destination operand is\n located in a general-purpose register or a memory location::\n\n IF DEST = 0\n THEN CF = 0\n ELSE CF = 1;\n FI;\n DEST = - (DEST)\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3151", "text": "Integer subtraction with borrow.\n\n Adds the source operand (second operand) and the carry (CF) flag, and\n subtracts the result from the destination operand (first operand). The\n result of the subtraction is stored in the destination operand. The\n destination operand can be a register or a memory location; the source\n operand can be an immediate, a register, or a memory location.\n (However, two memory operands cannot be used in one instruction.) The\n state of the CF flag represents a borrow from a previous subtraction.\n When an immediate value is used as an operand, it is sign-extended to\n the length of the destination operand format.\n The SBB instruction does not distinguish between signed or unsigned\n operands. Instead, the processor evaluates the result for both data\n types and sets the OF and CF flags to indicate a borrow in the signed\n or unsigned result, respectively. The SF flag indicates the sign of the\n signed result. The SBB instruction is usually executed as part of a\n multibyte or multiword subtraction in which a SUB instruction is\n followed by a SBB instruction::\n\n DEST = DEST - (SRC + CF);\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3152", "text": "Exchanges and adds.\n\n Exchanges the first operand (destination operand) with the second operand\n (source operand), then loads the sum of the two values into the destination\n operand. The destination operand can be a register or a memory location;\n the source operand is a register.\n This instruction can be used with a LOCK prefix::\n\n TEMP = SRC + DEST\n SRC = DEST\n DEST = TEMP\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3153", "text": "Byte swap.\n\n Reverses the byte order of a 32-bit (destination) register: bits 0 through\n 7 are swapped with bits 24 through 31, and bits 8 through 15 are swapped\n with bits 16 through 23. This instruction is provided for converting little-endian\n values to big-endian format and vice versa.\n To swap bytes in a word value (16-bit register), use the XCHG instruction.\n When the BSWAP instruction references a 16-bit register, the result is\n undefined::\n\n TEMP = DEST\n DEST[7..0] = TEMP[31..24]\n DEST[15..8] = TEMP[23..16]\n DEST[23..16] = TEMP[15..8]\n DEST[31..24] = TEMP[7..0]\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3154", "text": "Conditional move - Greater.\n\n Tests the status flags in the EFLAGS register and moves the source operand\n (second operand) to the destination operand (first operand) if the given\n test condition is true.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3155", "text": "Conditional move - Overflow.\n\n Tests the status flags in the EFLAGS register and moves the source operand\n (second operand) to the destination operand (first operand) if the given\n test condition is true.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3156", "text": "Conditional move - Not overflow.\n\n Tests the status flags in the EFLAGS register and moves the source operand\n (second operand) to the destination operand (first operand) if the given\n test condition is true.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3157", "text": "Loads status flags into AH register.\n\n Moves the low byte of the EFLAGS register (which includes status flags\n SF, ZF, AF, PF, and CF) to the AH register. Reserved bits 1, 3, and 5\n of the EFLAGS register are set in the AH register::\n\n AH = EFLAGS(SF:ZF:0:AF:0:PF:1:CF);\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3158", "text": "Loads effective address.\n\n Computes the effective address of the second operand (the source operand) and stores it in the first operand\n (destination operand). The source operand is a memory address (offset part) specified with one of the processors\n addressing modes; the destination operand is a general-purpose register. The address-size and operand-size\n attributes affect the action performed by this instruction. The operand-size\n attribute of the instruction is determined by the chosen register; the address-size attribute is determined by the\n attribute of the code segment.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3159", "text": "Moves data after swapping bytes.\n\n Performs a byte swap operation on the data copied from the second operand (source operand) and store the result\n in the first operand (destination operand). The source operand can be a general-purpose register, or memory location; the destination register can be a general-purpose register, or a memory location; however, both operands can\n not be registers, and only one operand can be a memory location. Both operands must be the same size, which can\n be a word, a doubleword or quadword.\n The MOVBE instruction is provided for swapping the bytes on a read from memory or on a write to memory; thus\n providing support for converting little-endian values to big-endian format and vice versa.\n In 64-bit mode, the instruction's default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15). Use of the REX.W prefix promotes operation to 64 bits::\n\n TEMP = SRC\n IF ( OperandSize = 16)\n THEN\n DEST[7:0] = TEMP[15:8];\n DEST[15:8] = TEMP[7:0];\n ELSE IF ( OperandSize = 32)\n DEST[7:0] = TEMP[31:24];\n DEST[15:8] = TEMP[23:16];\n DEST[23:16] = TEMP[15:8];\n DEST[31:24] = TEMP[7:0];\n ELSE IF ( OperandSize = 64)\n DEST[7:0] = TEMP[63:56];\n DEST[15:8] = TEMP[55:48];\n DEST[23:16] = TEMP[47:40];\n DEST[31:24] = TEMP[39:32];\n DEST[39:32] = TEMP[31:24];\n DEST[47:40] = TEMP[23:16];\n DEST[55:48] = TEMP[15:8];\n DEST[63:56] = TEMP[7:0];\n FI;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3160", "text": "Stores AH into flags.\n\n Loads the SF, ZF, AF, PF, and CF flags of the EFLAGS register with values\n from the corresponding bits in the AH register (bits 7, 6, 4, 2, and 0,\n respectively). Bits 1, 3, and 5 of register AH are ignored; the corresponding\n reserved bits (1, 3, and 5) in the EFLAGS register remain as shown below::\n\n EFLAGS(SF:ZF:0:AF:0:PF:1:CF) = AH;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3161", "text": "Sets byte if below or equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3162", "text": "Sets if carry.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3163", "text": "Sets byte if equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3164", "text": "Sets byte if greater or equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3165", "text": "Sets byte if not above or equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3166", "text": "Sets byte if not below.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3167", "text": "Sets byte if not less or equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3168", "text": "Sets byte if not sign.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3169", "text": "Sets byte if not zero.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3170", "text": "Sets byte if overflow.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3171", "text": "Sets byte if parity.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3172", "text": "Sets byte if parity odd.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3173", "text": "Sets byte if sign.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3174", "text": "Sets byte if zero.\n\n :param cpu: current CPU.\n :param dest: destination operand."}
{"_id": "q_3175", "text": "High level procedure exit.\n\n Releases the stack frame set up by an earlier ENTER instruction. The\n LEAVE instruction copies the frame pointer (in the EBP register) into\n the stack pointer register (ESP), which releases the stack space allocated\n to the stack frame. The old frame pointer (the frame pointer for the calling\n procedure that was saved by the ENTER instruction) is then popped from\n the stack into the EBP register, restoring the calling procedure's stack\n frame.\n A RET instruction is commonly executed following a LEAVE instruction\n to return program control to the calling procedure::\n\n IF Stackaddress_bit_size = 32\n THEN\n ESP = EBP;\n ELSE (* Stackaddress_bit_size = 16*)\n SP = BP;\n FI;\n IF OperandSize = 32\n THEN\n EBP = Pop();\n ELSE (* OperandSize = 16*)\n BP = Pop();\n FI;\n\n :param cpu: current CPU."}
{"_id": "q_3176", "text": "Pushes a value onto the stack.\n\n Decrements the stack pointer and then stores the source operand on the top of the stack.\n\n :param cpu: current CPU.\n :param src: source operand."}
{"_id": "q_3177", "text": "Procedure call.\n\n Saves procedure linking information on the stack and branches to the called procedure specified using the target\n operand. The target operand specifies the address of the first instruction in the called procedure. The operand can\n be an immediate value, a general-purpose register, or a memory location.\n\n :param cpu: current CPU.\n :param op0: target operand."}
{"_id": "q_3178", "text": "Returns from procedure.\n\n Transfers program control to a return address located on the top of\n the stack. The address is usually placed on the stack by a CALL instruction,\n and the return is made to the instruction that follows the CALL instruction.\n The optional source operand specifies the number of stack bytes to be\n released after the return address is popped; the default is none.\n\n :param cpu: current CPU.\n :param operands: variable operands list."}
{"_id": "q_3179", "text": "Jumps short if above.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3180", "text": "Jumps short if below.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3181", "text": "Jumps short if below or equal.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3182", "text": "Jumps short if carry.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3183", "text": "Jumps short if CX register is 0.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3184", "text": "Jumps short if ECX register is 0.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3185", "text": "Jumps short if greater.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3186", "text": "Jumps short if greater or equal.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3187", "text": "Jumps short if not equal.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3188", "text": "Jumps short if not parity.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3189", "text": "Jumps short if overflow.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3190", "text": "Jumps short if sign.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3191", "text": "Jumps short if zero.\n\n :param cpu: current CPU.\n :param target: destination operand."}
{"_id": "q_3192", "text": "Rotates through carry left.\n\n Shifts (rotates) the bits of the first operand (destination operand) the number of bit positions specified in the\n second operand (count operand) and stores the result in the destination operand. The destination operand can be\n a register or a memory location; the count operand is an unsigned integer that can be an immediate or a value in\n the CL register. In legacy and compatibility mode, the processor restricts the count to a number between 0 and 31\n by masking all the bits in the count operand except the 5 least-significant bits.\n\n The RCL instruction shifts the CF flag into the least-significant bit and shifts the most-significant bit into the CF flag.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: count operand."}
{"_id": "q_3193", "text": "Shift arithmetic right.\n\n The shift arithmetic right (SAR) and shift logical right (SHR) instructions shift the bits of the destination operand to\n the right (toward less significant bit locations). For each shift count, the least significant bit of the destination\n operand is shifted into the CF flag, and the most significant bit is either set or cleared depending on the instruction\n type. The SHR instruction clears the most significant bit. the SAR instruction sets or clears the most significant bit\n to correspond to the sign (most significant bit) of the original value in the destination operand. In effect, the SAR\n instruction fills the empty bit position's shifted value with the sign of the unshifted value\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3194", "text": "Shift logical right.\n\n The shift arithmetic right (SAR) and shift logical right (SHR)\n instructions shift the bits of the destination operand to the right\n (toward less significant bit locations). For each shift count, the\n least significant bit of the destination operand is shifted into the CF\n flag, and the most significant bit is either set or cleared depending\n on the instruction type. The SHR instruction clears the most\n significant bit.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: count operand."}
{"_id": "q_3195", "text": "Double precision shift right.\n\n Shifts the first operand (destination operand) to the left the number of bits specified by the third operand\n (count operand). The second operand (source operand) provides bits to shift in from the right (starting with\n the least significant bit of the destination operand).\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand.\n :param count: count operand"}
{"_id": "q_3196", "text": "Bit scan forward.\n\n Searches the source operand (second operand) for the least significant\n set bit (1 bit). If a least significant 1 bit is found, its bit index\n is stored in the destination operand (first operand). The source operand\n can be a register or a memory location; the destination operand is a register.\n The bit index is an unsigned offset from bit 0 of the source operand.\n If the contents source operand are 0, the contents of the destination\n operand is undefined::\n\n IF SRC = 0\n THEN\n ZF = 1;\n DEST is undefined;\n ELSE\n ZF = 0;\n temp = 0;\n WHILE Bit(SRC, temp) = 0\n DO\n temp = temp + 1;\n DEST = temp;\n OD;\n FI;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3197", "text": "Bit scan reverse.\n\n Searches the source operand (second operand) for the most significant\n set bit (1 bit). If a most significant 1 bit is found, its bit index is\n stored in the destination operand (first operand). The source operand\n can be a register or a memory location; the destination operand is a register.\n The bit index is an unsigned offset from bit 0 of the source operand.\n If the contents source operand are 0, the contents of the destination\n operand is undefined::\n\n IF SRC = 0\n THEN\n ZF = 1;\n DEST is undefined;\n ELSE\n ZF = 0;\n temp = OperandSize - 1;\n WHILE Bit(SRC, temp) = 0\n DO\n temp = temp - 1;\n DEST = temp;\n OD;\n FI;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3198", "text": "Bit test and complement.\n\n Selects the bit in a bit string (specified with the first operand, called\n the bit base) at the bit-position designated by the bit offset operand\n (second operand), stores the value of the bit in the CF flag, and complements\n the selected bit in the bit string.\n\n :param cpu: current CPU.\n :param dest: bit base operand.\n :param src: bit offset operand."}
{"_id": "q_3199", "text": "Loads string.\n\n Loads a byte, word, or doubleword from the source operand into the AL, AX, or EAX register, respectively. The\n source operand is a memory location, the address of which is read from the DS:ESI or the DS:SI registers\n (depending on the address-size attribute of the instruction, 32 or 16, respectively). The DS segment may be over-\n ridden with a segment override prefix.\n After the byte, word, or doubleword is transferred from the memory location into the AL, AX, or EAX register, the\n (E)SI register is incremented or decremented automatically according to the setting of the DF flag in the EFLAGS\n register. (If the DF flag is 0, the (E)SI register is incremented; if the DF flag is 1, the ESI register is decremented.)\n The (E)SI register is incremented or decremented by 1 for byte operations, by 2 for word operations, or by 4 for\n doubleword operations.\n\n :param cpu: current CPU.\n :param dest: source operand."}
{"_id": "q_3200", "text": "Moves data from string to string.\n\n Moves the byte, word, or doubleword specified with the second operand (source operand) to the location specified\n with the first operand (destination operand). Both the source and destination operands are located in memory. The\n address of the source operand is read from the DS:ESI or the DS:SI registers (depending on the address-size\n attribute of the instruction, 32 or 16, respectively). The address of the destination operand is read from the ES:EDI\n or the ES:DI registers (again depending on the address-size attribute of the instruction). The DS segment may be\n overridden with a segment override prefix, but the ES segment cannot be overridden.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3201", "text": "Stores String.\n\n Stores a byte, word, or doubleword from the AL, AX, or EAX register,\n respectively, into the destination operand. The destination operand is\n a memory location, the address of which is read from either the ES:EDI\n or the ES:DI registers (depending on the address-size attribute of the\n instruction, 32 or 16, respectively). The ES segment cannot be overridden\n with a segment override prefix.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3202", "text": "The shift arithmetic right.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: count operand."}
{"_id": "q_3203", "text": "Packed shuffle words.\n\n Copies doublewords from source operand (second operand) and inserts them in the destination operand\n (first operand) at locations selected with the order operand (third operand).\n\n :param cpu: current CPU.\n :param op0: destination operand.\n :param op1: source operand.\n :param op3: order operand."}
{"_id": "q_3204", "text": "Packed shuffle doublewords.\n\n Copies doublewords from source operand (second operand) and inserts them in the destination operand\n (first operand) at locations selected with the order operand (third operand).\n\n :param cpu: current CPU.\n :param op0: destination operand.\n :param op1: source operand.\n :param op3: order operand."}
{"_id": "q_3205", "text": "Moves byte mask to general-purpose register.\n\n Creates an 8-bit mask made up of the most significant bit of each byte of the source operand\n (second operand) and stores the result in the low byte or word of the destination operand\n (first operand). The source operand is an MMX(TM) technology or an XXM register; the destination\n operand is a general-purpose register.\n\n :param cpu: current CPU.\n :param op0: destination operand.\n :param op1: source operand."}
{"_id": "q_3206", "text": "Packed shift right logical double quadword.\n\n Shifts the destination operand (first operand) to the right by the number\n of bytes specified in the count operand (second operand). The empty high-order\n bytes are cleared (set to all 0s). If the value specified by the count\n operand is greater than 15, the destination operand is set to all 0s.\n The destination operand is an XMM register. The count operand is an 8-bit\n immediate::\n\n TEMP = SRC;\n if (TEMP > 15) TEMP = 16;\n DEST = DEST >> (temp * 8);\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: count operand."}
{"_id": "q_3207", "text": "Moves with sign-extension.\n\n Copies the contents of the source operand (register or memory location) to the destination\n operand (register) and sign extends the value to 16::\n\n OP0 = SignExtend(OP1);\n\n :param cpu: current CPU.\n :param op0: destination operand.\n :param op1: source operand."}
{"_id": "q_3208", "text": "Converts word to doubleword.\n\n ::\n DX = sign-extend of AX.\n\n :param cpu: current CPU."}
{"_id": "q_3209", "text": "Reads time-stamp counter.\n\n Loads the current value of the processor's time-stamp counter into the\n EDX:EAX registers. The time-stamp counter is contained in a 64-bit\n MSR. The high-order 32 bits of the MSR are loaded into the EDX\n register, and the low-order 32 bits are loaded into the EAX register.\n The processor increments the time-stamp counter MSR every clock cycle\n and resets it to 0 whenever the processor is reset.\n\n :param cpu: current CPU."}
{"_id": "q_3210", "text": "Moves low packed double-precision floating-point value.\n\n Moves a double-precision floating-point value from the source operand (second operand) and the\n destination operand (first operand). The source and destination operands can be an XMM register\n or a 64-bit memory location. This instruction allows double-precision floating-point values to be moved\n to and from the low quadword of an XMM register and memory. It cannot be used for register to register\n or memory to memory moves. When the destination operand is an XMM register, the high quadword of the\n register remains unchanged.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3211", "text": "Moves high packed double-precision floating-point value.\n\n Moves a double-precision floating-point value from the source operand (second operand) and the\n destination operand (first operand). The source and destination operands can be an XMM register\n or a 64-bit memory location. This instruction allows double-precision floating-point values to be moved\n to and from the high quadword of an XMM register and memory. It cannot be used for register to\n register or memory to memory moves. When the destination operand is an XMM register, the low quadword\n of the register remains unchanged.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3212", "text": "Packed subtract.\n\n Performs a SIMD subtract of the packed integers of the source operand (second operand) from the packed\n integers of the destination operand (first operand), and stores the packed integer results in the\n destination operand. The source operand can be an MMX(TM) technology register or a 64-bit memory location,\n or it can be an XMM register or a 128-bit memory location. The destination operand can be an MMX or an XMM\n register.\n The PSUBB instruction subtracts packed byte integers. When an individual result is too large or too small\n to be represented in a byte, the result is wrapped around and the low 8 bits are written to the\n destination element.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3213", "text": "Move quadword.\n\n Copies a quadword from the source operand (second operand) to the destination operand (first operand).\n The source and destination operands can be MMX(TM) technology registers, XMM registers, or 64-bit memory\n locations. This instruction can be used to move a between two MMX registers or between an MMX register\n and a 64-bit memory location, or to move data between two XMM registers or between an XMM register and\n a 64-bit memory location. The instruction cannot be used to transfer data between memory locations.\n When the source operand is an XMM register, the low quadword is moved; when the destination operand is\n an XMM register, the quadword is stored to the low quadword of the register, and the high quadword is\n cleared to all 0s::\n\n MOVQ instruction when operating on MMX registers and memory locations:\n\n DEST = SRC;\n\n MOVQ instruction when source and destination operands are XMM registers:\n\n DEST[63-0] = SRC[63-0];\n\n MOVQ instruction when source operand is XMM register and destination operand is memory location:\n\n DEST = SRC[63-0];\n\n MOVQ instruction when source operand is memory location and destination operand is XMM register:\n\n DEST[63-0] = SRC;\n DEST[127-64] = 0000000000000000H;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3214", "text": "Move Scalar Double-Precision Floating-Point Value\n\n Moves a scalar double-precision floating-point value from the source\n operand (second operand) to the destination operand (first operand).\n The source and destination operands can be XMM registers or 64-bit memory\n locations. This instruction can be used to move a double-precision\n floating-point value to and from the low quadword of an XMM register and\n a 64-bit memory location, or to move a double-precision floating-point\n value between the low quadwords of two XMM registers. The instruction\n cannot be used to transfer data between memory locations.\n When the source and destination operands are XMM registers, the high\n quadword of the destination operand remains unchanged. When the source\n operand is a memory location and destination operand is an XMM registers,\n the high quadword of the destination operand is cleared to all 0s.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."}
{"_id": "q_3215", "text": "Moves a scalar single-precision floating-point value\n\n Moves a scalar single-precision floating-point value from the source operand (second operand)\n to the destination operand (first operand). The source and destination operands can be XMM\n registers or 32-bit memory locations. This instruction can be used to move a single-precision\n floating-point value to and from the low doubleword of an XMM register and a 32-bit memory\n location, or to move a single-precision floating-point value between the low doublewords of\n two XMM registers. The instruction cannot be used to transfer data between memory locations.\n When the source and destination operands are XMM registers, the three high-order doublewords of the\n destination operand remain unchanged. When the source operand is a memory location and destination\n operand is an XMM registers, the three high-order doublewords of the destination operand are cleared to all 0s.\n\n //MOVSS instruction when source and destination operands are XMM registers:\n if(IsXMM(Source) && IsXMM(Destination))\n Destination[0..31] = Source[0..31];\n //Destination[32..127] remains unchanged\n //MOVSS instruction when source operand is XMM register and destination operand is memory location:\n else if(IsXMM(Source) && IsMemory(Destination))\n Destination = Source[0..31];\n //MOVSS instruction when source operand is memory location and destination operand is XMM register:\n else {\n Destination[0..31] = Source;\n Destination[32..127] = 0;\n }"}
{"_id": "q_3216", "text": "Constrain state.\n\n :param manticore.core.smtlib.Bool constraint: Constraint to add"}
{"_id": "q_3217", "text": "Create and return a symbolic buffer of length `nbytes`. The buffer is\n not written into State's memory; write it to the state's memory to\n introduce it into the program state.\n\n :param int nbytes: Length of the new buffer\n :param str label: (keyword arg only) The label to assign to the buffer\n :param bool cstring: (keyword arg only) Whether or not to enforce that the buffer is a cstring\n (i.e. no NULL bytes, except for the last byte). (bool)\n :param taint: Taint identifier of the new buffer\n :type taint: tuple or frozenset\n\n :return: :class:`~manticore.core.smtlib.expression.Expression` representing the buffer."}
{"_id": "q_3218", "text": "Create and return a symbolic value that is `nbits` bits wide. Assign\n the value to a register or write it into the address space to introduce\n it into the program state.\n\n :param int nbits: The bitwidth of the value returned\n :param str label: The label to assign to the value\n :param taint: Taint identifier of this value\n :type taint: tuple or frozenset\n :return: :class:`~manticore.core.smtlib.expression.Expression` representing the value"}
{"_id": "q_3219", "text": "Reads `nbytes` of symbolic data from a buffer in memory at `addr` and attempts to\n concretize it\n\n :param int address: Address of buffer to concretize\n :param int nbytes: Size of buffer to concretize\n :param bool constrain: If True, constrain the buffer to the concretized value\n :return: Concrete contents of buffer\n :rtype: list[int]"}
{"_id": "q_3220", "text": "Check if expression is True and that it can not be False with current constraints"}
{"_id": "q_3221", "text": "Iteratively finds the minimum value for a symbol within given constraints.\n\n :param constraints: constraints that the expression must fulfil\n :param X: a symbol or expression\n :param M: maximum number of iterations allowed"}
{"_id": "q_3222", "text": "Spawns z3 solver process"}
{"_id": "q_3223", "text": "Auxiliary method to reset the smtlib external solver to initial defaults"}
{"_id": "q_3224", "text": "Send a string to the solver.\n\n :param cmd: a SMTLIBv2 command (ex. (check-sat))"}
{"_id": "q_3225", "text": "Reads the response from the solver"}
{"_id": "q_3226", "text": "Check the satisfiability of the current state\n\n :return: whether current state is satisfiable or not."}
{"_id": "q_3227", "text": "Auxiliary method to send an assert"}
{"_id": "q_3228", "text": "Ask the solver for one possible assignment for given expression using current set of constraints.\n The current set of expressions must be sat.\n\n NOTE: This is an internal method: it uses the current solver state (set of constraints!)."}
{"_id": "q_3229", "text": "Check if two potentially symbolic values can be equal"}
{"_id": "q_3230", "text": "Returns a list with all the possible values for the symbol x"}
{"_id": "q_3231", "text": "Ask the solver for one possible result of given expression using given set of constraints."}
{"_id": "q_3232", "text": "Colors the logging level in the logging record"}
{"_id": "q_3233", "text": "Helper for finding the closest NULL or, effectively NULL byte from a starting address.\n\n :param Cpu cpu:\n :param ConstraintSet constrs: Constraints for current `State`\n :param int ptr: Address to start searching for a zero from\n :return: Offset from `ptr` to first byte that is 0 or an `Expression` that must be zero"}
{"_id": "q_3234", "text": "Return all events that all subclasses have so far registered to publish."}
{"_id": "q_3235", "text": "Returns a pstat.Stats instance with profiling results if `run` was called with `should_profile=True`.\n Otherwise, returns `None`."}
{"_id": "q_3236", "text": "Runs analysis.\n\n :param int procs: Number of parallel worker processes\n :param timeout: Analysis timeout, in seconds"}
{"_id": "q_3237", "text": "Enqueue it for processing"}
{"_id": "q_3238", "text": "Dequeue a state with the max priority"}
{"_id": "q_3239", "text": "Fork state on expression concretizations.\n Using policy build a list of solutions for expression.\n For the state on each solution setting the new state with setstate\n\n For example if expression is a Bool it may have 2 solutions. True or False.\n\n Parent\n (expression = ??)\n\n Child1 Child2\n (expression = True) (expression = True)\n setstate(True) setstate(False)\n\n The optional setstate() function is supposed to set the concrete value\n in the child state."}
{"_id": "q_3240", "text": "Entry point of the Executor; called by workers to start analysis."}
{"_id": "q_3241", "text": "Constructor for Decree binary analysis.\n\n :param str path: Path to binary to analyze\n :param str concrete_start: Concrete stdin to use before symbolic input\n :param kwargs: Forwarded to the Manticore constructor\n :return: Manticore instance, initialized with a Decree State\n :rtype: Manticore"}
{"_id": "q_3242", "text": "Invoke all registered generic hooks"}
{"_id": "q_3243", "text": "A helper method used to resolve a symbol name into a memory address when\n injecting hooks for analysis.\n\n :param symbol: function name to be resolved\n :type symbol: string\n\n :param line: if more functions present, optional line number can be included\n :type line: int or None"}
{"_id": "q_3244", "text": "helper method for getting all binary symbols with SANDSHREW_ prepended.\n We do this in order to provide the symbols Manticore should hook on to\n perform main analysis.\n\n :param binary: str for binary to instrospect.\n :rtype list: list of symbols from binary"}
{"_id": "q_3245", "text": "Get a configuration variable group named |name|"}
{"_id": "q_3246", "text": "Save current config state to an yml file stream identified by |f|\n\n :param f: where to write the config file"}
{"_id": "q_3247", "text": "Load an yml-formatted configuration from file stream |f|\n\n :param file f: Where to read the config."}
{"_id": "q_3248", "text": "Load config overrides from the yml file at |path|, or from default paths. If a path\n is provided and it does not exist, raise an exception\n\n Default paths: ./mcore.yml, ./.mcore.yml, ./manticore.yml, ./.manticore.yml."}
{"_id": "q_3249", "text": "Bring in provided config values to the args parser, and import entries to the config\n from all arguments that were actually passed on the command line\n\n :param parser: The arg parser\n :param args: The value that parser.parse_args returned"}
{"_id": "q_3250", "text": "Like add, but can tolerate existing values; also updates the value.\n\n Mostly used for setting fields from imported INI files and modified CLI flags."}
{"_id": "q_3251", "text": "Return the description, or a help string of variable identified by |name|."}
{"_id": "q_3252", "text": "Returns the tuple type signature for the arguments of the contract constructor."}
{"_id": "q_3253", "text": "Returns a copy of the Solidity JSON ABI item for the contract constructor.\n\n The content of the returned dict is described at https://solidity.readthedocs.io/en/latest/abi-spec.html#json_"}
{"_id": "q_3254", "text": "Returns a copy of the Solidity JSON ABI item for the function associated with the selector ``hsh``.\n\n If no normal contract function has the specified selector, a dict describing the default or non-default\n fallback function is returned.\n\n The content of the returned dict is described at https://solidity.readthedocs.io/en/latest/abi-spec.html#json_"}
{"_id": "q_3255", "text": "Returns the tuple type signature for the arguments of the function associated with the selector ``hsh``.\n\n If no normal contract function has the specified selector,\n the empty tuple type signature ``'()'`` is returned."}
{"_id": "q_3256", "text": "Returns the signature of the normal function with the selector ``hsh``,\n or ``None`` if no such function exists.\n\n This function returns ``None`` for any selector that will be dispatched to a fallback function."}
{"_id": "q_3257", "text": "Catches did_map_memory and copies the mapping into Manticore"}
{"_id": "q_3258", "text": "Unmap Unicorn maps when Manticore unmaps them"}
{"_id": "q_3259", "text": "Set memory protections in Unicorn correctly"}
{"_id": "q_3260", "text": "Unicorn hook that transfers control to Manticore so it can execute the syscall"}
{"_id": "q_3261", "text": "Wrapper that runs the _step function in a loop while handling exceptions"}
{"_id": "q_3262", "text": "Copy registers and written memory back into Manticore"}
{"_id": "q_3263", "text": "Copy memory writes from Manticore back into Unicorn in real-time"}
{"_id": "q_3264", "text": "Sync register state from Manticore -> Unicorn"}
{"_id": "q_3265", "text": "Only useful for setting FS right now."}
{"_id": "q_3266", "text": "A decorator for marking functions as deprecated."}
{"_id": "q_3267", "text": "Produce permutations of `lst`, where permutations are mutated by `func`. Used for flipping constraints. highly\n possible that returned constraints can be unsat this does it blindly, without any attention to the constraints\n themselves\n\n Considering lst as a list of constraints, e.g.\n\n [ C1, C2, C3 ]\n\n we'd like to consider scenarios of all possible permutations of flipped constraints, excluding the original list.\n So we'd like to generate:\n\n [ func(C1), C2 , C3 ],\n [ C1 , func(C2), C3 ],\n [ func(C1), func(C2), C3 ],\n [ C1 , C2 , func(C3)],\n .. etc\n\n This is effectively treating the list of constraints as a bitmask of width len(lst) and counting up, skipping the\n 0th element (unmodified array).\n\n The code below yields lists of constraints permuted as above by treating list indeces as bitmasks from 1 to\n 2**len(lst) and applying func to all the set bit offsets."}
{"_id": "q_3268", "text": "solve bytes in |datas| based on"}
{"_id": "q_3269", "text": "Execute a symbolic run that follows a concrete run; return constraints generated\n and the stdin data produced"}
{"_id": "q_3270", "text": "Load `program` and establish program state, such as stack and arguments.\n\n :param program str: The ELF binary to load\n :param argv list: argv array\n :param envp list: envp array"}
{"_id": "q_3271", "text": "ARM kernel helpers\n\n https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt"}
{"_id": "q_3272", "text": "Adds a file descriptor to the current file descriptor list\n\n :rtype: int\n :param f: the file descriptor to add.\n :return: the index of the file descriptor in the file descr. list"}
{"_id": "q_3273", "text": "Openat SystemCall - Similar to open system call except dirfd argument\n when path contained in buf is relative, dirfd is referred to set the relative path\n Special value AT_FDCWD set for dirfd to set path relative to current directory\n\n :param dirfd: directory file descriptor to refer in case of relative path at buf\n :param buf: address of zero-terminated pathname\n :param flags: file access bits\n :param mode: file permission mode"}
{"_id": "q_3274", "text": "Synchronize a file's in-core state with that on disk."}
{"_id": "q_3275", "text": "Wrapper for sys_sigaction"}
{"_id": "q_3276", "text": "An implementation of chroot that does perform some basic error checking,\n but does not actually chroot.\n\n :param path: Path to chroot"}
{"_id": "q_3277", "text": "Wrapper for mmap2"}
{"_id": "q_3278", "text": "Syscall dispatcher."}
{"_id": "q_3279", "text": "Yield CPU.\n This will choose another process from the running list and change\n current running process. May give the same cpu if only one running\n process."}
{"_id": "q_3280", "text": "Wait for file descriptors or timeout.\n Adds the current process in the correspondent waiting list and\n yield the cpu to another running process."}
{"_id": "q_3281", "text": "Awake process if timer has expired"}
{"_id": "q_3282", "text": "Compute total load size of interpreter.\n\n :param ELFFile interp: interpreter ELF .so\n :return: total load size of interpreter, not aligned\n :rtype: int"}
{"_id": "q_3283", "text": "A version of openat that includes a symbolic path and symbolic directory file descriptor\n\n :param dirfd: directory file descriptor\n :param buf: address of zero-terminated pathname\n :param flags: file access bits\n :param mode: file permission mode"}
{"_id": "q_3284", "text": "receive - receive bytes from a file descriptor\n\n The receive system call reads up to count bytes from file descriptor fd to the\n buffer pointed to by buf. If count is zero, receive returns 0 and optionally\n sets *rx_bytes to zero.\n\n :param cpu: current CPU.\n :param fd: a valid file descriptor\n :param buf: a memory buffer\n :param count: max number of bytes to receive\n :param rx_bytes: if valid, points to the actual number of bytes received\n :return: 0 Success\n EBADF fd is not a valid file descriptor or is not open\n EFAULT buf or rx_bytes points to an invalid address."}
{"_id": "q_3285", "text": "deallocate - remove allocations\n The deallocate system call deletes the allocations for the specified\n address range, and causes further references to the addresses within the\n range to generate invalid memory accesses. The region is also\n automatically deallocated when the process is terminated.\n\n The address addr must be a multiple of the page size. The length parameter\n specifies the size of the region to be deallocated in bytes. All pages\n containing a part of the indicated range are deallocated, and subsequent\n references will terminate the process. It is not an error if the indicated\n range does not contain any allocated pages.\n\n The deallocate function is invoked through system call number 6.\n\n :param cpu: current CPU\n :param addr: the starting address to unmap.\n :param size: the size of the portion to unmap.\n :return 0 On success\n EINVAL addr is not page aligned.\n EINVAL length is zero.\n EINVAL any part of the region being deallocated is outside the valid\n address range of the process.\n\n :param cpu: current CPU.\n :return: C{0} on success."}
{"_id": "q_3286", "text": "Yield CPU.\n This will choose another process from the RUNNNIG list and change\n current running process. May give the same cpu if only one running\n process."}
{"_id": "q_3287", "text": "Wait for filedescriptors or timeout.\n Adds the current process to the corresponding waiting list and\n yields the cpu to another running process."}
{"_id": "q_3288", "text": "Symbolic version of Decree.sys_receive"}
{"_id": "q_3289", "text": "Symbolic version of Decree.sys_transmit"}
{"_id": "q_3290", "text": "Synchronization decorator."}
{"_id": "q_3291", "text": "Save an arbitrary, serializable `value` under `key`.\n\n :param str key: A string identifier under which to store the value.\n :param value: A serializable value\n :return:"}
{"_id": "q_3292", "text": "Load an arbitrary value identified by `key`.\n\n :param str key: The key that identifies the value\n :return: The loaded value"}
{"_id": "q_3293", "text": "Return a managed file-like object from which the calling code can read\n previously-serialized data.\n\n :param key:\n :return: A managed stream-like object"}
{"_id": "q_3294", "text": "Yield a file object representing `key`\n\n :param str key: The file to save to\n :param bool binary: Whether we should treat it as binary\n :return:"}
{"_id": "q_3295", "text": "Return just the filenames that match `glob_str` inside the store directory.\n\n :param str glob_str: A glob string, i.e. 'state_*'\n :return: list of matched keys"}
{"_id": "q_3296", "text": "Save a state to storage, return identifier.\n\n :param state: The state to save\n :param int state_id: If not None force the state id potentially overwriting old states\n :return: New state id\n :rtype: int"}
{"_id": "q_3297", "text": "Create an indexed output stream i.e. 'test_00000001.name'\n\n :param name: Identifier for the stream\n :return: A context-managed stream-like object"}
{"_id": "q_3298", "text": "Compare registers from a remote gdb session to current mcore.\n\n :param manticore.core.cpu Cpu: Current cpu\n :param bool should_print: Whether to print values to stdout\n :return: Whether or not any differences were detected\n :rtype: bool"}
{"_id": "q_3299", "text": "Mirror some service calls in manticore. Happens after qemu executed a SVC\n instruction, but before manticore did."}
{"_id": "q_3300", "text": "The entry point of the visitor.\n The exploration algorithm is a DFS post-order traversal\n The implementation used two stacks instead of a recursion\n The final result is store in self.result\n\n :param node: Node to explore\n :type node: Expression\n :param use_fixed_point: if True, it runs _methods until a fixed point is found\n :type use_fixed_point: Bool"}
{"_id": "q_3301", "text": "Overload Visitor._method because we want to stop to iterate over the\n visit_ functions as soon as a valid visit_ function is found"}
{"_id": "q_3302", "text": "a + 0 ==> a\n 0 + a ==> a"}
{"_id": "q_3303", "text": "a | 0 => a\n 0 | a => a\n 0xffffffff & a => 0xffffffff\n a & 0xffffffff => 0xffffffff"}
{"_id": "q_3304", "text": "Build transaction data from function signature and arguments"}
{"_id": "q_3305", "text": "Makes a function hash id from a method signature"}
{"_id": "q_3306", "text": "Translates a python integral or a BitVec into a 32 byte string, MSB first"}
{"_id": "q_3307", "text": "Translates a signed python integral or a BitVec into a 32 byte string, MSB first"}
{"_id": "q_3308", "text": "Make sure an EVM instruction has all of its arguments concretized according to\n provided policies.\n\n Example decoration:\n\n @concretized_args(size='ONE', address='')\n def LOG(self, address, size, *topics):\n ...\n\n The above will make sure that the |size| parameter to LOG is Concretized when symbolic\n according to the 'ONE' policy and concretize |address| with the default policy.\n\n :param policies: A kwargs list of argument names and their respective policies.\n Provide None or '' as policy to use default.\n :return: A function decorator"}
{"_id": "q_3309", "text": "This calculates the amount of extra gas needed for accessing to\n previously unused memory.\n\n :param address: base memory offset\n :param size: size of the memory access"}
{"_id": "q_3310", "text": "Read size byte from bytecode.\n If less than size bytes are available result will be pad with \\x00"}
{"_id": "q_3311", "text": "Push into the stack\n\n ITEM0\n ITEM1\n ITEM2\n sp-> {empty}"}
{"_id": "q_3312", "text": "Read a value from the top of the stack without removing it"}
{"_id": "q_3313", "text": "Revert the stack, gas, pc and memory allocation so it looks like before executing the instruction"}
{"_id": "q_3314", "text": "Integer division operation"}
{"_id": "q_3315", "text": "Signed modulo remainder operation"}
{"_id": "q_3316", "text": "Calculate extra gas fee"}
{"_id": "q_3317", "text": "Extend length of two's complement signed integer"}
{"_id": "q_3318", "text": "Less-than comparison"}
{"_id": "q_3319", "text": "Greater-than comparison"}
{"_id": "q_3320", "text": "Signed greater-than comparison"}
{"_id": "q_3321", "text": "Compute Keccak-256 hash"}
{"_id": "q_3322", "text": "Get input data of current environment"}
{"_id": "q_3323", "text": "Copy input data in current environment to memory"}
{"_id": "q_3324", "text": "Copy an account's code to memory"}
{"_id": "q_3325", "text": "Load word from memory"}
{"_id": "q_3326", "text": "Save byte to memory"}
{"_id": "q_3327", "text": "Load word from storage"}
{"_id": "q_3328", "text": "Save word to storage"}
{"_id": "q_3329", "text": "Conditionally alter the program counter"}
{"_id": "q_3330", "text": "Exchange 1st and 2nd stack items"}
{"_id": "q_3331", "text": "Message-call into this account with alternative account's code"}
{"_id": "q_3332", "text": "Halt execution returning output data"}
{"_id": "q_3333", "text": "Current ongoing human transaction"}
{"_id": "q_3334", "text": "Read a value from a storage slot on the specified account\n\n :param storage_address: an account address\n :param offset: the storage slot to use.\n :type offset: int or BitVec\n :return: the value\n :rtype: int or BitVec"}
{"_id": "q_3335", "text": "Writes a value to a storage slot in specified account\n\n :param storage_address: an account address\n :param offset: the storage slot to use.\n :type offset: int or BitVec\n :param value: the value to write\n :type value: int or BitVec"}
{"_id": "q_3336", "text": "Gets all items in an account storage\n\n :param address: account address\n :return: all items in account storage. items are tuple of (index, value). value can be symbolic\n :rtype: list[(storage_index, storage_value)]"}
{"_id": "q_3337", "text": "Create a fresh 160bit address"}
{"_id": "q_3338", "text": "Toggle between ARM and Thumb mode"}
{"_id": "q_3339", "text": "MRC moves to ARM register from coprocessor.\n\n :param Armv7Operand coprocessor: The name of the coprocessor; immediate\n :param Armv7Operand opcode1: coprocessor specific opcode; 3-bit immediate\n :param Armv7Operand dest: the destination operand: register\n :param Armv7Operand coprocessor_reg_n: the coprocessor register; immediate\n :param Armv7Operand coprocessor_reg_m: the coprocessor register; immediate\n :param Armv7Operand opcode2: coprocessor specific opcode; 3-bit immediate"}
{"_id": "q_3340", "text": "Loads double width data from memory."}
{"_id": "q_3341", "text": "Writes the contents of two registers to memory."}
{"_id": "q_3342", "text": "Address to Register adds an immediate value to the PC value, and writes the result to the destination register.\n\n :param ARMv7Operand dest: Specifies the destination register.\n :param ARMv7Operand src:\n Specifies the label of an instruction or literal data item whose address is to be loaded into\n <Rd>. The assembler calculates the required value of the offset from the Align(PC,4)\n value of the ADR instruction to this label."}
{"_id": "q_3343", "text": "Compare and Branch on Zero compares the value in a register with zero, and conditionally branches forward\n a constant value. It does not affect the condition flags.\n\n :param ARMv7Operand op: Specifies the register that contains the first operand.\n :param ARMv7Operand dest:\n Specifies the label of the instruction that is to be branched to. The assembler calculates the\n required value of the offset from the PC value of the CBZ instruction to this label, then\n selects an encoding that will set imm32 to that offset. Allowed offsets are even numbers in\n the range 0 to 126."}
{"_id": "q_3344", "text": "Get next instruction using the Capstone disassembler\n\n :param str code: binary blob to be disassembled\n :param long pc: program counter"}
{"_id": "q_3345", "text": "Add a constraint to the set\n\n :param constraint: The constraint to add to the set.\n :param check: Currently unused.\n :return:"}
{"_id": "q_3346", "text": "Declare the variable `var`"}
{"_id": "q_3347", "text": "True if expression_var is declared in this constraint set"}
{"_id": "q_3348", "text": "Perform the inverse transformation to encoded data. Will attempt best case reconstruction, which means\n it will return nan for handle_missing and handle_unknown settings that break the bijection. We issue\n warnings when some of those cases occur.\n\n Parameters\n ----------\n X_in : array-like, shape = [n_samples, n_features]\n\n Returns\n -------\n p: array, the same size of X_in"}
{"_id": "q_3349", "text": "Convert basen code as integers.\n\n Parameters\n ----------\n X : DataFrame\n encoded data\n cols : list-like\n Column names in the DataFrame that be encoded\n base : int\n The base of transform\n\n Returns\n -------\n numerical: DataFrame"}
{"_id": "q_3350", "text": "The lambda body to transform the column values"}
{"_id": "q_3351", "text": "Returns names of 'object' columns in the DataFrame."}
{"_id": "q_3352", "text": "Unite target data type into a Series.\n If the target is a Series or a DataFrame, we preserve its index.\n But if the target does not contain index attribute, we use the index from the argument."}
{"_id": "q_3353", "text": "Here we iterate through the datasets and score them with a classifier using different encodings."}
{"_id": "q_3354", "text": "A wrapper around click.secho that disables any coloring being used\n if colors have been disabled."}
{"_id": "q_3355", "text": "Associate a notification template from this job template.\n\n =====API DOCS=====\n Associate a notification template from this job template.\n\n :param job_template: The job template to associate to.\n :type job_template: str\n :param notification_template: The notification template to be associated.\n :type notification_template: str\n :param status: type of notification this notification template should be associated to.\n :type status: str\n :returns: Dictionary of only one key \"changed\", which indicates whether the association succeeded.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3356", "text": "Disassociate a notification template from this job template.\n\n =====API DOCS=====\n Disassociate a notification template from this job template.\n\n :param job_template: The job template to disassociate from.\n :type job_template: str\n :param notification_template: The notification template to be disassociated.\n :type notification_template: str\n :param status: type of notification this notification template should be disassociated from.\n :type status: str\n :returns: Dictionary of only one key \"changed\", which indicates whether the disassociation succeeded.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3357", "text": "Contact Tower and request a configuration update using this job template.\n\n =====API DOCS=====\n Contact Tower and request a provisioning callback using this job template.\n\n :param pk: Primary key of the job template to run provisioning callback against.\n :type pk: int\n :param host_config_key: Key string used to authenticate the callback host.\n :type host_config_key: str\n :param extra_vars: Extra variables that are passed to provisioning callback.\n :type extra_vars: array of str\n :returns: A dictionary of a single key \"changed\", which indicates whether the provisioning callback\n is successful.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3358", "text": "Decorator to aggregate unified_jt-related fields.\n\n Args:\n func: The CURD method to be decorated.\n is_create: Boolean flag showing whether this method is create.\n has_pk: Boolean flag showing whether this method uses pk as argument.\n\n Returns:\n A function with necessary click-related attributes whose keyworded\n arguments are aggregated.\n\n Raises:\n exc.UsageError: Either more than one unified jt fields are\n provided, or none is provided when is_create flag is set."}
{"_id": "q_3359", "text": "Internal method that lies to our `monitor` method by returning\n a scorecard for the workflow job where the standard out\n would have been expected."}
{"_id": "q_3360", "text": "Monkey-patch click's format_options method to support option categorization."}
{"_id": "q_3361", "text": "Return one and exactly one object\n\n =====API DOCS=====\n Return one and exactly one Tower setting.\n\n :param pk: Primary key of the Tower setting to retrieve\n :type pk: int\n :returns: loaded JSON of the retrieved Tower setting object.\n :rtype: dict\n :raises tower_cli.exceptions.NotFound: When no specified Tower setting exists.\n\n =====API DOCS====="}
{"_id": "q_3362", "text": "Return one and exactly one object.\n\n Lookups may be through a primary key, specified as a positional argument, and/or through filters specified\n through keyword arguments.\n\n If the number of results does not equal one, raise an exception.\n\n =====API DOCS=====\n Retrieve one and exactly one object.\n\n :param pk: Primary key of the resource to be read. Tower CLI will only attempt to read *that* object\n if ``pk`` is provided (not ``None``).\n :type pk: int\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve if ``pk`` is not provided.\n :returns: loaded JSON of the retrieved resource object.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3363", "text": "Disassociate the `other` record from the `me` record."}
{"_id": "q_3364", "text": "Copy an object.\n\n Only the ID is used for the lookup. All provided fields are used to override the old data from the\n copied resource.\n\n =====API DOCS=====\n Copy an object.\n\n :param pk: Primary key of the resource object to be copied\n :param new_name: The new name to give the resource if deep copying via the API\n :type pk: int\n :param `**kwargs`: Keyword arguments of fields whose given value will override the original value.\n :returns: loaded JSON of the copied new resource object.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3365", "text": "Internal utility function to return standard out. Requires the pk of a unified job."}
{"_id": "q_3366", "text": "Print out the standard out of a unified job to the command line or output file.\n For Projects, print the standard out of most recent update.\n For Inventory Sources, print standard out of most recent sync.\n For Jobs, print the job's standard out.\n For Workflow Jobs, print a status table of its jobs.\n\n =====API DOCS=====\n Print out the standard out of a unified job to the command line or output file.\n For Projects, print the standard out of most recent update.\n For Inventory Sources, print standard out of most recent sync.\n For Jobs, print the job's standard out.\n For Workflow Jobs, print a status table of its jobs.\n\n :param pk: Primary key of the job resource object to be monitored.\n :type pk: int\n :param start_line: Line at which to start printing job output\n :param end_line: Line at which to end printing job output\n :param outfile: Alternative file than stdout to write job stdout to.\n :type outfile: file\n :param `**kwargs`: Keyword arguments used to look up job resource object to monitor if ``pk`` is\n not provided.\n :returns: A dictionary containing changed=False\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3367", "text": "Stream the standard output from a job, project update, or inventory udpate.\n\n =====API DOCS=====\n Stream the standard output from a job run to stdout.\n\n :param pk: Primary key of the job resource object to be monitored.\n :type pk: int\n :param parent_pk: Primary key of the unified job template resource object whose latest job run will be\n monitored if ``pk`` is not set.\n :type parent_pk: int\n :param timeout: Number in seconds after which this method will time out.\n :type timeout: float\n :param interval: Polling interval to refresh content from Tower.\n :type interval: float\n :param outfile: Alternative file than stdout to write job stdout to.\n :type outfile: file\n :param `**kwargs`: Keyword arguments used to look up job resource object to monitor if ``pk`` is\n not provided.\n :returns: A dictionary combining the JSON output of the finished job resource object, as well as\n two extra fields: \"changed\", a flag indicating if the job resource object is finished\n as expected; \"id\", an integer which is the primary key of the job resource object being\n monitored.\n :rtype: dict\n :raises tower_cli.exceptions.Timeout: When monitor time reaches time out.\n :raises tower_cli.exceptions.JobFailure: When the job being monitored runs into failure.\n\n =====API DOCS====="}
{"_id": "q_3368", "text": "Print the current job status. This is used to check a running job. You can look up the job with\n the same parameters used for a get request.\n\n =====API DOCS=====\n Retrieve the current job status.\n\n :param pk: Primary key of the resource to retrieve status from.\n :type pk: int\n :param detail: Flag that if set, return the full JSON of the job resource rather than a status summary.\n :type detail: bool\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve status from if ``pk``\n is not provided.\n :returns: full loaded JSON of the specified unified job if ``detail`` flag is on; trimed JSON containing\n only \"elapsed\", \"failed\" and \"status\" fields of the unified job if ``detail`` flag is off.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3369", "text": "Cancel a currently running job.\n\n Fails with a non-zero exit status if the job cannot be canceled.\n You must provide either a pk or parameters in the job's identity.\n\n =====API DOCS=====\n Cancel a currently running job.\n\n :param pk: Primary key of the job resource to restart.\n :type pk: int\n :param fail_if_not_running: Flag that if set, raise exception if the job resource cannot be canceled.\n :type fail_if_not_running: bool\n :param `**kwargs`: Keyword arguments used to look up job resource object to restart if ``pk`` is not\n provided.\n :returns: A dictionary of two keys: \"status\", which is \"canceled\", and \"changed\", which indicates if\n the job resource has been successfully canceled.\n :rtype: dict\n :raises tower_cli.exceptions.TowerCLIError: When the job resource cannot be canceled and\n ``fail_if_not_running`` flag is on.\n =====API DOCS====="}
{"_id": "q_3370", "text": "Relaunch a stopped job.\n\n Fails with a non-zero exit status if the job cannot be relaunched.\n You must provide either a pk or parameters in the job's identity.\n\n =====API DOCS=====\n Relaunch a stopped job resource.\n\n :param pk: Primary key of the job resource to relaunch.\n :type pk: int\n :param `**kwargs`: Keyword arguments used to look up job resource object to relaunch if ``pk`` is not\n provided.\n :returns: A dictionary combining the JSON output of the relaunched job resource object, as well\n as an extra field \"changed\", a flag indicating if the job resource object is status-changed\n as expected.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3371", "text": "Update all related inventory sources of the given inventory.\n\n Note global option --format is not available here, as the output would always be JSON-formatted.\n\n =====API DOCS=====\n Update all related inventory sources of the given inventory.\n\n :param pk: Primary key of the given inventory.\n :type pk: int\n :param `**kwargs`: Keyword arguments list of available fields used for searching resource objects.\n :returns: A JSON object of update status of the given inventory.\n :rtype: dict\n =====API DOCS====="}
{"_id": "q_3372", "text": "Do extra processing so we can display the actor field as\n a top-level field"}
{"_id": "q_3373", "text": "Log the given output to stderr if and only if we are in\n verbose mode.\n\n If we are not in verbose mode, this is a no-op."}
{"_id": "q_3374", "text": "Hook for ResourceMeta class to call when initializing model class.\n Saves fields obtained from resource class backlinks"}
{"_id": "q_3375", "text": "Returns a callable which becomes the associate or disassociate\n method for the related field.\n Method can be overridden to add additional functionality, but\n `_produce_method` may also need to be subclassed to decorate\n it appropriately."}
{"_id": "q_3376", "text": "Create a new label.\n\n There are two types of label creation: isolatedly creating a new label and creating a new label under\n a job template. Here the two types are discriminated by whether to provide --job-template option.\n\n Fields in the resource's `identity` tuple are used for a lookup; if a match is found, then no-op (unless\n `force_on_exists` is set) but do not fail (unless `fail_on_found` is set).\n\n =====API DOCS=====\n Create a label.\n\n :param job_template: Primary key or name of the job template for the created label to associate to.\n :type job_template: str\n :param fail_on_found: Flag that if set, the operation fails if an object matching the unique criteria\n already exists.\n :type fail_on_found: bool\n :param force_on_exists: Flag that if set, then if a match is found on unique fields, other fields will\n be updated to the provided values.; If unset, a match causes the request to be\n a no-op.\n :type force_on_exists: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as POST body to create the\n resource object.\n :returns: A dictionary combining the JSON output of the created resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is created successfully; \"id\", an integer which\n is the primary key of the created object.\n :rtype: dict\n :raises tower_cli.exceptions.TowerCLIError: When the label already exists and ``fail_on_found`` flag is on.\n\n =====API DOCS====="}
{"_id": "q_3377", "text": "Echo a setting to the CLI."}
{"_id": "q_3378", "text": "Read or write tower-cli configuration.\n\n `tower config` saves the given setting to the appropriate Tower CLI;\n either the user's ~/.tower_cli.cfg file, or the /etc/tower/tower_cli.cfg\n file if --global is used.\n\n Writing to /etc/tower/tower_cli.cfg is likely to require heightened\n permissions (in other words, sudo)."}
{"_id": "q_3379", "text": "Export assets from Tower.\n\n 'tower receive' exports one or more assets from a Tower instance\n\n For all of the possible assets types the TEXT can either be the assets name\n (or username for the case of a user) or the keyword all. Specifying all\n will export all of the assets of that type."}
{"_id": "q_3380", "text": "Modify an already existing.\n\n To edit the project's organizations, see help for organizations.\n\n Fields in the resource's `identity` tuple can be used in lieu of a\n primary key for a lookup; in such a case, only other fields are\n written.\n\n To modify unique fields, you must use the primary key for the lookup.\n\n =====API DOCS=====\n Modify an already existing project.\n\n :param pk: Primary key of the resource to be modified.\n :type pk: int\n :param create_on_missing: Flag that if set, a new object is created if ``pk`` is not set and objects\n matching the appropriate unique criteria is not found.\n :type create_on_missing: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as PATCH body to modify the\n resource object. if ``pk`` is not set, key-value pairs of ``**kwargs`` which are\n also in resource's identity will be used to lookup existing reosource.\n :returns: A dictionary combining the JSON output of the modified resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is successfully updated; \"id\", an integer which\n is the primary key of the updated object.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3381", "text": "Print the status of the most recent update.\n\n =====API DOCS=====\n Print the status of the most recent update.\n\n :param pk: Primary key of the resource to retrieve status from.\n :type pk: int\n :param detail: Flag that if set, return the full JSON of the job resource rather than a status summary.\n :type detail: bool\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve status from if ``pk``\n is not provided.\n :returns: full loaded JSON of the specified unified job if ``detail`` flag is on; trimed JSON containing\n only \"elapsed\", \"failed\" and \"status\" fields of the unified job if ``detail`` flag is off.\n :rtype: dict\n =====API DOCS====="}
{"_id": "q_3382", "text": "Match against the appropriate choice value using the superclass\n implementation, and then return the actual choice."}
{"_id": "q_3383", "text": "Return the appropriate integer value. If a non-integer is\n provided, attempt a name-based lookup and return the primary key."}
{"_id": "q_3384", "text": "Remove a failure node link.\n The resulatant 2 nodes will both become root nodes.\n\n =====API DOCS=====\n Remove a failure node link.\n\n :param parent: Primary key of parent node to disassociate failure node from.\n :type parent: int\n :param child: Primary key of child node to be disassociated.\n :type child: int\n :returns: Dictionary of only one key \"changed\", which indicates whether the disassociation succeeded.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3385", "text": "Converts a set of CLI input arguments, `in_data`, into\n request data and an endpoint that can be used to look\n up a role or list of roles.\n\n Also changes the format of `type` in data to what the server\n expects for the role model, as it exists in the database."}
{"_id": "q_3386", "text": "Add or remove columns from the output."}
{"_id": "q_3387", "text": "Populates columns and sets display attribute as needed.\n Operates on data."}
{"_id": "q_3388", "text": "Return a list of roles.\n\n =====API DOCS=====\n Retrieve a list of objects.\n\n :param all_pages: Flag that if set, collect all pages of content from the API when returning results.\n :type all_pages: bool\n :param page: The page to show. Ignored if all_pages is set.\n :type page: int\n :param query: Contains 2-tuples used as query parameters to filter resulting resource objects.\n :type query: list\n :param `**kwargs`: Keyword arguments list of available fields used for searching resource objects.\n :returns: A JSON object containing details of all resource objects returned by Tower backend.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3389", "text": "Get information about a role.\n\n =====API DOCS=====\n Retrieve one and exactly one object.\n\n :param pk: Primary key of the resource to be read. Tower CLI will only attempt to read *that* object\n if ``pk`` is provided (not ``None``).\n :type pk: int\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve if ``pk`` is not provided.\n :returns: loaded JSON of the retrieved resource object.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3390", "text": "Investigate two lists of workflow TreeNodes and categorize them.\n\n There will be three types of nodes after categorization:\n 1. Nodes that only exists in the new list. These nodes will later be\n created recursively.\n 2. Nodes that only exists in the old list. These nodes will later be\n deleted recursively.\n 3. Node pairs that makes an exact match. These nodes will be further\n investigated.\n\n Corresponding nodes of old and new lists will be distinguished by their\n unified_job_template value. A special case is that both the old and the new\n lists contain one type of node, say A, and at least one of them contains\n duplicates. In this case all A nodes in the old list will be categorized as\n to-be-deleted and all A nodes in the new list will be categorized as\n to-be-created."}
{"_id": "q_3391", "text": "Takes the list results from the API in `node_results` and\n translates this data into a dictionary organized in a\n human-readable heirarchial structure"}
{"_id": "q_3392", "text": "Returns a dictionary that represents the node network of the\n workflow job template"}
{"_id": "q_3393", "text": "Disassociate a notification template from this workflow.\n\n =====API DOCS=====\n Disassociate a notification template from this workflow job template.\n\n :param job_template: The workflow job template to disassociate from.\n :type job_template: str\n :param notification_template: The notification template to be disassociated.\n :type notification_template: str\n :param status: type of notification this notification template should be disassociated from.\n :type status: str\n :returns: Dictionary of only one key \"changed\", which indicates whether the disassociation succeeded.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3394", "text": "Create a group.\n\n =====API DOCS=====\n Create a group.\n\n :param parent: Primary key or name of the group which will be the parent of created group.\n :type parent: str\n :param fail_on_found: Flag that if set, the operation fails if an object matching the unique criteria\n already exists.\n :type fail_on_found: bool\n :param force_on_exists: Flag that if set, then if a match is found on unique fields, other fields will\n be updated to the provided values.; If unset, a match causes the request to be\n a no-op.\n :type force_on_exists: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as POST body to create the\n resource object.\n :returns: A dictionary combining the JSON output of the created resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is created successfully; \"id\", an integer which\n is the primary key of the created object.\n :rtype: dict\n :raises tower_cli.exceptions.UsageError: When inventory is not provided in ``**kwargs`` and ``parent``\n is not provided.\n\n =====API DOCS====="}
{"_id": "q_3395", "text": "Return a list of groups.\n\n =====API DOCS=====\n Retrieve a list of groups.\n\n :param root: Flag that if set, only root groups of a specific inventory will be listed.\n :type root: bool\n :param parent: Primary key or name of the group whose child groups will be listed.\n :type parent: str\n :param all_pages: Flag that if set, collect all pages of content from the API when returning results.\n :type all_pages: bool\n :param page: The page to show. Ignored if all_pages is set.\n :type page: int\n :param query: Contains 2-tuples used as query parameters to filter resulting resource objects.\n :type query: list\n :param `**kwargs`: Keyword arguments list of available fields used for searching resource objects.\n :returns: A JSON object containing details of all resource objects returned by Tower backend.\n :rtype: dict\n :raises tower_cli.exceptions.UsageError: When ``root`` flag is on and ``inventory`` is not present in\n ``**kwargs``.\n\n =====API DOCS====="}
{"_id": "q_3396", "text": "Associate this group with the specified group.\n\n =====API DOCS=====\n Associate this group with the specified group.\n\n :param group: Primary key or name of the child group to associate.\n :type group: str\n :param parent: Primary key or name of the parent group to associate to.\n :type parent: str\n :param inventory: Primary key or name of the inventory the association should happen in.\n :type inventory: str\n :returns: Dictionary of only one key \"changed\", which indicates whether the association succeeded.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3397", "text": "Similar to the Ansible function of the same name, parses file\n with a key=value pattern and stores information in a dictionary,\n but not as fully featured as the corresponding Ansible code."}
{"_id": "q_3398", "text": "Expand PyYAML's built-in dumper to support parsing OrderedDict. Return\n a string as parse result of the original data structure, which includes\n OrderedDict.\n\n Args:\n data: the data structure to be dumped(parsed) which is supposed to\n contain OrderedDict.\n Dumper: the yaml serializer to be expanded and used.\n kws: extra key-value arguments to be passed to yaml.dump."}
{"_id": "q_3399", "text": "Remove None-valued and configuration-related keyword arguments"}
{"_id": "q_3400", "text": "Combine configuration-related keyword arguments into\n notification_configuration."}
{"_id": "q_3401", "text": "Create a notification template.\n\n All required configuration-related fields (required according to\n notification_type) must be provided.\n\n There are two types of notification template creation: isolatedly\n creating a new notification template and creating a new notification\n template under a job template. Here the two types are discriminated by\n whether to provide --job-template option. --status option controls\n more specific, job-run-status-related association.\n\n Fields in the resource's `identity` tuple are used for a lookup;\n if a match is found, then no-op (unless `force_on_exists` is set) but\n do not fail (unless `fail_on_found` is set).\n\n =====API DOCS=====\n Create an object.\n\n :param fail_on_found: Flag that if set, the operation fails if an object matching the unique criteria\n already exists.\n :type fail_on_found: bool\n :param force_on_exists: Flag that if set, then if a match is found on unique fields, other fields will\n be updated to the provided values.; If unset, a match causes the request to be\n a no-op.\n :type force_on_exists: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as POST body to create the\n resource object.\n :returns: A dictionary combining the JSON output of the created resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is created successfully; \"id\", an integer which\n is the primary key of the created object.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3402", "text": "Modify an existing notification template.\n\n Not all configuration-related fields (required according to\n notification_type when creating) need to be provided.\n\n Fields in the resource's `identity` tuple can be used in lieu of a\n primary key for a lookup; in such a case, only other fields are\n written.\n\n To modify unique fields, you must use the primary key for the lookup.\n\n =====API DOCS=====\n Modify an already existing object.\n\n :param pk: Primary key of the resource to be modified.\n :type pk: int\n :param create_on_missing: Flag that if set, a new object is created if ``pk`` is not set and objects\n matching the appropriate unique criteria are not found.\n :type create_on_missing: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as PATCH body to modify the\n resource object. If ``pk`` is not set, key-value pairs of ``**kwargs`` which are\n also in the resource's identity will be used to look up the existing resource.\n :returns: A dictionary combining the JSON output of the modified resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is successfully updated; \"id\", an integer which\n is the primary key of the updated object.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3403", "text": "Return one and exactly one notification template.\n\n Note that configuration-related fields like\n 'notification_configuration' and 'channels' will not be\n used even if provided.\n\n Lookups may be through a primary key, specified as a positional\n argument, and/or through filters specified through keyword arguments.\n\n If the number of results does not equal one, raise an exception.\n\n =====API DOCS=====\n Retrieve one and exactly one object.\n\n :param pk: Primary key of the resource to be read. Tower CLI will only attempt to read *that* object\n if ``pk`` is provided (not ``None``).\n :type pk: int\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve if ``pk`` is not provided.\n :returns: loaded JSON of the retrieved resource object.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3404", "text": "Read tower-cli config values from the environment if present, being\n careful not to override config values that were explicitly passed in."}
{"_id": "q_3405", "text": "Read the configuration from the given file.\n\n If the file lacks any section header, add a [general] section\n header that encompasses the whole thing."}
{"_id": "q_3406", "text": "Launch a new ad-hoc command.\n\n Runs a user-defined command from Ansible Tower, immediately starts it,\n and returns back an ID in order for its status to be monitored.\n\n =====API DOCS=====\n Launch a new ad-hoc command.\n\n :param monitor: Flag that if set, immediately calls ``monitor`` on the newly launched command rather\n than exiting with a success.\n :type monitor: bool\n :param wait: Flag that if set, monitor the status of the job, but do not print while job is in progress.\n :type wait: bool\n :param timeout: If provided with ``monitor`` flag set, this attempt will time out after the given number\n of seconds.\n :type timeout: int\n :param `**kwargs`: Fields needed to create and launch an ad hoc command.\n :returns: Result of subsequent ``monitor`` call if ``monitor`` flag is on; Result of subsequent ``wait``\n call if ``wait`` flag is on; dictionary of \"id\" and \"changed\" if none of the two flags are on.\n :rtype: dict\n :raises tower_cli.exceptions.TowerCLIError: When ad hoc commands are not available in Tower backend.\n\n =====API DOCS====="}
{"_id": "q_3407", "text": "Given a method with a docstring, convert the docstring\n to more CLI appropriate wording, and also disambiguate the\n word \"object\" on the base class docstrings."}
{"_id": "q_3408", "text": "Given a method, return a method that runs the internal\n method and echoes the result."}
{"_id": "q_3409", "text": "Echoes only the id"}
{"_id": "q_3410", "text": "Retrieve the appropriate method from the Resource,\n decorate it as a click command, and return that method."}
{"_id": "q_3411", "text": "Update the given inventory source.\n\n =====API DOCS=====\n Update the given inventory source.\n\n :param inventory_source: Primary key or name of the inventory source to be updated.\n :type inventory_source: str\n :param monitor: Flag that if set, immediately calls ``monitor`` on the newly launched inventory update\n rather than exiting with a success.\n :type monitor: bool\n :param wait: Flag that if set, monitor the status of the inventory update, but do not print while it is\n in progress.\n :type wait: bool\n :param timeout: If provided with ``monitor`` flag set, this attempt will time out after the given number\n of seconds.\n :type timeout: int\n :param `**kwargs`: Fields used to override underlying inventory source fields when creating and launching\n an inventory update.\n :returns: Result of subsequent ``monitor`` call if ``monitor`` flag is on; Result of subsequent ``wait``\n call if ``wait`` flag is on; dictionary of \"status\" if none of the two flags are on.\n :rtype: dict\n :raises tower_cli.exceptions.BadRequest: When the inventory source cannot be updated.\n\n =====API DOCS====="}
{"_id": "q_3412", "text": "Return a list of hosts.\n\n =====API DOCS=====\n Retrieve a list of hosts.\n\n :param group: Primary key or name of the group whose hosts will be listed.\n :type group: str\n :param all_pages: Flag that if set, collect all pages of content from the API when returning results.\n :type all_pages: bool\n :param page: The page to show. Ignored if all_pages is set.\n :type page: int\n :param query: Contains 2-tuples used as query parameters to filter resulting resource objects.\n :type query: list\n :param `**kwargs`: Keyword arguments list of available fields used for searching resource objects.\n :returns: A JSON object containing details of all resource objects returned by Tower backend.\n :rtype: dict\n\n =====API DOCS====="}
{"_id": "q_3413", "text": "Extra format methods for multi methods that add all the commands\n after the options."}
{"_id": "q_3414", "text": "Return a list of commands present in the commands and resources\n folders, but not subcommands."}
{"_id": "q_3415", "text": "Returns a list of multi-commands for each resource type."}
{"_id": "q_3416", "text": "Returns a list of global commands, related to CLI\n configuration or system management in general."}
{"_id": "q_3417", "text": "Given a command identified by its name, import the appropriate\n module and return the decorated command.\n\n Resources are automatically commands, but if both a resource and\n a command are defined, the command takes precedence."}
{"_id": "q_3418", "text": "Adds the decorators for all types of unified job templates,\n and if the non-unified type is specified, converts it into the\n unified_job_template kwarg."}
{"_id": "q_3419", "text": "Translate a Mopidy search query to a Spotify search query"}
{"_id": "q_3420", "text": "Parse Retry-After header from response if it is set."}
{"_id": "q_3421", "text": "Generates a state string to be used in authorizations."}
{"_id": "q_3422", "text": "Parse token from the URI fragment, used by MobileApplicationClients.\n\n :param authorization_response: The full URL of the redirect back to you\n :return: A token dict"}
{"_id": "q_3423", "text": "Fetch a new access token using a refresh token.\n\n :param token_url: The token endpoint, must be HTTPS.\n :param refresh_token: The refresh_token to use.\n :param body: Optional application/x-www-form-urlencoded body to include\n in the token request. Prefer kwargs over body.\n :param auth: An auth tuple or method as accepted by `requests`.\n :param timeout: Timeout of the request in seconds.\n :param headers: A dict of headers to be used by `requests`.\n :param verify: Verify SSL certificate.\n :param proxies: The `proxies` argument will be passed to `requests`.\n :param kwargs: Extra parameters to include in the token request.\n :return: A token dict"}
{"_id": "q_3424", "text": "Boolean that indicates whether this session has an OAuth token\n or not. If `self.authorized` is True, you can reasonably expect\n OAuth-protected requests to the resource to succeed. If\n `self.authorized` is False, you need the user to go through the OAuth\n authentication dance before OAuth-protected requests to the resource\n will succeed."}
{"_id": "q_3425", "text": "Extract parameters from the post authorization redirect response URL.\n\n :param url: The full URL that resulted from the user being redirected\n back from the OAuth provider to you, the client.\n :returns: A dict of parameters extracted from the URL.\n\n >>> redirect_response = 'https://127.0.0.1/callback?oauth_token=kjerht2309uf&oauth_token_secret=lsdajfh923874&oauth_verifier=w34o8967345'\n >>> oauth_session = OAuth1Session('client-key', client_secret='secret')\n >>> oauth_session.parse_authorization_response(redirect_response)\n {\n 'oauth_token': 'kjerht2309uf',\n 'oauth_token_secret': 'lsdajfh923874',\n 'oauth_verifier': 'w34o8967345',\n }"}
{"_id": "q_3426", "text": "When being redirected we should always strip Authorization\n header, since nonce may not be reused as per OAuth spec."}
{"_id": "q_3427", "text": "Validate new property value before setting it.\n\n value -- New value"}
{"_id": "q_3428", "text": "Get the property description.\n\n Returns a dictionary describing the property."}
{"_id": "q_3429", "text": "Set the current value of the property.\n\n value -- the value to set"}
{"_id": "q_3430", "text": "Get the thing at the given index.\n\n idx -- the index"}
{"_id": "q_3431", "text": "Initialize the handler.\n\n things -- list of Things managed by this server\n hosts -- list of allowed hostnames"}
{"_id": "q_3432", "text": "Set the default headers for all requests."}
{"_id": "q_3433", "text": "Validate Host header."}
{"_id": "q_3434", "text": "Handle a GET request, including websocket requests.\n\n thing_id -- ID of the thing this request is for"}
{"_id": "q_3435", "text": "Handle an incoming message.\n\n message -- message to handle"}
{"_id": "q_3436", "text": "Set a new value for this thing.\n\n value -- value to set"}
{"_id": "q_3437", "text": "Notify observers of a new value.\n\n value -- new value"}
{"_id": "q_3438", "text": "Return the thing state as a Thing Description.\n\n Returns the state as a dictionary."}
{"_id": "q_3439", "text": "Set the prefix of any hrefs associated with this thing.\n\n prefix -- the prefix"}
{"_id": "q_3440", "text": "Get the thing's properties as a dictionary.\n\n Returns the properties as a dictionary, i.e. name -> description."}
{"_id": "q_3441", "text": "Get the thing's actions as an array.\n\n action_name -- Optional action name to get descriptions for\n\n Returns the action descriptions."}
{"_id": "q_3442", "text": "Get the thing's events as an array.\n\n event_name -- Optional event name to get descriptions for\n\n Returns the event descriptions."}
{"_id": "q_3443", "text": "Add a property to this thing.\n\n property_ -- property to add"}
{"_id": "q_3444", "text": "Remove a property from this thing.\n\n property_ -- property to remove"}
{"_id": "q_3445", "text": "Get a property's value.\n\n property_name -- the property to get the value of\n\n Returns the properties value, if found, else None."}
{"_id": "q_3446", "text": "Get a mapping of all properties and their values.\n\n Returns a dictionary of property_name -> value."}
{"_id": "q_3447", "text": "Get an action.\n\n action_name -- name of the action\n action_id -- ID of the action\n\n Returns the requested action if found, else None."}
{"_id": "q_3448", "text": "Add a new event and notify subscribers.\n\n event -- the event that occurred"}
{"_id": "q_3449", "text": "Add an available event.\n\n name -- name of the event\n metadata -- event metadata, i.e. type, description, etc., as a dict"}
{"_id": "q_3450", "text": "Remove an existing action.\n\n action_name -- name of the action\n action_id -- ID of the action\n\n Returns a boolean indicating the presence of the action."}
{"_id": "q_3451", "text": "Remove a websocket subscriber.\n\n ws -- the websocket"}
{"_id": "q_3452", "text": "Remove a websocket subscriber from an event.\n\n name -- name of the event\n ws -- the websocket"}
{"_id": "q_3453", "text": "Notify all subscribers of an action status change.\n\n action -- the action whose status changed"}
{"_id": "q_3454", "text": "Notify all subscribers of an event.\n\n event -- the event that occurred"}
{"_id": "q_3455", "text": "Returns data with different cfgstr values that were previously computed\n with this cacher.\n\n Example:\n >>> from ubelt.util_cache import Cacher\n >>> # Ensure that some data exists\n >>> known_fnames = set()\n >>> cacher = Cacher('versioned_data', cfgstr='1')\n >>> cacher.ensure(lambda: 'data1')\n >>> known_fnames.add(cacher.get_fpath())\n >>> cacher = Cacher('versioned_data', cfgstr='2')\n >>> cacher.ensure(lambda: 'data2')\n >>> known_fnames.add(cacher.get_fpath())\n >>> # List previously computed configs for this type\n >>> from os.path import basename\n >>> cacher = Cacher('versioned_data', cfgstr='2')\n >>> exist_fpaths = set(cacher.existing_versions())\n >>> exist_fnames = list(map(basename, exist_fpaths))\n >>> print(exist_fnames)\n >>> assert exist_fpaths == known_fnames\n\n ['versioned_data_1.pkl', 'versioned_data_2.pkl']"}
{"_id": "q_3456", "text": "Removes the saved cache and metadata from disk"}
{"_id": "q_3457", "text": "Like load, but returns None if the load fails due to a cache miss.\n\n Args:\n on_error (str): How to handle non-IO errors. Either raise,\n which re-raises the exception, or clear, which deletes the cache\n and returns None."}
{"_id": "q_3458", "text": "Loads the data\n\n Raises:\n IOError - if the data is unable to be loaded. This could be due to\n a cache miss or because the cache is disabled.\n\n Example:\n >>> from ubelt.util_cache import * # NOQA\n >>> # Setting the cacher as enabled=False turns it off\n >>> cacher = Cacher('test_disabled_load', '', enabled=True)\n >>> cacher.save('data')\n >>> assert cacher.load() == 'data'\n >>> cacher.enabled = False\n >>> assert cacher.tryload() is None"}
{"_id": "q_3459", "text": "Wraps around a function. A cfgstr must be stored in the base cacher.\n\n Args:\n func (callable): function that will compute data on cache miss\n *args: passed to func\n **kwargs: passed to func\n\n Example:\n >>> from ubelt.util_cache import * # NOQA\n >>> def func():\n >>> return 'expensive result'\n >>> fname = 'test_cacher_ensure'\n >>> cfgstr = 'func params'\n >>> cacher = Cacher(fname, cfgstr)\n >>> cacher.clear()\n >>> data1 = cacher.ensure(func)\n >>> data2 = cacher.ensure(func)\n >>> assert data1 == 'expensive result'\n >>> assert data1 == data2\n >>> cacher.clear()"}
{"_id": "q_3460", "text": "Returns the stamp certificate if it exists"}
{"_id": "q_3461", "text": "Get the hash of the each product file"}
{"_id": "q_3462", "text": "Check to see if a previously existing stamp is still valid and if the\n expected result of that computation still exists.\n\n Args:\n cfgstr (str, optional): override the default cfgstr if specified\n product (PathLike or Sequence[PathLike], optional): override the\n default product if specified"}
{"_id": "q_3463", "text": "Recertify that the product has been recomputed by writing a new\n certificate to disk."}
{"_id": "q_3464", "text": "Returns true if the redirect is a terminal.\n\n Notes:\n Needed for IPython.embed to work properly when this class is used\n to override stdout / stderr."}
{"_id": "q_3465", "text": "Gets the encoding of the `redirect` IO object\n\n Doctest:\n >>> redirect = io.StringIO()\n >>> assert TeeStringIO(redirect).encoding is None\n >>> assert TeeStringIO(None).encoding is None\n >>> assert TeeStringIO(sys.stdout).encoding is sys.stdout.encoding\n >>> redirect = io.TextIOWrapper(io.StringIO())\n >>> assert TeeStringIO(redirect).encoding is redirect.encoding"}
{"_id": "q_3466", "text": "Write to this and the redirected stream"}
{"_id": "q_3467", "text": "Returns path for user-specific data files\n\n Returns:\n PathLike : path to the data dir used by the current operating system"}
{"_id": "q_3468", "text": "Returns a directory which should be writable for any application.\n This should be used for persistent configuration files.\n\n Returns:\n PathLike : path to the cache dir used by the current operating system"}
{"_id": "q_3469", "text": "Calls `get_app_cache_dir` but ensures the directory exists.\n\n Args:\n appname (str): the name of the application\n *args: any other subdirectories may be specified\n\n SeeAlso:\n get_app_cache_dir\n\n Example:\n >>> import ubelt as ub\n >>> dpath = ub.ensure_app_cache_dir('ubelt')\n >>> assert exists(dpath)"}
{"_id": "q_3470", "text": "Locate a command.\n\n Search your local filesystem for an executable and return the first\n matching file with executable permission.\n\n Args:\n name (str): globstr of matching filename\n\n multi (bool): if True return all matches instead of just the first.\n Defaults to False.\n\n path (str or Iterable[PathLike]): overrides the system PATH variable.\n\n Returns:\n PathLike or List[PathLike] or None: returns matching executable(s).\n\n SeeAlso:\n shutil.which - which is available in Python 3.3+.\n\n Notes:\n This is essentially the `which` UNIX command\n\n References:\n https://stackoverflow.com/questions/377017/test-if-executable-exists-in-python/377028#377028\n https://docs.python.org/dev/library/shutil.html#shutil.which\n\n Example:\n >>> find_exe('ls')\n >>> find_exe('ping')\n >>> assert find_exe('which') == find_exe(find_exe('which'))\n >>> find_exe('which', multi=True)\n >>> find_exe('ping', multi=True)\n >>> find_exe('cmake', multi=True)\n >>> find_exe('nvcc', multi=True)\n >>> find_exe('noexist', multi=True)\n\n Example:\n >>> assert not find_exe('noexist', multi=False)\n >>> assert find_exe('ping', multi=False)\n >>> assert not find_exe('noexist', multi=True)\n >>> assert find_exe('ping', multi=True)\n\n Benchmark:\n >>> # xdoctest: +IGNORE_WANT\n >>> import ubelt as ub\n >>> import shutil\n >>> for timer in ub.Timerit(100, bestof=10, label='ub.find_exe'):\n >>> ub.find_exe('which')\n >>> for timer in ub.Timerit(100, bestof=10, label='shutil.which'):\n >>> shutil.which('which')\n Timed best=58.71 \u00b5s, mean=59.64 \u00b1 0.96 \u00b5s for ub.find_exe\n Timed best=72.75 \u00b5s, mean=73.07 \u00b1 0.22 \u00b5s for shutil.which"}
{"_id": "q_3471", "text": "Returns the user's home directory.\n If `username` is None, this is the directory for the current user.\n\n Args:\n username (str): name of a user on the system\n\n Returns:\n PathLike: userhome_dpath: path to the home directory\n\n Example:\n >>> import getpass\n >>> username = getpass.getuser()\n >>> assert userhome() == expanduser('~')\n >>> assert userhome(username) == expanduser('~')"}
{"_id": "q_3472", "text": "Inverse of `os.path.expanduser`\n\n Args:\n path (PathLike): path in system file structure\n home (str): symbol used to replace the home path. Defaults to '~', but\n you might want to use '$HOME' or '%USERPROFILE%' instead.\n\n Returns:\n PathLike: path: shortened path replacing the home directory with a tilde\n\n CommandLine:\n xdoctest -m ubelt.util_path compressuser\n\n Example:\n >>> path = expanduser('~')\n >>> assert path != '~'\n >>> assert compressuser(path) == '~'\n >>> assert compressuser(path + '1') == path + '1'\n >>> assert compressuser(path + '/1') == join('~', '1')\n >>> assert compressuser(path + '/1', '$HOME') == join('$HOME', '1')"}
{"_id": "q_3473", "text": "Normalizes a string representation of a path and does shell-like expansion.\n\n Args:\n path (PathLike): string representation of a path\n real (bool): if True, all symbolic links are followed. (default: False)\n\n Returns:\n PathLike : normalized path\n\n Note:\n This function is similar to the composition of expanduser, expandvars,\n normpath, and (realpath if `real` else abspath). However, on windows\n backslashes are then replaced with forward slashes to offer a\n consistent unix-like experience across platforms.\n\n On windows expanduser will expand environment variables formatted as\n %name%, whereas on unix, this will not occur.\n\n CommandLine:\n python -m ubelt.util_path truepath\n\n Example:\n >>> import ubelt as ub\n >>> assert ub.truepath('~/foo') == join(ub.userhome(), 'foo')\n >>> assert ub.truepath('~/foo') == ub.truepath('~/foo/bar/..')\n >>> assert ub.truepath('~/foo', real=True) == ub.truepath('~/foo')"}
{"_id": "q_3474", "text": "Ensures that directory will exist. Creates new dir with sticky bits by\n default.\n\n Args:\n dpath (PathLike): dir to ensure. Can also be a tuple to send to join\n mode (int): octal mode of directory (default 0o1777)\n verbose (int): verbosity (default 0)\n\n Returns:\n PathLike: path: the ensured directory\n\n Notes:\n This function is not thread-safe in Python2\n\n Example:\n >>> from ubelt.util_platform import * # NOQA\n >>> import ubelt as ub\n >>> cache_dpath = ub.ensure_app_cache_dir('ubelt')\n >>> dpath = join(cache_dpath, 'ensuredir')\n >>> if exists(dpath):\n ... os.rmdir(dpath)\n >>> assert not exists(dpath)\n >>> ub.ensuredir(dpath)\n >>> assert exists(dpath)\n >>> os.rmdir(dpath)"}
{"_id": "q_3475", "text": "pip install requirements-parser\n fname='requirements.txt'"}
{"_id": "q_3476", "text": "Parse the package dependencies listed in a requirements file but strips\n specific versioning information.\n\n TODO:\n perhaps use https://github.com/davidfischer/requirements-parser instead\n\n CommandLine:\n python -c \"import setup; print(setup.parse_requirements())\""}
{"_id": "q_3477", "text": "Injects a function into an object instance as a bound method\n\n The main use case of this function is for monkey patching. While monkey\n patching is sometimes necessary it should generally be avoided. Thus, we\n simply remind the developer that there might be a better way.\n\n Args:\n self (object): instance to inject a function into\n func (func): the function to inject (must contain an arg for self)\n name (str): name of the method. optional. If not specified the name\n of the function is used.\n\n Example:\n >>> class Foo(object):\n >>> def bar(self):\n >>> return 'bar'\n >>> def baz(self):\n >>> return 'baz'\n >>> self = Foo()\n >>> assert self.bar() == 'bar'\n >>> assert not hasattr(self, 'baz')\n >>> inject_method(self, baz)\n >>> assert not hasattr(Foo, 'baz'), 'should only change one instance'\n >>> assert self.baz() == 'baz'\n >>> inject_method(self, baz, 'bar')\n >>> assert self.bar() == 'baz'"}
{"_id": "q_3478", "text": "change file timestamps\n\n Works like the touch unix utility\n\n Args:\n fpath (PathLike): name of the file\n mode (int): file permissions (python3 and unix only)\n dir_fd (file): optional directory file descriptor. If specified, fpath\n is interpreted as relative to this descriptor (python 3 only).\n verbose (int): verbosity\n **kwargs : extra args passed to `os.utime` (python 3 only).\n\n Returns:\n PathLike: path to the file\n\n References:\n https://stackoverflow.com/questions/1158076/implement-touch-using-python\n\n Example:\n >>> import ubelt as ub\n >>> dpath = ub.ensure_app_cache_dir('ubelt')\n >>> fpath = join(dpath, 'touch_file')\n >>> assert not exists(fpath)\n >>> ub.touch(fpath)\n >>> assert exists(fpath)\n >>> os.unlink(fpath)"}
{"_id": "q_3479", "text": "Removes a file or recursively removes a directory.\n If a path does not exist, then this does nothing.\n\n Args:\n path (PathLike): file or directory to remove\n verbose (bool): if True prints what is being done\n\n SeeAlso:\n send2trash - A cross-platform Python package for sending files\n to the trash instead of irreversibly deleting them.\n https://github.com/hsoft/send2trash\n\n Doctest:\n >>> import ubelt as ub\n >>> base = ub.ensure_app_cache_dir('ubelt', 'delete_test')\n >>> dpath1 = ub.ensuredir(join(base, 'dir'))\n >>> ub.ensuredir(join(base, 'dir', 'subdir'))\n >>> ub.touch(join(base, 'dir', 'to_remove1.txt'))\n >>> fpath1 = join(base, 'dir', 'subdir', 'to_remove3.txt')\n >>> fpath2 = join(base, 'dir', 'subdir', 'to_remove2.txt')\n >>> ub.touch(fpath1)\n >>> ub.touch(fpath2)\n >>> assert all(map(exists, (dpath1, fpath1, fpath2)))\n >>> ub.delete(fpath1)\n >>> assert all(map(exists, (dpath1, fpath2)))\n >>> assert not exists(fpath1)\n >>> ub.delete(dpath1)\n >>> assert not any(map(exists, (dpath1, fpath1, fpath2)))\n\n Doctest:\n >>> import ubelt as ub\n >>> dpath = ub.ensure_app_cache_dir('ubelt', 'delete_test2')\n >>> dpath1 = ub.ensuredir(join(dpath, 'dir'))\n >>> fpath1 = ub.touch(join(dpath1, 'to_remove.txt'))\n >>> assert exists(fpath1)\n >>> ub.delete(dpath)\n >>> assert not exists(fpath1)"}
{"_id": "q_3480", "text": "Joins string-ified items with separators newlines and container-braces."}
{"_id": "q_3481", "text": "Create a string representation for each item in a list."}
{"_id": "q_3482", "text": "Registers a custom formatting function with ub.repr2"}
{"_id": "q_3483", "text": "Returns an appropriate function to format `data` if one has been\n registered."}
{"_id": "q_3484", "text": "Convert a string-based key into a hasher class\n\n Notes:\n In terms of speed on 64bit systems, sha1 is the fastest followed by md5\n and sha512. The slowest algorithm is sha256. If xxhash is installed\n the fastest algorithm is xxh64.\n\n Example:\n >>> assert _rectify_hasher(NoParam) is DEFAULT_HASHER\n >>> assert _rectify_hasher('sha1') is hashlib.sha1\n >>> assert _rectify_hasher('sha256') is hashlib.sha256\n >>> assert _rectify_hasher('sha512') is hashlib.sha512\n >>> assert _rectify_hasher('md5') is hashlib.md5\n >>> assert _rectify_hasher(hashlib.sha1) is hashlib.sha1\n >>> assert _rectify_hasher(hashlib.sha1())().name == 'sha1'\n >>> import pytest\n >>> assert pytest.raises(KeyError, _rectify_hasher, '42')\n >>> #assert pytest.raises(TypeError, _rectify_hasher, object)\n >>> if xxhash:\n >>> assert _rectify_hasher('xxh64') is xxhash.xxh64\n >>> assert _rectify_hasher('xxh32') is xxhash.xxh32"}
{"_id": "q_3485", "text": "transforms base shorthand into the full list representation\n\n Example:\n >>> assert _rectify_base(NoParam) is DEFAULT_ALPHABET\n >>> assert _rectify_base('hex') is _ALPHABET_16\n >>> assert _rectify_base('abc') is _ALPHABET_26\n >>> assert _rectify_base(10) is _ALPHABET_10\n >>> assert _rectify_base(['1', '2']) == ['1', '2']\n >>> import pytest\n >>> assert pytest.raises(TypeError, _rectify_base, 'uselist')"}
{"_id": "q_3486", "text": "Extracts the sequence of bytes that would be hashed by hash_data\n\n Example:\n >>> data = [2, (3, 4)]\n >>> result1 = (b''.join(_hashable_sequence(data, types=False)))\n >>> result2 = (b''.join(_hashable_sequence(data, types=True)))\n >>> assert result1 == b'_[_\\x02_,__[_\\x03_,_\\x04_,__]__]_'\n >>> assert result2 == b'_[_INT\\x02_,__[_INT\\x03_,_INT\\x04_,__]__]_'"}
{"_id": "q_3487", "text": "Converts `data` into a byte representation and calls update on the hasher\n `hashlib.HASH` algorithm.\n\n Args:\n hasher (HASH): instance of a hashlib algorithm\n data (object): ordered data with structure\n types (bool): include type prefixes in the hash\n\n Example:\n >>> hasher = hashlib.sha512()\n >>> data = [1, 2, ['a', 2, 'c']]\n >>> _update_hasher(hasher, data)\n >>> print(hasher.hexdigest()[0:8])\n e2c67675\n\n 2ba8d82b"}
{"_id": "q_3488", "text": "Packs a long hexstr into a shorter length string with a larger base.\n\n Args:\n hexstr (str): string of hexadecimal symbols to convert\n base (list): symbols of the conversion base\n\n Example:\n >>> print(_convert_hexstr_base('ffffffff', _ALPHABET_26))\n nxmrlxv\n >>> print(_convert_hexstr_base('0', _ALPHABET_26))\n 0\n >>> print(_convert_hexstr_base('-ffffffff', _ALPHABET_26))\n -nxmrlxv\n >>> print(_convert_hexstr_base('aafffff1', _ALPHABET_16))\n aafffff1\n\n Sympy:\n >>> import sympy as sy\n >>> # Determine the length savings with lossless conversion\n >>> consts = dict(hexbase=16, hexlen=256, baselen=27)\n >>> symbols = sy.symbols('hexbase, hexlen, baselen, newlen')\n >>> hexbase, hexlen, baselen, newlen = symbols\n >>> eqn = sy.Eq(16 ** hexlen, baselen ** newlen)\n >>> newlen_ans = sy.solve(eqn, newlen)[0].subs(consts).evalf()\n >>> print('newlen_ans = %r' % (newlen_ans,))\n >>> # for a 26 char base we can get 216\n >>> print('Required length for lossless conversion len2 = %r' % (len2,))\n >>> def info(base, len):\n ... bits = base ** len\n ... print('base = %r' % (base,))\n ... print('len = %r' % (len,))\n ... print('bits = %r' % (bits,))\n >>> info(16, 256)\n >>> info(27, 16)\n >>> info(27, 64)\n >>> info(27, 216)"}
{"_id": "q_3489", "text": "Registers a function to generate a hash for data of the appropriate\n types. This can be used to register custom classes. Internally this is\n used to define how to hash non-builtin objects like ndarrays and uuids.\n\n The registered function should return a tuple of bytes. First a small\n prefix hinting at the data type, and second the raw bytes that can be\n hashed.\n\n Args:\n hash_types (class or tuple of classes):\n\n Returns:\n func: closure to be used as the decorator\n\n Example:\n >>> # xdoctest: +SKIP\n >>> # Skip this doctest because we don't want tests to modify\n >>> # the global state.\n >>> import ubelt as ub\n >>> import pytest\n >>> class MyType(object):\n ... def __init__(self, id):\n ... self.id = id\n >>> data = MyType(1)\n >>> # Custom types won't work with ub.hash_data by default\n >>> with pytest.raises(TypeError):\n ... ub.hash_data(data)\n >>> # You can register your functions with ubelt's internal\n >>> # hashable_extension registry.\n >>> @ub.util_hash._HASHABLE_EXTENSIONS.register(MyType)\n >>> def hash_my_type(data):\n ... return b'mytype', six.b(ub.hash_data(data.id))\n >>> # TODO: allow hash_data to take a new instance of\n >>> # HashableExtensions, so we don't have to modify the global\n >>> # ubelt state when we run tests.\n >>> my_instance = MyType(1)\n >>> ub.hash_data(my_instance)"}
{"_id": "q_3490", "text": "Returns an appropriate function to hash `data` if one has been\n registered.\n\n Raises:\n TypeError : if data has no registered hash methods\n\n Example:\n >>> import ubelt as ub\n >>> import pytest\n >>> if not ub.modname_to_modpath('numpy'):\n ... raise pytest.skip('numpy is optional')\n >>> self = HashableExtensions()\n >>> self._register_numpy_extensions()\n >>> self._register_builtin_class_extensions()\n\n >>> import numpy as np\n >>> data = np.array([1, 2, 3])\n >>> self.lookup(data[0])\n\n >>> class Foo(object):\n >>> def __init__(f):\n >>> f.attr = 1\n >>> data = Foo()\n >>> assert pytest.raises(TypeError, self.lookup, data)\n\n >>> # If ub.hash_data doesn't support your object,\n >>> # then you can register it.\n >>> @self.register(Foo)\n >>> def _hashfoo(data):\n >>> return b'FOO', data.attr\n >>> func = self.lookup(data)\n >>> assert func(data)[1] == 1\n\n >>> data = uuid.uuid4()\n >>> self.lookup(data)"}
{"_id": "q_3491", "text": "Numpy extensions are builtin"}
{"_id": "q_3492", "text": "Register hashing extensions for a selection of classes included in\n python stdlib.\n\n Example:\n >>> data = uuid.UUID('7e9d206b-dc02-4240-8bdb-fffe858121d0')\n >>> print(hash_data(data, base='abc', hasher='sha512', types=True)[0:8])\n cryarepd\n >>> data = OrderedDict([('a', 1), ('b', 2), ('c', [1, 2, 3]),\n >>> (4, OrderedDict())])\n >>> print(hash_data(data, base='abc', hasher='sha512', types=True)[0:8])\n qjspicvv"}
{"_id": "q_3493", "text": "Reads output from a process in a separate thread"}
{"_id": "q_3494", "text": "make an iso8601 timestamp\n\n Args:\n method (str): type of timestamp\n\n Example:\n >>> stamp = timestamp()\n >>> print('stamp = {!r}'.format(stamp))\n stamp = ...-...-...T..."}
{"_id": "q_3495", "text": "Imports a module via its path\n\n Args:\n modpath (PathLike): path to the module on disk or within a zipfile.\n\n Returns:\n module: the imported module\n\n References:\n https://stackoverflow.com/questions/67631/import-module-given-path\n\n Notes:\n If the module is part of a package, the package will be imported first.\n These modules may cause problems when reloading via IPython magic\n\n This can import a module from within a zipfile. To do this modpath\n should specify the path to the zipfile and the path to the module\n within that zipfile separated by a colon or pathsep.\n E.g. `/path/to/archive.zip:mymodule.py`\n\n Warning:\n It is best to use this with paths that will not conflict with\n previously existing modules.\n\n If the modpath conflicts with a previously existing module name, and\n the target module does imports of its own relative to this conflicting\n path, the module that was loaded first will win.\n\n For example if you try to import '/foo/bar/pkg/mod.py' from the folder\n structure:\n - foo/\n +- bar/\n +- pkg/\n + __init__.py\n |- mod.py\n |- helper.py\n\n If there exists another module named `pkg` already in sys.modules\n and mod.py does something like `from . import helper`, Python will\n assume helper belongs to the `pkg` module already in sys.modules.\n This can cause a NameError or worse --- an incorrect helper module.\n\n Example:\n >>> import xdoctest\n >>> modpath = xdoctest.__file__\n >>> module = import_module_from_path(modpath)\n >>> assert module is xdoctest\n\n Example:\n >>> # Test importing a module from within a zipfile\n >>> import zipfile\n >>> from xdoctest import utils\n >>> from os.path import join, expanduser\n >>> dpath = expanduser('~/.cache/xdoctest')\n >>> dpath = utils.ensuredir(dpath)\n >>> #dpath = utils.TempDir().ensure()\n >>> # Write to an external module named bar\n >>> external_modpath = join(dpath, 'bar.py')\n >>> open(external_modpath, 'w').write('testvar = 1')\n >>> internal = 'folder/bar.py'\n >>> # Move the external bar module into a zipfile\n >>> zippath = join(dpath, 'myzip.zip')\n >>> with zipfile.ZipFile(zippath, 'w') as myzip:\n >>> myzip.write(external_modpath, internal)\n >>> # Import the bar module from within the zipfile\n >>> modpath = zippath + ':' + internal\n >>> modpath = zippath + os.path.sep + internal\n >>> module = import_module_from_path(modpath)\n >>> assert module.__name__ == os.path.normpath('folder/bar')\n >>> assert module.testvar == 1\n\n Doctest:\n >>> import pytest\n >>> with pytest.raises(IOError):\n >>> import_module_from_path('does-not-exist')\n >>> with pytest.raises(IOError):\n >>> import_module_from_path('does-not-exist.zip/')"}
{"_id": "q_3496", "text": "syspath version of modname_to_modpath\n\n Args:\n modname (str): name of module to find\n sys_path (List[PathLike], default=None):\n if specified overrides `sys.path`\n exclude (List[PathLike], default=None):\n list of directory paths. if specified prevents these directories\n from being searched.\n\n Notes:\n This is much slower than the pkgutil mechanisms.\n\n CommandLine:\n python -m xdoctest.static_analysis _syspath_modname_to_modpath\n\n Example:\n >>> print(_syspath_modname_to_modpath('xdoctest.static_analysis'))\n ...static_analysis.py\n >>> print(_syspath_modname_to_modpath('xdoctest'))\n ...xdoctest\n >>> print(_syspath_modname_to_modpath('_ctypes'))\n ..._ctypes...\n >>> assert _syspath_modname_to_modpath('xdoctest', sys_path=[]) is None\n >>> assert _syspath_modname_to_modpath('xdoctest.static_analysis', sys_path=[]) is None\n >>> assert _syspath_modname_to_modpath('_ctypes', sys_path=[]) is None\n >>> assert _syspath_modname_to_modpath('this', sys_path=[]) is None\n\n Example:\n >>> # test what happens when the module is not visible in the path\n >>> modname = 'xdoctest.static_analysis'\n >>> modpath = _syspath_modname_to_modpath(modname)\n >>> exclude = [split_modpath(modpath)[0]]\n >>> found = _syspath_modname_to_modpath(modname, exclude=exclude)\n >>> # this only works if installed in dev mode, pypi fails\n >>> assert found is None, 'should not have found {}'.format(found)"}
{"_id": "q_3497", "text": "Finds the path to a python module from its name.\n\n Determines the path to a python module without directly importing it\n\n Converts the name of a module (__name__) to the path (__file__) where it is\n located without importing the module. Returns None if the module does not\n exist.\n\n Args:\n modname (str): module name\n hide_init (bool): if False, __init__.py will be returned for packages\n hide_main (bool): if False, and hide_init is True, __main__.py will be\n returned for packages, if it exists.\n sys_path (list): if specified overrides `sys.path` (default None)\n\n Returns:\n str: modpath - path to the module, or None if it doesn't exist\n\n CommandLine:\n python -m xdoctest.static_analysis modname_to_modpath:0\n pytest /home/joncrall/code/xdoctest/xdoctest/static_analysis.py::modname_to_modpath:0\n\n Example:\n >>> modname = 'xdoctest.__main__'\n >>> modpath = modname_to_modpath(modname, hide_main=False)\n >>> assert modpath.endswith('__main__.py')\n >>> modname = 'xdoctest'\n >>> modpath = modname_to_modpath(modname, hide_init=False)\n >>> assert modpath.endswith('__init__.py')\n >>> modpath = basename(modname_to_modpath('_ctypes'))\n >>> assert 'ctypes' in modpath"}
{"_id": "q_3498", "text": "Determines importable name from file path\n\n Converts the path to a module (__file__) to the importable python name\n (__name__) without importing the module.\n\n The filename is converted to a module name, and parent directories are\n recursively included until a directory without an __init__.py file is\n encountered.\n\n Args:\n modpath (str): module filepath\n hide_init (bool): removes the __init__ suffix (default True)\n hide_main (bool): removes the __main__ suffix (default False)\n check (bool): if False, does not raise an error if modpath is a dir\n and does not contain an __init__ file.\n relativeto (str, optional): if specified, all checks are ignored and\n this is considered the path to the root module.\n\n Returns:\n str: modname\n\n Raises:\n ValueError: if check is True and the path does not exist\n\n CommandLine:\n xdoctest -m xdoctest.static_analysis modpath_to_modname\n\n Example:\n >>> from xdoctest import static_analysis\n >>> modpath = static_analysis.__file__.replace('.pyc', '.py')\n >>> modname = modpath_to_modname(modpath)\n >>> assert modname == 'xdoctest.static_analysis'\n\n Example:\n >>> import xdoctest\n >>> assert modpath_to_modname(xdoctest.__file__.replace('.pyc', '.py')) == 'xdoctest'\n >>> assert modpath_to_modname(dirname(xdoctest.__file__.replace('.pyc', '.py'))) == 'xdoctest'\n\n Example:\n >>> modpath = modname_to_modpath('_ctypes')\n >>> modname = modpath_to_modname(modpath)\n >>> assert modname == '_ctypes'"}
{"_id": "q_3499", "text": "Determines if a key is specified on the command line\n\n Args:\n key (str or tuple): string or tuple of strings. Each key should be\n prefixed with two hyphens (i.e. `--`)\n argv (Optional[list]): overrides `sys.argv` if specified\n\n Returns:\n bool: flag : True if the key (or any of the keys) was specified\n\n Example:\n >>> import ubelt as ub\n >>> argv = ['--spam', '--eggs', 'foo']\n >>> assert ub.argflag('--eggs', argv=argv) is True\n >>> assert ub.argflag('--ans', argv=argv) is False\n >>> assert ub.argflag('foo', argv=argv) is True\n >>> assert ub.argflag(('bar', '--spam'), argv=argv) is True"}
{"_id": "q_3500", "text": "Horizontally concatenates strings preserving indentation\n\n Concatenates a list of objects ensuring that the next item in the list is\n all the way to the right of any previous items.\n\n Args:\n args (List[str]): strings to concatenate\n sep (str): separator (defaults to '')\n\n CommandLine:\n python -m ubelt.util_str hzcat\n\n Example1:\n >>> import ubelt as ub\n >>> B = ub.repr2([[1, 2], [3, 457]], nl=1, cbr=True, trailsep=False)\n >>> C = ub.repr2([[5, 6], [7, 8]], nl=1, cbr=True, trailsep=False)\n >>> args = ['A = ', B, ' * ', C]\n >>> print(ub.hzcat(args))\n A = [[1, 2], * [[5, 6],\n [3, 457]] [7, 8]]\n\n Example2:\n >>> from ubelt.util_str import *\n >>> import ubelt as ub\n >>> import unicodedata\n >>> aa = unicodedata.normalize('NFD', '\u00e1') # a unicode char with len2\n >>> B = ub.repr2([['\u03b8', aa], [aa, aa, aa]], nl=1, si=True, cbr=True, trailsep=False)\n >>> C = ub.repr2([[5, 6], [7, '\u03b8']], nl=1, si=True, cbr=True, trailsep=False)\n >>> args = ['A', '=', B, '*', C]\n >>> print(ub.hzcat(args, sep='\uff5c'))\n A\uff5c=\uff5c[[\u03b8, \u00e1], \uff5c*\uff5c[[5, 6],\n \uff5c \uff5c [\u00e1, \u00e1, \u00e1]]\uff5c \uff5c [7, \u03b8]]"}
{"_id": "q_3501", "text": "Create a symbolic link.\n\n This will work on linux or windows, however windows does have some corner\n cases. For more details see notes in `ubelt._win32_links`.\n\n Args:\n path (PathLike): path to real file or directory\n link_path (PathLike): path to desired location for symlink\n overwrite (bool): overwrite existing symlinks.\n This will not overwrite real files on systems with proper symlinks.\n However, on older versions of windows junctions are\n indistinguishable from real files, so we cannot make this\n guarantee. (default = False)\n verbose (int): verbosity level (default=0)\n\n Returns:\n PathLike: link path\n\n CommandLine:\n python -m ubelt.util_links symlink:0\n\n Example:\n >>> import ubelt as ub\n >>> dpath = ub.ensure_app_cache_dir('ubelt', 'test_symlink0')\n >>> real_path = join(dpath, 'real_file.txt')\n >>> link_path = join(dpath, 'link_file.txt')\n >>> [ub.delete(p) for p in [real_path, link_path]]\n >>> ub.writeto(real_path, 'foo')\n >>> result = symlink(real_path, link_path)\n >>> assert ub.readfrom(result) == 'foo'\n >>> [ub.delete(p) for p in [real_path, link_path]]\n\n Example:\n >>> import ubelt as ub\n >>> from os.path import dirname\n >>> dpath = ub.ensure_app_cache_dir('ubelt', 'test_symlink1')\n >>> ub.delete(dpath)\n >>> ub.ensuredir(dpath)\n >>> _dirstats(dpath)\n >>> real_dpath = ub.ensuredir((dpath, 'real_dpath'))\n >>> link_dpath = ub.augpath(real_dpath, base='link_dpath')\n >>> real_path = join(dpath, 'afile.txt')\n >>> link_path = join(link_dpath, 'afile.txt')\n >>> [ub.delete(p) for p in [real_path, link_path]]\n >>> ub.writeto(real_path, 'foo')\n >>> result = symlink(real_dpath, link_dpath)\n >>> assert ub.readfrom(link_path) == 'foo', 'read should be same'\n >>> ub.writeto(link_path, 'bar')\n >>> _dirstats(dpath)\n >>> assert ub.readfrom(link_path) == 'bar', 'very bad bar'\n >>> assert ub.readfrom(real_path) == 'bar', 'changing link did not change real'\n >>> ub.writeto(real_path, 'baz')\n >>> _dirstats(dpath)\n >>> assert ub.readfrom(real_path) == 'baz', 'very bad baz'\n >>> assert ub.readfrom(link_path) == 'baz', 'changing real did not change link'\n >>> ub.delete(link_dpath, verbose=1)\n >>> _dirstats(dpath)\n >>> assert not exists(link_dpath), 'link should not exist'\n >>> assert exists(real_path), 'real path should exist'\n >>> _dirstats(dpath)\n >>> ub.delete(dpath, verbose=1)\n >>> _dirstats(dpath)\n >>> assert not exists(real_path)"}
{"_id": "q_3502", "text": "Transforms function args into a key that can be used by the cache\n\n CommandLine:\n xdoctest -m ubelt.util_memoize _make_signature_key\n\n Example:\n >>> args = (4, [1, 2])\n >>> kwargs = {'a': 'b'}\n >>> key = _make_signature_key(args, kwargs)\n >>> print('key = {!r}'.format(key))\n >>> # Some mutable types cannot be handled by ub.hash_data\n >>> import pytest\n >>> import six\n >>> if six.PY2:\n >>> import collections as abc\n >>> else:\n >>> from collections import abc\n >>> with pytest.raises(TypeError):\n >>> _make_signature_key((4, [1, 2], {1: 2, 'a': 'b'}), kwargs={})\n >>> class Dummy(abc.MutableSet):\n >>> def __contains__(self, item): return None\n >>> def __iter__(self): return iter([])\n >>> def __len__(self): return 0\n >>> def add(self, item, loc): return None\n >>> def discard(self, item): return None\n >>> with pytest.raises(TypeError):\n >>> _make_signature_key((Dummy(),), kwargs={})"}
{"_id": "q_3503", "text": "Colorizes text a single color using ANSI tags.\n\n Args:\n text (str): text to colorize\n color (str): may be one of the following: yellow, blink, lightgray,\n underline, darkyellow, blue, darkblue, faint, fuchsia, black,\n white, red, brown, turquoise, bold, darkred, darkgreen, reset,\n standout, darkteal, darkgray, overline, purple, green, teal, fuscia\n\n Returns:\n str: text : colorized text.\n If pygments is not installed plain text is returned.\n\n CommandLine:\n python -c \"import pygments.console; print(sorted(pygments.console.codes.keys()))\"\n python -m ubelt.util_colors color_text\n\n Example:\n >>> text = 'raw text'\n >>> import pytest\n >>> import ubelt as ub\n >>> if ub.modname_to_modpath('pygments'):\n >>> # Colors text only if pygments is installed\n >>> assert color_text(text, 'red') == '\\x1b[31;01mraw text\\x1b[39;49;00m'\n >>> assert color_text(text, None) == 'raw text'\n >>> else:\n >>> # Otherwise text passes through unchanged\n >>> assert color_text(text, 'red') == 'raw text'\n >>> assert color_text(text, None) == 'raw text'"}
{"_id": "q_3504", "text": "Generates unique items in the order they appear.\n\n Args:\n items (Iterable): list of items\n\n key (Callable, optional): custom normalization function.\n If specified returns items where `key(item)` is unique.\n\n Yields:\n object: a unique item from the input sequence\n\n CommandLine:\n python -m utool.util_list --exec-unique_ordered\n\n Example:\n >>> import ubelt as ub\n >>> items = [4, 6, 6, 0, 6, 1, 0, 2, 2, 1]\n >>> unique_items = list(ub.unique(items))\n >>> assert unique_items == [4, 6, 0, 1, 2]\n\n Example:\n >>> import ubelt as ub\n >>> items = ['A', 'a', 'b', 'B', 'C', 'c', 'D', 'e', 'D', 'E']\n >>> unique_items = list(ub.unique(items, key=six.text_type.lower))\n >>> assert unique_items == ['A', 'b', 'C', 'D', 'e']\n >>> unique_items = list(ub.unique(items))\n >>> assert unique_items == ['A', 'a', 'b', 'B', 'C', 'c', 'D', 'e', 'E']"}
{"_id": "q_3505", "text": "Returns indices corresponding to the first instance of each unique item.\n\n Args:\n items (Sequence): indexable collection of items\n\n key (Callable, optional): custom normalization function.\n If specified returns items where `key(item)` is unique.\n\n Yields:\n int : indices of the unique items\n\n Example:\n >>> items = [0, 2, 5, 1, 1, 0, 2, 4]\n >>> indices = list(argunique(items))\n >>> assert indices == [0, 1, 2, 3, 7]\n >>> indices = list(argunique(items, key=lambda x: x % 2 == 0))\n >>> assert indices == [0, 2]"}
{"_id": "q_3506", "text": "Returns a list of booleans corresponding to the first instance of each\n unique item.\n\n Args:\n items (Sequence): indexable collection of items\n\n key (Callable, optional): custom normalization function.\n If specified returns items where `key(item)` is unique.\n\n Returns:\n List[bool] : flags the items that are unique\n\n Example:\n >>> import ubelt as ub\n >>> items = [0, 2, 1, 1, 0, 9, 2]\n >>> flags = unique_flags(items)\n >>> assert flags == [True, True, True, False, False, True, False]\n >>> flags = unique_flags(items, key=lambda x: x % 2 == 0)\n >>> assert flags == [True, False, True, False, False, False, False]"}
{"_id": "q_3507", "text": "Constructs a list of booleans where an item is True if its position is in\n `indices` otherwise it is False.\n\n Args:\n indices (list): list of integer indices\n\n maxval (int): length of the returned list. If not specified\n this is inferred from `indices`\n\n Note:\n In the future the arg `maxval` may change its name to `shape`\n\n Returns:\n list: mask: list of booleans. mask[idx] is True if idx in indices\n\n Example:\n >>> import ubelt as ub\n >>> indices = [0, 1, 4]\n >>> mask = ub.boolmask(indices, maxval=6)\n >>> assert mask == [True, True, False, False, True, False]\n >>> mask = ub.boolmask(indices)\n >>> assert mask == [True, True, False, False, True]"}
{"_id": "q_3508", "text": "Determine if all items in a sequence are the same\n\n Args:\n iterable (Iterable): items to determine if they are all the same\n\n eq (Callable, optional): function to determine equality\n (default: operator.eq)\n\n Example:\n >>> allsame([1, 1, 1, 1])\n True\n >>> allsame([])\n True\n >>> allsame([0, 1])\n False\n >>> iterable = iter([0, 1, 1, 1])\n >>> next(iterable)\n >>> allsame(iterable)\n True\n >>> allsame(range(10))\n False\n >>> allsame(range(10), lambda a, b: True)\n True"}
{"_id": "q_3509", "text": "Returns the indices that would sort a indexable object.\n\n This is similar to `numpy.argsort`, but it is written in pure python and\n works on both lists and dictionaries.\n\n Args:\n indexable (Iterable or Mapping): indexable to sort by\n\n key (Callable, optional): customizes the ordering of the indexable\n\n reverse (bool, optional): if True returns in descending order\n\n Returns:\n list: indices: list of indices such that sorts the indexable\n\n Example:\n >>> import ubelt as ub\n >>> # argsort works on dicts by returning keys\n >>> dict_ = {'a': 3, 'b': 2, 'c': 100}\n >>> indices = ub.argsort(dict_)\n >>> assert list(ub.take(dict_, indices)) == sorted(dict_.values())\n >>> # argsort works on lists by returning indices\n >>> indexable = [100, 2, 432, 10]\n >>> indices = ub.argsort(indexable)\n >>> assert list(ub.take(indexable, indices)) == sorted(indexable)\n >>> # Can use iterators, but be careful. It exhausts them.\n >>> indexable = reversed(range(100))\n >>> indices = ub.argsort(indexable)\n >>> assert indices[0] == 99\n >>> # Can use key just like sorted\n >>> indexable = [[0, 1, 2], [3, 4], [5]]\n >>> indices = ub.argsort(indexable, key=len)\n >>> assert indices == [2, 1, 0]\n >>> # Can use reverse just like sorted\n >>> indexable = [0, 2, 1]\n >>> indices = ub.argsort(indexable, reverse=True)\n >>> assert indices == [1, 2, 0]"}
{"_id": "q_3510", "text": "Zips elementwise pairs between items1 and items2 into a dictionary. Values\n from items2 can be broadcast onto items1.\n\n Args:\n items1 (Iterable): full sequence\n items2 (Iterable): can either be a sequence of one item or a sequence\n of equal length to `items1`\n cls (Type[dict]): dictionary type to use. Defaults to dict, but could\n be ordered dict instead.\n\n Returns:\n dict: similar to dict(zip(items1, items2))\n\n Example:\n >>> assert dzip([1, 2, 3], [4]) == {1: 4, 2: 4, 3: 4}\n >>> assert dzip([1, 2, 3], [4, 4, 4]) == {1: 4, 2: 4, 3: 4}\n >>> assert dzip([], [4]) == {}"}
{"_id": "q_3511", "text": "Groups a list of items by group id.\n\n Args:\n items (Iterable): a list of items to group\n groupids (Iterable or Callable): a corresponding list of item groupids\n or a function mapping an item to a groupid.\n\n Returns:\n dict: groupid_to_items: maps a groupid to a list of items\n\n CommandLine:\n python -m ubelt.util_dict group_items\n\n Example:\n >>> import ubelt as ub\n >>> items = ['ham', 'jam', 'spam', 'eggs', 'cheese', 'banana']\n >>> groupids = ['protein', 'fruit', 'protein', 'protein', 'dairy', 'fruit']\n >>> groupid_to_items = ub.group_items(items, groupids)\n >>> print(ub.repr2(groupid_to_items, nl=0))\n {'dairy': ['cheese'], 'fruit': ['jam', 'banana'], 'protein': ['ham', 'spam', 'eggs']}"}
{"_id": "q_3512", "text": "Builds a histogram of items, counting the number of times each item appears\n in the input.\n\n Args:\n item_list (Iterable): hashable items (usually containing duplicates)\n weight_list (Iterable): corresponding weights for each item\n ordered (bool): if True the result is ordered by frequency\n labels (Iterable, optional): expected labels (default None)\n Allows this function to pre-initialize the histogram.\n If specified the frequency of each label is initialized to\n zero and item_list can only contain items specified in labels.\n\n Returns:\n dict : dictionary where the keys are items in item_list, and the values\n are the number of times the item appears in item_list.\n\n CommandLine:\n python -m ubelt.util_dict dict_hist\n\n Example:\n >>> import ubelt as ub\n >>> item_list = [1, 2, 39, 900, 1232, 900, 1232, 2, 2, 2, 900]\n >>> hist = ub.dict_hist(item_list)\n >>> print(ub.repr2(hist, nl=0))\n {1: 1, 2: 4, 39: 1, 900: 3, 1232: 2}\n\n Example:\n >>> import ubelt as ub\n >>> item_list = [1, 2, 39, 900, 1232, 900, 1232, 2, 2, 2, 900]\n >>> hist1 = ub.dict_hist(item_list)\n >>> hist2 = ub.dict_hist(item_list, ordered=True)\n >>> try:\n >>> hist3 = ub.dict_hist(item_list, labels=[])\n >>> except KeyError:\n >>> pass\n >>> else:\n >>> raise AssertionError('expected key error')\n >>> #result = ub.repr2(hist_)\n >>> weight_list = [1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]\n >>> hist4 = ub.dict_hist(item_list, weight_list=weight_list)\n >>> print(ub.repr2(hist1, nl=0))\n {1: 1, 2: 4, 39: 1, 900: 3, 1232: 2}\n >>> print(ub.repr2(hist4, nl=0))\n {1: 1, 2: 4, 39: 1, 900: 1, 1232: 0}"}
{"_id": "q_3513", "text": "Find all duplicate items in a list.\n\n Search for all items that appear at least `k` times and return a mapping\n from each (k)-duplicate item to the positions it appeared in.\n\n Args:\n items (Iterable): hashable items possibly containing duplicates\n k (int): only return items that appear at least `k` times (default=2)\n key (Callable, optional): Returns indices where `key(items[i])`\n maps to a particular value at least k times.\n\n Returns:\n dict: maps each duplicate item to the indices at which it appears\n\n CommandLine:\n python -m ubelt.util_dict find_duplicates\n\n Example:\n >>> import ubelt as ub\n >>> items = [0, 0, 1, 2, 3, 3, 0, 12, 2, 9]\n >>> duplicates = ub.find_duplicates(items)\n >>> print('items = %r' % (items,))\n >>> print('duplicates = %r' % (duplicates,))\n >>> assert duplicates == {0: [0, 1, 6], 2: [3, 8], 3: [4, 5]}\n >>> assert ub.find_duplicates(items, 3) == {0: [0, 1, 6]}\n\n Example:\n >>> import ubelt as ub\n >>> items = [0, 0, 1, 2, 3, 3, 0, 12, 2, 9]\n >>> # note: k can be 0\n >>> duplicates = ub.find_duplicates(items, k=0)\n >>> print(ub.repr2(duplicates, nl=0))\n {0: [0, 1, 6], 1: [2], 2: [3, 8], 3: [4, 5], 9: [9], 12: [7]}\n\n Example:\n >>> import ubelt as ub\n >>> items = [10, 11, 12, 13, 14, 15, 16]\n >>> duplicates = ub.find_duplicates(items, key=lambda x: x // 2)\n >>> print(ub.repr2(duplicates, nl=0))\n {5: [0, 1], 6: [2, 3], 7: [4, 5]}"}
{"_id": "q_3514", "text": "Constructs a dictionary that contains keys common between all inputs.\n The returned values will only belong to the first dictionary.\n\n Args:\n *args : a sequence of dictionaries (or sets of keys)\n\n Returns:\n Dict | OrderedDict :\n OrderedDict if the first argument is an OrderedDict, otherwise dict\n\n Notes:\n This function can be used as an alternative to `dict_subset` where any\n key not in the dictionary is ignored. See the following example:\n\n >>> dict_isect({'a': 1, 'b': 2, 'c': 3}, ['a', 'c', 'd'])\n {'a': 1, 'c': 3}\n\n Example:\n >>> dict_isect({'a': 1, 'b': 1}, {'b': 2, 'c': 2})\n {'b': 1}\n >>> dict_isect(odict([('a', 1), ('b', 2)]), odict([('c', 3)]))\n OrderedDict()\n >>> dict_isect()\n {}"}
{"_id": "q_3515", "text": "applies a function to each of the values in a dictionary\n\n Args:\n func (callable): a function or indexable object\n dict_ (dict): a dictionary\n\n Returns:\n newdict: transformed dictionary\n\n CommandLine:\n python -m ubelt.util_dict map_vals\n\n Example:\n >>> import ubelt as ub\n >>> dict_ = {'a': [1, 2, 3], 'b': []}\n >>> func = len\n >>> newdict = ub.map_vals(func, dict_)\n >>> assert newdict == {'a': 3, 'b': 0}\n >>> print(newdict)\n >>> # Can also use indexables as `func`\n >>> dict_ = {'a': 0, 'b': 1}\n >>> func = [42, 21]\n >>> newdict = ub.map_vals(func, dict_)\n >>> assert newdict == {'a': 42, 'b': 21}\n >>> print(newdict)"}
{"_id": "q_3516", "text": "Swaps the keys and values in a dictionary.\n\n Args:\n dict_ (dict): dictionary to invert\n unique_vals (bool): if False, inverted keys are returned in a set.\n The default is True.\n\n Returns:\n dict: inverted\n\n Notes:\n The values must be hashable.\n\n If the original dictionary contains duplicate values, then only one of\n the corresponding keys will be returned and the others will be\n discarded. This can be prevented by setting `unique_vals=False`,\n causing the inverted keys to be returned in a set.\n\n CommandLine:\n python -m ubelt.util_dict invert_dict\n\n Example:\n >>> import ubelt as ub\n >>> dict_ = {'a': 1, 'b': 2}\n >>> inverted = ub.invert_dict(dict_)\n >>> assert inverted == {1: 'a', 2: 'b'}\n\n Example:\n >>> import ubelt as ub\n >>> dict_ = ub.odict([(2, 'a'), (1, 'b'), (0, 'c'), (None, 'd')])\n >>> inverted = ub.invert_dict(dict_)\n >>> assert list(inverted.keys())[0] == 'a'\n\n Example:\n >>> import ubelt as ub\n >>> dict_ = {'a': 1, 'b': 0, 'c': 0, 'd': 0, 'f': 2}\n >>> inverted = ub.invert_dict(dict_, unique_vals=False)\n >>> assert inverted == {0: {'b', 'c', 'd'}, 1: {'a'}, 2: {'f'}}"}
{"_id": "q_3517", "text": "Recursively casts an AutoDict into a regular dictionary. All nested\n AutoDict values are also converted.\n\n Returns:\n dict: a copy of this dict without autovivification\n\n Example:\n >>> from ubelt.util_dict import AutoDict\n >>> auto = AutoDict()\n >>> auto[1] = 1\n >>> auto['n1'] = AutoDict()\n >>> static = auto.to_dict()\n >>> assert not isinstance(static, AutoDict)\n >>> assert not isinstance(static['n1'], AutoDict)"}
{"_id": "q_3518", "text": "Perform a real symbolic link if possible. However, on most versions of\n windows you need special privileges to create a real symlink. Therefore, we\n try to create a symlink, but if that fails we fall back to using a junction.\n\n AFAIK, the main difference between symlinks and junctions is that symlinks\n can reference relative or absolute paths, whereas junctions always\n reference absolute paths. Not 100% on this though. Windows is weird.\n\n Note that junctions will not register as links via `islink`, but I\n believe real symlinks will."}
{"_id": "q_3519", "text": "Creates a real symlink. This will only work in versions greater than Windows\n Vista. Creating real symlinks requires admin permissions or at least\n specially enabled symlink permissions. On Windows 10 enabling developer\n mode should give you these permissions."}
{"_id": "q_3520", "text": "Determines if a path is a win32 junction\n\n CommandLine:\n python -m ubelt._win32_links _win32_is_junction\n\n Example:\n >>> # xdoc: +REQUIRES(WIN32)\n >>> import ubelt as ub\n >>> root = ub.ensure_app_cache_dir('ubelt', 'win32_junction')\n >>> ub.delete(root)\n >>> ub.ensuredir(root)\n >>> dpath = join(root, 'dpath')\n >>> djunc = join(root, 'djunc')\n >>> ub.ensuredir(dpath)\n >>> _win32_junction(dpath, djunc)\n >>> assert _win32_is_junction(djunc) is True\n >>> assert _win32_is_junction(dpath) is False\n >>> assert _win32_is_junction('notafile') is False"}
{"_id": "q_3521", "text": "Returns the location that the junction points, raises ValueError if path is\n not a junction.\n\n CommandLine:\n python -m ubelt._win32_links _win32_read_junction\n\n Example:\n >>> # xdoc: +REQUIRES(WIN32)\n >>> import ubelt as ub\n >>> root = ub.ensure_app_cache_dir('ubelt', 'win32_junction')\n >>> ub.delete(root)\n >>> ub.ensuredir(root)\n >>> dpath = join(root, 'dpath')\n >>> djunc = join(root, 'djunc')\n >>> ub.ensuredir(dpath)\n >>> _win32_junction(dpath, djunc)\n >>> path = djunc\n >>> pointed = _win32_read_junction(path)\n >>> print('pointed = {!r}'.format(pointed))"}
{"_id": "q_3522", "text": "rmtree for win32 that treats junctions like directory symlinks.\n The junction removal portion may not be safe under race conditions.\n\n There is a known issue that prevents shutil.rmtree from\n deleting directories with junctions.\n https://bugs.python.org/issue31226"}
{"_id": "q_3523", "text": "Test if two hard links point to the same location\n\n CommandLine:\n python -m ubelt._win32_links _win32_is_hardlinked\n\n Example:\n >>> # xdoc: +REQUIRES(WIN32)\n >>> import ubelt as ub\n >>> root = ub.ensure_app_cache_dir('ubelt', 'win32_hardlink')\n >>> ub.delete(root)\n >>> ub.ensuredir(root)\n >>> fpath1 = join(root, 'fpath1')\n >>> fpath2 = join(root, 'fpath2')\n >>> ub.touch(fpath1)\n >>> ub.touch(fpath2)\n >>> fjunc1 = _win32_junction(fpath1, join(root, 'fjunc1'))\n >>> fjunc2 = _win32_junction(fpath2, join(root, 'fjunc2'))\n >>> assert _win32_is_hardlinked(fjunc1, fpath1)\n >>> assert _win32_is_hardlinked(fjunc2, fpath2)\n >>> assert not _win32_is_hardlinked(fjunc2, fpath1)\n >>> assert not _win32_is_hardlinked(fjunc1, fpath2)"}
{"_id": "q_3524", "text": "Using the windows cmd shell to get information about a directory"}
{"_id": "q_3525", "text": "Returns generators that double with each value returned\n Config includes optional start value"}
{"_id": "q_3526", "text": "Retrieve the adjacency matrix from the nx.DiGraph or numpy array."}
{"_id": "q_3527", "text": "Apply causal discovery on observational data using CCDr.\n\n Args:\n data (pandas.DataFrame): DataFrame containing the data\n\n Returns:\n networkx.DiGraph: Solution given by the CCDR algorithm."}
{"_id": "q_3528", "text": "Save data to the csv format by default, in two separate files.\n\n Optional keyword arguments can be passed to pandas."}
{"_id": "q_3529", "text": "Launch an R script, starting from a template and replacing text in the file\n before execution.\n\n Args:\n template (str): path to the template of the R script\n arguments (dict): Arguments that replace the template's placeholders\n output_function (function): Function to execute **after** the execution\n of the R script, and its output is returned by this function. Used\n traditionally as a function to retrieve the results of the\n execution.\n verbose (bool): Sets the verbosity of the R subprocess.\n debug (bool): If True, the generated scripts are not deleted.\n\n Return:\n Returns the output of the ``output_function`` if not `None`\n else `True` or `False` depending on whether the execution was\n successful."}
{"_id": "q_3530", "text": "Execute a subprocess to check the package's availability.\n\n Args:\n package (str): Name of the package to be tested.\n\n Returns:\n bool: `True` if the package is available, `False` otherwise"}
{"_id": "q_3531", "text": "Perform the independence test.\n\n :param a: input data\n :param b: input data\n :type a: array-like, numerical data\n :type b: array-like, numerical data\n :return: dependency statistic (1=Highly dependent, 0=Not dependent)\n :rtype: float"}
{"_id": "q_3532", "text": "Evaluate a graph, taking into account the hardware."}
{"_id": "q_3533", "text": "Generate according to the topological order of the graph."}
{"_id": "q_3534", "text": "Use CGNN to create a graph from scratch. All the possible structures\n are tested, which leads to a super-exponential complexity. It would be\n preferable to start from a graph skeleton for large graphs.\n\n Args:\n data (pandas.DataFrame): Observational data on which causal\n discovery has to be performed.\n Returns:\n networkx.DiGraph: Solution given by CGNN."}
{"_id": "q_3535", "text": "Modify and improve a directed acyclic graph solution using CGNN.\n\n Args:\n data (pandas.DataFrame): Observational data on which causal\n discovery has to be performed.\n dag (nx.DiGraph): Graph that provides the initial solution,\n on which the CGNN algorithm will be applied.\n alg (str): Exploration heuristic to use, among [\"HC\", \"HCr\",\n \"tabu\", \"EHC\"]\n Returns:\n networkx.DiGraph: Solution given by CGNN."}
{"_id": "q_3536", "text": "Orient the undirected graph using GNN and apply CGNN to improve the graph.\n\n Args:\n data (pandas.DataFrame): Observational data on which causal\n discovery has to be performed.\n umg (nx.Graph): Graph that provides the skeleton, on which the GNN\n then the CGNN algorithm will be applied.\n alg (str): Exploration heuristic to use, among [\"HC\", \"HCr\",\n \"tabu\", \"EHC\"]\n Returns:\n networkx.DiGraph: Solution given by CGNN.\n \n .. note::\n GNN (``cdt.causality.pairwise.GNN``) is first used to orient the\n undirected graph and output a DAG before applying CGNN."}
{"_id": "q_3537", "text": "Evaluate the entropy of the input variable.\n\n :param x: input variable 1D\n :return: entropy of x"}
{"_id": "q_3538", "text": "Evaluate a pair using the IGCI model.\n\n :param a: Input variable 1D\n :param b: Input variable 1D\n :param kwargs: {refMeasure: Scaling method (gaussian, integral or None),\n estimator: method used to evaluate the pairs (entropy or integral)}\n :return: Return value of the IGCI model; >0 if a->b, <0 otherwise"}
{"_id": "q_3539", "text": "Train the model.\n\n Args:\n x_tr (pd.DataFrame): CEPC format dataframe containing the pairs\n y_tr (pd.DataFrame or np.ndarray): labels associated to the pairs"}
{"_id": "q_3540", "text": "Predict the causal score using a trained RCC model\n\n Args:\n x (numpy.array or pandas.DataFrame or pandas.Series): First variable or dataset.\n args (numpy.array): second variable (optional depending on the 1st argument).\n\n Returns:\n float: Causation score (Value : 1 if a->b and -1 if b->a)"}
{"_id": "q_3541", "text": "For one variable, predict its neighbours.\n\n Args:\n df_features (pandas.DataFrame):\n df_target (pandas.Series):\n nh (int): number of hidden units\n idx (int): (optional) for printing purposes\n dropout (float): probability of dropout (between 0 and 1)\n activation_function (torch.nn.Module): activation function of the NN\n lr (float): learning rate of Adam\n l1 (float): L1 penalization coefficient\n batch_size (int): batch size, defaults to full-batch\n train_epochs (int): number of train epochs\n test_epochs (int): number of test epochs\n device (str): cuda or cpu device (defaults to ``cdt.SETTINGS.default_device``)\n verbose (bool): verbosity (defaults to ``cdt.SETTINGS.verbose``)\n nb_runs (int): number of bootstrap runs\n\n Returns:\n list: scores of each feature relatively to the target"}
{"_id": "q_3542", "text": "Build a skeleton using a pairwise independence criterion.\n\n Args:\n data (pandas.DataFrame): Raw data table\n\n Returns:\n networkx.Graph: Undirected graph representing the skeleton."}
{"_id": "q_3543", "text": "Run GIES on an undirected graph.\n\n Args:\n data (pandas.DataFrame): DataFrame containing the data\n graph (networkx.Graph): Skeleton of the graph to orient\n\n Returns:\n networkx.DiGraph: Solution given by the GIES algorithm."}
{"_id": "q_3544", "text": "Feed-forward through the network."}
{"_id": "q_3545", "text": "Execute SAM on a dataset given a skeleton or not.\n\n Args:\n data (pandas.DataFrame): Observational data for estimation of causal relationships by SAM\n skeleton (numpy.ndarray): A priori knowledge about the causal relationships as an adjacency matrix.\n Can be fed either directed or undirected links.\n nruns (int): Number of runs to be made for causal estimation.\n Recommended: >=12 for optimal performance.\n njobs (int): Numbers of jobs to be run in Parallel.\n Recommended: 1 if no GPU available, 2*number of GPUs else.\n gpus (int): Number of available GPUs for the algorithm.\n verbose (bool): verbose mode\n plot (bool): Plot losses interactively. Not recommended if nruns>1\n plot_generated_pair (bool): plots a generated pair interactively. Not recommended if nruns>1\n Returns:\n networkx.DiGraph: Graph estimated by SAM, where A[i,j] is the term\n of the ith variable for the jth generator."}
{"_id": "q_3546", "text": "Infer causal relationships between 2 variables using the CDS statistic\n\n Args:\n a (numpy.ndarray): Variable 1\n b (numpy.ndarray): Variable 2\n\n Returns:\n float: Causation score (Value : 1 if a->b and -1 if b->a)"}
{"_id": "q_3547", "text": "Prediction method for pairwise causal inference using the ANM model.\n\n Args:\n a (numpy.ndarray): Variable 1\n b (numpy.ndarray): Variable 2\n\n Returns:\n float: Causation score (Value : 1 if a->b and -1 if b->a)"}
{"_id": "q_3548", "text": "Compute the fitness score of the ANM model in the x->y direction.\n\n Args:\n a (numpy.ndarray): Variable seen as cause\n b (numpy.ndarray): Variable seen as effect\n\n Returns:\n float: ANM fit score"}
{"_id": "q_3549", "text": "Predict the graph skeleton.\n\n Args:\n data (pandas.DataFrame): observational data\n alpha (float): regularization parameter\n max_iter (int): maximum number of iterations\n\n Returns:\n networkx.Graph: Graph skeleton"}
{"_id": "q_3550", "text": "Autoset GPU parameters using the CUDA_VISIBLE_DEVICES variable.\n\n Return the default config if the variable is not set.\n :param set_var: Variable to set. Must be of type ConfigSettings"}
{"_id": "q_3551", "text": "Generic predict method; chooses the subfunction best suited to the input.\n\n Depending on the type of `x` and of `*args`, this function proceeds to execute\n different functions in the following priority order:\n\n 1. If ``args[0]`` is a ``networkx.(Di)Graph``, then ``self.orient_graph`` is executed.\n 2. If ``args[0]`` exists, then ``self.predict_proba`` is executed.\n 3. If ``x`` is a ``pandas.DataFrame``, then ``self.predict_dataset`` is executed.\n 4. If ``x`` is a ``pandas.Series``, then ``self.predict_proba`` is executed.\n\n Args:\n x (numpy.array or pandas.DataFrame or pandas.Series): First variable or dataset.\n args (numpy.array or networkx.Graph): graph or second variable.\n\n Returns:\n pandas.DataFrame or networkx.DiGraph: predictions output"}
{"_id": "q_3552", "text": "Generic dataset prediction function.\n\n Runs the score independently on all pairs.\n\n Args:\n x (pandas.DataFrame): a CEPC format Dataframe.\n kwargs (dict): additional arguments for the algorithms\n\n Returns:\n pandas.DataFrame: a Dataframe with the predictions."}
{"_id": "q_3553", "text": "Run the algorithm on a directed graph.\n\n Args:\n data (pandas.DataFrame): DataFrame containing the data\n graph (networkx.DiGraph): Skeleton of the graph to orient\n\n Returns:\n networkx.DiGraph: Solution on the given skeleton.\n\n .. warning::\n The algorithm is run on the skeleton of the given graph."}
{"_id": "q_3554", "text": "Compute the gaussian kernel on a 1D vector."}
{"_id": "q_3555", "text": "Init a noise variable."}
{"_id": "q_3556", "text": "Runs Jarfo independently on all pairs.\n\n Args:\n x (pandas.DataFrame): a CEPC format Dataframe.\n kwargs (dict): additional arguments for the algorithms\n\n Returns:\n pandas.DataFrame: a Dataframe with the predictions."}
{"_id": "q_3557", "text": "Use Jarfo to predict the causal direction of a pair of vars.\n\n Args:\n a (numpy.ndarray): Variable 1\n b (numpy.ndarray): Variable 2\n idx (int): (optional) index number for printing purposes\n\n Returns:\n float: Causation score (Value : 1 if a->b and -1 if b->a)"}
{"_id": "q_3558", "text": "Implementation of the ARACNE algorithm.\n\n Args:\n mat (numpy.ndarray): matrix; if it is a square matrix, the program assumes\n it is a relevance matrix where mat(i,j) represents the similarity content\n between nodes i and j. Elements of the matrix should be\n non-negative.\n\n Returns:\n mat_nd (numpy.ndarray): Output deconvolved matrix (direct dependency matrix). Its components\n represent direct edge weights of observed interactions.\n\n .. note::\n Ref: ARACNE: An Algorithm for the Reconstruction of Gene Regulatory Networks in a Mammalian Cellular Context\n Adam A Margolin, Ilya Nemenman, Katia Basso, Chris Wiggins, Gustavo Stolovitzky, Riccardo Dalla Favera and Andrea Califano\n DOI: https://doi.org/10.1186/1471-2105-7-S1-S7"}
{"_id": "q_3559", "text": "Apply deconvolution to a networkx graph.\n\n Args:\n g (networkx.Graph): Graph to apply deconvolution to\n alg (str): Algorithm to use ('aracne', 'clr', 'nd')\n kwargs (dict): extra options for algorithms\n\n Returns:\n networkx.Graph: graph with undirected links removed."}
{"_id": "q_3560", "text": "Input a graph and output a DAG.\n\n The heuristic is to reverse the edge with the lowest score of the cycle\n if possible, else remove it.\n\n Args:\n g (networkx.DiGraph): Graph to modify to output a DAG\n\n Returns:\n networkx.DiGraph: DAG made out of the input graph."}
{"_id": "q_3561", "text": "Returns the weighted average and standard deviation.\n\n values, weights -- numpy ndarrays with the same shape."}
{"_id": "q_3562", "text": "Pass data through the net structure.\n\n :param x: input data: shape (:,1)\n :type x: torch.Variable\n :return: output of the shallow net\n :rtype: torch.Variable"}
{"_id": "q_3563", "text": "Run the GNN on a pair x,y of FloatTensor data."}
{"_id": "q_3564", "text": "Run GNN multiple times to estimate the causal direction.\n\n Args:\n a (np.ndarray): Variable 1\n b (np.ndarray): Variable 2\n nb_runs (int): number of runs to execute per batch (before testing for significance with t-test).\n nb_jobs (int): number of runs to execute in parallel. (Initialized with ``cdt.SETTINGS.NB_JOBS``)\n gpu (bool): use gpu (Initialized with ``cdt.SETTINGS.GPU``)\n idx (int): (optional) index of the pair, for printing purposes\n verbose (bool): verbosity (Initialized with ``cdt.SETTINGS.verbose``)\n ttest_threshold (float): threshold to stop the bootstraps before ``nb_max_runs`` if the difference is significant\n nb_max_runs (int): Max number of bootstraps\n train_epochs (int): Number of epochs during which the model is going to be trained\n test_epochs (int): Number of epochs during which the model is going to be tested\n\n Returns:\n float: Causal score of the pair (Value : 1 if a->b and -1 if b->a)"}
{"_id": "q_3565", "text": "Passing data through the network.\n\n :param x: 2d tensor containing both (x,y) Variables\n :return: output of the net"}
{"_id": "q_3566", "text": "Updates all rows that match the filter."}
{"_id": "q_3567", "text": "Creates multiple new records in the database.\n\n This allows specifying custom conflict behavior using .on_conflict().\n If no special behavior was specified, this uses the normal Django create(..)\n\n Arguments:\n rows:\n An array of dictionaries, where each dictionary\n describes the fields to insert.\n\n return_model (default: False):\n If model instances should be returned rather than\n just dicts.\n\n Returns:\n A list of either the dicts of the rows inserted, including the pk or\n the models of the rows inserted with defaults for any fields not specified"}
{"_id": "q_3568", "text": "Creates a new record in the database.\n\n This allows specifying custom conflict behavior using .on_conflict().\n If no special behavior was specified, this uses the normal Django create(..)\n\n Arguments:\n fields:\n The fields of the row to create.\n\n Returns:\n The primary key of the record that was created."}
{"_id": "q_3569", "text": "Creates a new record in the database and then gets\n the entire row.\n\n This allows specifying custom conflict behavior using .on_conflict().\n If no special behavior was specified, this uses the normal Django create(..)\n\n Arguments:\n fields:\n The fields of the row to create.\n\n Returns:\n The model instance representing the row that was created."}
{"_id": "q_3570", "text": "Verifies whether this field is going to modify something\n on its own.\n\n \"Magical\" means that a field modifies the field value\n during the pre_save.\n\n Arguments:\n model_instance:\n The model instance the field is defined on.\n\n field:\n The field to check for whether it is\n magical.\n\n is_insert:\n Whether to pretend this is an insert.\n\n Returns:\n True when this field modifies something."}
{"_id": "q_3571", "text": "Gets the fields to use in an upsert.\n\n This involves some nice magic. We'll split the fields into\n a group of \"insert fields\" and \"update fields\":\n\n INSERT INTO bla (\"val1\", \"val2\") ON CONFLICT DO UPDATE SET val1 = EXCLUDED.val1\n\n ^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^\n insert_fields update_fields\n\n Often, fields appear in both lists. But, for example,\n a :see:DateTime field with `auto_now_add=True` set, will\n only appear in \"insert_fields\", since it won't be set\n on existing rows.\n\n Other than that, the user specifies a list of fields\n in the upsert() call. That might not be all fields. The\n user could decide to leave out optional fields. If we\n end up doing an update, we don't want to overwrite\n those non-specified fields.\n\n We cannot just take the list of fields the user\n specifies, because as mentioned, some fields\n make modifications to the model on their own.\n\n We'll have to detect which fields make modifications\n and include them in the list of insert/update fields."}
{"_id": "q_3572", "text": "When a model gets created or updated."}
{"_id": "q_3573", "text": "Compiles the HStore value into SQL.\n\n Compiles expressions contained in the values\n of HStore entries as well.\n\n Given a dictionary like:\n\n dict(key1='val1', key2='val2')\n\n The resulting SQL will be:\n\n hstore(hstore('key1', 'val1'), hstore('key2', 'val2'))"}
{"_id": "q_3574", "text": "Adds an extra condition to an existing JOIN.\n\n This allows you to, for example, do:\n\n INNER JOIN othertable ON (mytable.id = othertable.other_id AND [extra conditions])\n\n This does not work if nothing else in your query already generates the\n initial join in the first place."}
{"_id": "q_3575", "text": "Sets the values to be used in this query.\n\n Insert fields are fields that are definitely\n going to be inserted, and if an existing row\n is found, are going to be overwritten with the\n specified value.\n\n Update fields are fields that should be overwritten\n in case an update takes place rather than an insert.\n If we're dealing with a INSERT, these will not be used.\n\n Arguments:\n objs:\n The objects to apply this query to.\n\n insert_fields:\n The fields to use in the INSERT statement\n\n update_fields:\n The fields to only use in the UPDATE statement."}
{"_id": "q_3576", "text": "Creates a REQUIRED CONSTRAINT for the specified hstore key."}
{"_id": "q_3577", "text": "Renames an existing REQUIRED CONSTRAINT for the specified\n hstore key."}
{"_id": "q_3578", "text": "Drops a REQUIRED CONSTRAINT for the specified hstore key."}
{"_id": "q_3579", "text": "Gets the name for a CONSTRAINT that applies\n to a single hstore key.\n\n Arguments:\n table:\n The name of the table the field is\n a part of.\n\n field:\n The hstore field to create a\n UNIQUE INDEX for.\n\n key:\n The name of the hstore key\n to create the name for.\n\n Returns:\n The name for the UNIQUE index."}
{"_id": "q_3580", "text": "Creates the actual SQL used when applying the migration."}
{"_id": "q_3581", "text": "Run to prepare the configured database.\n\n This is where we enable the `hstore` extension\n if it wasn't enabled yet."}
{"_id": "q_3582", "text": "Override the base class so it doesn't cast all values\n to strings.\n\n psqlextra supports expressions in hstore fields, so casting\n all values to strings is a bad idea."}
{"_id": "q_3583", "text": "Rewrites a formed SQL INSERT query to include\n the ON CONFLICT clause.\n\n Arguments:\n sql:\n The SQL INSERT query to rewrite.\n\n params:\n The parameters passed to the query.\n\n returning:\n What to put in the `RETURNING` clause\n of the resulting query.\n\n Returns:\n A tuple of the rewritten SQL query and new params."}
{"_id": "q_3584", "text": "Rewrites a formed SQL INSERT query to include\n the ON CONFLICT DO UPDATE clause."}
{"_id": "q_3585", "text": "Rewrites a formed SQL INSERT query to include\n the ON CONFLICT DO NOTHING clause."}
{"_id": "q_3586", "text": "Builds the `conflict_target` for the ON CONFLICT\n clause."}
{"_id": "q_3587", "text": "Formats a field's name for usage in SQL.\n\n Arguments:\n field_name:\n The field name to format.\n\n Returns:\n The specified field name formatted for\n usage in SQL."}
{"_id": "q_3588", "text": "Formats a field's value for usage in SQL.\n\n Arguments:\n field_name:\n The name of the field to format\n the value of.\n\n Returns:\n The field's value formatted for usage\n in SQL."}
{"_id": "q_3589", "text": "Creates a UNIQUE constraint for the specified hstore keys."}
{"_id": "q_3590", "text": "Renames an existing UNIQUE constraint for the specified\n hstore keys."}
{"_id": "q_3591", "text": "Gets the name for a UNIQUE INDEX that applies\n to one or more keys in a hstore field.\n\n Arguments:\n table:\n The name of the table the field is\n a part of.\n\n field:\n The hstore field to create a\n UNIQUE INDEX for.\n\n key:\n The name of the hstore key\n to create the name for.\n\n This can also be a tuple\n of multiple names.\n\n Returns:\n The name for the UNIQUE index."}
{"_id": "q_3592", "text": "Iterates over the keys marked as \"unique\"\n in the specified field.\n\n Arguments:\n field:\n The field of which key's to\n iterate over."}
{"_id": "q_3593", "text": "Adds an extra condition to this join.\n\n Arguments:\n field:\n The field that the condition will apply to.\n\n value:\n The value to compare."}
{"_id": "q_3594", "text": "Compiles this JOIN into a SQL string."}
{"_id": "q_3595", "text": "Approximate the 95% confidence interval for Student's T distribution.\n\n Given the degrees of freedom, returns an approximation to the 95%\n confidence interval for the Student's T distribution.\n\n Args:\n df: An integer, the number of degrees of freedom.\n\n Returns:\n A float."}
{"_id": "q_3596", "text": "Find the pooled sample variance for two samples.\n\n Args:\n sample1: one sample.\n sample2: the other sample.\n\n Returns:\n Pooled sample variance, as a float."}
{"_id": "q_3597", "text": "Determine whether two samples differ significantly.\n\n This uses a Student's two-sample, two-tailed t-test with alpha=0.95.\n\n Args:\n sample1: one sample.\n sample2: the other sample.\n\n Returns:\n (significant, t_score) where significant is a bool indicating whether\n the two samples differ significantly; t_score is the score from the\n two-sample T test."}
{"_id": "q_3598", "text": "Return a topological sorting of nodes in a graph.\n\n roots - list of root nodes to search from\n getParents - function which returns the parents of a given node"}
{"_id": "q_3599", "text": "N-Queens solver.\n\n Args:\n queen_count: the number of queens to solve for. This is also the\n board size.\n\n Yields:\n Solutions to the problem. Each yielded value is looks like\n (3, 8, 2, 1, 4, ..., 6) where each number is the column position for the\n queen, and the index into the tuple indicates the row."}
{"_id": "q_3600", "text": "uct tree search"}
{"_id": "q_3601", "text": "random play until both players pass"}
{"_id": "q_3602", "text": "Filters out benchmarks not supported by both Pythons.\n\n Args:\n benchmarks: a set() of benchmark names\n bench_funcs: dict mapping benchmark names to functions\n python: the interpreter commands (as lists)\n\n Returns:\n The filtered set of benchmark names"}
{"_id": "q_3603", "text": "Recursively expand benchmark names.\n\n Args:\n bm_name: string naming a benchmark or benchmark group.\n\n Yields:\n Names of actual benchmarks, with all group names fully expanded."}
{"_id": "q_3604", "text": "Initialize the strings we'll run the regexes against.\n\n The strings used in the benchmark are prefixed and suffixed by\n strings that are repeated n times.\n\n The sequence n_values contains the values for n.\n If n_values is None the values of n from the original benchmark\n are used.\n\n The generated list of strings is cached in the string_tables\n variable, which is indexed by n.\n\n Returns:\n A list of string prefix/suffix lengths."}
{"_id": "q_3605", "text": "Returns the domain of the B-Spline"}
{"_id": "q_3606", "text": "Fetch the messages.\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3607", "text": "Fetch the entries from the url.\n\n The method retrieves all entries from an RSS url\n\n :param category: the category of items to fetch\n\n :returns: a generator of entries"}
{"_id": "q_3608", "text": "Fetch the entries\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3609", "text": "Returns the RSS argument parser."}
{"_id": "q_3610", "text": "Fetch the bugs from the repository.\n\n The method retrieves, from a Bugzilla repository, the bugs\n updated since the given date.\n\n :param category: the category of items to fetch\n :param from_date: obtain bugs updated since this date\n\n :returns: a generator of bugs"}
{"_id": "q_3611", "text": "Get issue notes"}
{"_id": "q_3612", "text": "Get merge versions"}
{"_id": "q_3613", "text": "Get the merge requests from pagination"}
{"_id": "q_3614", "text": "Get the merge versions from pagination"}
{"_id": "q_3615", "text": "Get the notes from pagination"}
{"_id": "q_3616", "text": "Get emojis of a note"}
{"_id": "q_3617", "text": "Initialize rate limit information"}
{"_id": "q_3618", "text": "Returns the GitLab argument parser."}
{"_id": "q_3619", "text": "Fetch the messages from the channel.\n\n This method fetches the messages stored on the channel that were\n sent since the given date.\n\n :param category: the category of items to fetch\n :param from_date: obtain messages sent since this date\n\n :returns: a generator of messages"}
{"_id": "q_3620", "text": "Fetch the number of members in a conversation, which is a supertype for public and\n private channels, DMs and group DMs.\n\n :param conversation: the ID of the conversation"}
{"_id": "q_3621", "text": "Fetch user info."}
{"_id": "q_3622", "text": "Returns the Slack argument parser."}
{"_id": "q_3623", "text": "Extracts and converts the update time from a Bugzilla item.\n\n The timestamp is extracted from the 'delta_ts' field. This date is\n converted to UNIX timestamp format. Because Bugzilla servers ignore\n the timezone on HTTP requests, it will be ignored during the\n conversion, too.\n\n :param item: item generated by the backend\n\n :returns: a UNIX timestamp"}
{"_id": "q_3624", "text": "Parse a Bugzilla bugs details XML stream.\n\n This method returns a generator which parses the given XML,\n producing an iterator of dictionaries. Each dictionary stores\n the information related to a parsed bug.\n\n If the given XML is invalid or does not contain any bug, the\n method will raise a ParseError exception.\n\n :param raw_xml: XML string to parse\n\n :returns: a generator of parsed bugs\n\n :raises ParseError: raised when an error occurs parsing\n the given XML stream"}
{"_id": "q_3625", "text": "Logout from the server."}
{"_id": "q_3626", "text": "Get metadata information in XML format."}
{"_id": "q_3627", "text": "Get a summary of bugs in CSV format.\n\n :param from_date: retrieve bugs that were updated from that date"}
{"_id": "q_3628", "text": "Get the information of a list of bugs in XML format.\n\n :param bug_ids: list of bug identifiers"}
{"_id": "q_3629", "text": "Get the activity of a bug in HTML format.\n\n :param bug_id: bug identifier"}
{"_id": "q_3630", "text": "Fetch the events from the server.\n\n This method fetches those events of a group stored on the server\n that were updated since the given date. Comments and rsvps data\n are included within each event.\n\n :param category: the category of items to fetch\n :param from_date: obtain events updated since this date\n :param to_date: obtain events updated before this date\n :param filter_classified: remove classified fields from the resulting items\n\n :returns: a generator of events"}
{"_id": "q_3631", "text": "Fetch the events\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3632", "text": "Fetch the rsvps of a given event."}
{"_id": "q_3633", "text": "Fetch an Askbot HTML question body.\n\n The method fetches the HTML question, retrieving the\n question body of the received question item\n\n :param question: item with the question itself\n\n :returns: a list of HTML page/s for the question"}
{"_id": "q_3634", "text": "Fetch all the comments of an Askbot question and answers.\n\n The method fetches the list of every comment existing in a question and\n its answers.\n\n :param question: item with the question itself\n\n :returns: a list of comments with the ids as hashes"}
{"_id": "q_3635", "text": "Build an Askbot HTML response.\n\n The method puts together all the information regarding a question\n\n :param html_question: array of HTML raw pages\n :param question: question object from the API\n :param comments: list of comments to add\n\n :returns: a dict item with the parsed question information"}
{"_id": "q_3636", "text": "Retrieve a question page using the API.\n\n :param page: page to retrieve"}
{"_id": "q_3637", "text": "Retrieve a raw HTML question and all it's information.\n\n :param question_id: question identifier\n :param page: page to retrieve"}
{"_id": "q_3638", "text": "Retrieve a list of comments by a given id.\n\n :param object_id: object identifier"}
{"_id": "q_3639", "text": "Parse the question info container of a given HTML question.\n\n The method parses the information available in the question information\n container. The container can have up to 2 elements: the first one\n contains the information related with the user who generated the question\n and the date (if any). The second one contains the date of the update,\n and the user who updated it (if not the same who generated the question).\n\n :param html_question: raw HTML question element\n\n :returns: an object with the parsed information"}
{"_id": "q_3640", "text": "Parse the answers of a given HTML question.\n\n The method parses the answers related with a given HTML question,\n as well as all the comments related to the answer.\n\n :param html_question: raw HTML question element\n\n :returns: a list with the answers"}
{"_id": "q_3641", "text": "Parse number of answer pages to paginate over them.\n\n :param html_question: raw HTML question element\n\n :returns: an integer with the number of pages"}
{"_id": "q_3642", "text": "Parse the user information of a given HTML container.\n\n The method parses all the available user information in the container.\n If the class \"user-info\" exists, the method will get all the available\n information in the container. If not, if a class \"tip\" exists, it will be\n a wiki post with no user associated. Else, it can be an empty container.\n\n :param update_info: beautiful soup answer container element\n\n :returns: an object with the parsed information"}
{"_id": "q_3643", "text": "Specific fetch for gerrit 2.8 version.\n\n Get open and closed reviews in different queries.\n Take the newer review from both lists and iterate."}
{"_id": "q_3644", "text": "Return the Gerrit server version."}
{"_id": "q_3645", "text": "Get the reviews starting from last_item."}
{"_id": "q_3646", "text": "Execute gerrit command against the archive"}
{"_id": "q_3647", "text": "Execute gerrit command with retry if it fails"}
{"_id": "q_3648", "text": "Get data associated to an issue"}
{"_id": "q_3649", "text": "Get attachments of an issue"}
{"_id": "q_3650", "text": "Get activities on an issue"}
{"_id": "q_3651", "text": "Get data associated to a user"}
{"_id": "q_3652", "text": "Get the user data by URL"}
{"_id": "q_3653", "text": "Get the issue data by its ID"}
{"_id": "q_3654", "text": "Get a collection list of a given issue"}
{"_id": "q_3655", "text": "Build URL project"}
{"_id": "q_3656", "text": "Fetch the groupsio paginated subscriptions for a given token\n\n :param per_page: number of subscriptions per page\n\n :returns: an iterator of subscriptions"}
{"_id": "q_3657", "text": "Fetch requests from groupsio API"}
{"_id": "q_3658", "text": "Generate a UUID based on the given parameters.\n\n The UUID will be the SHA1 of the concatenation of the values\n from the list. The separator between these values is ':'.\n Each value must be a non-empty string, otherwise, the function\n will raise an exception.\n\n :param *args: list of arguments used to generate the UUID\n\n :returns: a universal unique identifier\n\n :raises ValueError: when any one of the values is not a string,\n is empty or `None`."}
{"_id": "q_3659", "text": "Fetch items from an archive manager.\n\n Generator to get the items of a category (previously fetched\n by the given backend class) from an archive manager. Only those\n items archived after the given date will be returned.\n\n The parameters needed to initialize `backend` and get the\n items are given using the `backend_args` dict parameter.\n\n :param backend_class: backend class to retrieve items\n :param backend_args: dict of arguments needed to retrieve the items\n :param manager: archive manager where the items will be retrieved\n :param category: category of the items to retrieve\n :param archived_after: return items archived after this date\n\n :returns: a generator of archived items"}
{"_id": "q_3660", "text": "Find available backends.\n\n Look for the Perceval backends and commands under `top_package`\n and its sub-packages. When `top_package` defines a namespace,\n backends under that same namespace will be found too.\n\n :param top_package: package storing backends\n\n :returns: a tuple with two dicts: one with `Backend` classes and one\n with `BackendCommand` classes"}
{"_id": "q_3661", "text": "Fetch items from the repository.\n\n The method retrieves items from a repository.\n\n To remove classified fields from the resulting items, set\n the parameter `filter_classified`. Take into account that this\n parameter is incompatible with archiving items. Raw client\n data are archived before any other process. Therefore,\n classified data are stored within the archive. To prevent\n possible data leaks or security issues when users do\n not need these fields, archiving and filtering are not\n compatible.\n\n :param category: the category of the items fetched\n :param filter_classified: remove classified fields from the resulting items\n :param kwargs: a list of other parameters (e.g., from_date, offset, etc.\n specific for each backend)\n\n :returns: a generator of items\n\n :raises BackendError: either when the category is not valid or\n 'filter_classified' and 'archive' are active at the same time."}
{"_id": "q_3662", "text": "Fetch the questions from an archive.\n\n It returns the items stored within an archive. If this method is called but\n no archive was provided, the method will raise an `ArchiveError` exception.\n\n :returns: a generator of items\n\n :raises ArchiveError: raised when an error occurs accessing an archive"}
{"_id": "q_3663", "text": "Remove classified or confidential data from an item.\n\n It removes those fields that contain data considered as classified.\n Classified fields are defined in `CLASSIFIED_FIELDS` class attribute.\n\n :param item: fields will be removed from this item\n\n :returns: the same item but with confidential data filtered"}
{"_id": "q_3664", "text": "Parse a list of arguments.\n\n Parse argument strings needed to run a backend command. The result\n will be a `argparse.Namespace` object populated with the values\n obtained after the validation of the parameters.\n\n :param args: argument strings\n\n :result: an object with the parsed values"}
{"_id": "q_3665", "text": "Activate archive arguments parsing"}
{"_id": "q_3666", "text": "Activate output arguments parsing"}
{"_id": "q_3667", "text": "Fetch and write items.\n\n This method runs the backend to fetch the items from the given\n origin. Items are converted to JSON objects and written to the\n defined output.\n\n If the `fetch-archive` parameter was given as an argument during\n the initialization of the instance, the items will be retrieved\n using the archive manager."}
{"_id": "q_3668", "text": "Initialize archive based on the parsed parameters"}
{"_id": "q_3669", "text": "Extracts the update time from a MBox item.\n\n The timestamp used is extracted from the 'Date' field in its\n several forms. This date is converted to UNIX timestamp\n format.\n\n :param item: item generated by the backend\n\n :returns: a UNIX timestamp"}
{"_id": "q_3670", "text": "Fetch and parse the messages from a mailing list"}
{"_id": "q_3671", "text": "Copy the contents of a mbox to a temporary file"}
{"_id": "q_3672", "text": "Fetch the commits\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3673", "text": "Returns the Git argument parser."}
{"_id": "q_3674", "text": "Parse the Git log stream."}
{"_id": "q_3675", "text": "Clone a Git repository.\n\n Make a bare copy of the repository stored in `uri` into `dirpath`.\n The repository can be either local or remote.\n\n :param uri: URI of the repository\n :param dirpath: directory where the repository will be cloned\n\n :returns: a `GitRepository` class having cloned the repository\n\n :raises RepositoryError: when an error occurs cloning the given\n repository"}
{"_id": "q_3676", "text": "Count the objects of a repository.\n\n The method returns the total number of objects (packed and unpacked)\n available on the repository.\n\n :raises RepositoryError: when an error occurs counting the objects\n of a repository"}
{"_id": "q_3677", "text": "Check if the repo is in a detached state.\n\n The repository is in a detached state when HEAD is not a symbolic\n reference.\n\n :returns: whether the repository is detached or not\n\n :raises RepositoryError: when an error occurs checking the state\n of the repository"}
{"_id": "q_3678", "text": "Keep the repository in sync.\n\n This method will synchronize the repository with its 'origin',\n fetching newest objects and updating references. It uses low\n level commands which allow keeping track of what has\n changed in the repository.\n\n The method also returns a list of hashes related to the new\n commits fetched during the process.\n\n :returns: list of new commits\n\n :raises RepositoryError: when an error occurs synchronizing\n the repository"}
{"_id": "q_3679", "text": "Read the commit log from the repository.\n\n The method returns the Git log of the repository using the\n following options:\n\n git log --raw --numstat --pretty=fuller --decorate=full\n --all --reverse --topo-order --parents -M -C -c\n --remotes=origin\n\n When `from_date` is given, it gets the commits equal to or newer\n than that date. This date is given in a datetime object.\n\n The list of branches is a list of strings, with the names of the\n branches to fetch. If the list of branches is empty, no commit\n is fetched. If the list of branches is None, all commits\n for all branches will be fetched.\n\n :param from_date: fetch commits newer than a specific\n date (inclusive)\n :param branches: names of branches to fetch from (default: None)\n :param encoding: encode the log using this format\n\n :returns: a generator where each item is a line from the log\n\n :raises EmptyRepositoryError: when the repository is empty and\n the action cannot be performed\n :raises RepositoryError: when an error occurs fetching the log"}
{"_id": "q_3680", "text": "Show the data of a set of commits.\n\n The method returns the output of Git show command for a\n set of commits using the following options:\n\n git show --raw --numstat --pretty=fuller --decorate=full\n --parents -M -C -c [<commit>...<commit>]\n\n When the list of commits is empty, the command will return\n data about the last commit, like the default behaviour of\n `git show`.\n\n :param commits: list of commits to show data\n :param encoding: encode the output using this format\n\n :returns: a generator where each item is a line from the show output\n\n :raises EmptyRepositoryError: when the repository is empty and\n the action cannot be performed\n :raises RepositoryError: when an error occurs fetching the show output"}
{"_id": "q_3681", "text": "Update references removing old ones."}
{"_id": "q_3682", "text": "Get the current list of local or remote refs."}
{"_id": "q_3683", "text": "Reads self.proc.stderr.\n\n Usually, this should be read in a thread, to prevent blocking\n the read from stdout if the stderr buffer is filled, and this\n function is not called because the program is busy in the\n stderr reading loop.\n\n Reads self.proc.stderr (self.proc is the subprocess running\n the git command), and reads / writes self.failed_message\n (the message sent to stderr when git fails, usually one line)."}
{"_id": "q_3684", "text": "Run a command.\n\n Execute `cmd` command in the directory set by `cwd`. Environment\n variables can be set using the `env` dictionary. The output\n data is returned as encoded bytes.\n\n Commands whose return status codes are non-zero will\n be treated as failed. Error codes considered as valid can be\n ignored by giving them in the `ignored_error_codes` list.\n\n :returns: the output of the command as encoded bytes\n\n :raises RepositoryError: when an error occurs running the command"}
{"_id": "q_3685", "text": "Fetch the tweets from the server.\n\n This method fetches tweets from the TwitterSearch API published in the last seven days.\n\n :param category: the category of items to fetch\n :param since_id: if not null, it returns results with an ID greater than the specified ID\n :param max_id: if not None, it returns results with an ID less than the specified ID\n :param geocode: if enabled, returns tweets by users located at latitude,longitude,\"mi\"|\"km\"\n :param lang: if enabled, restricts tweets to the given language, given by an ISO 639-1 code\n :param include_entities: if disabled, it excludes entities node\n :param tweets_type: type of tweets returned. Default is \u201cmixed\u201d, others are \"recent\" and \"popular\"\n\n :returns: a generator of tweets"}
{"_id": "q_3686", "text": "Fetch the tweets\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3687", "text": "Returns the Twitter argument parser."}
{"_id": "q_3688", "text": "Fetch data from Google API.\n\n The method retrieves a list of hits for some\n given keywords using the Google API.\n\n :param category: the category of items to fetch\n\n :returns: a generator of data"}
{"_id": "q_3689", "text": "Fetch Google hit items\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3690", "text": "Parse the hits returned by the Google Search API"}
{"_id": "q_3691", "text": "Get repo info about stars, watchers and forks"}
{"_id": "q_3692", "text": "Get issue reactions"}
{"_id": "q_3693", "text": "Get reactions on issue comments"}
{"_id": "q_3694", "text": "Get issue assignees"}
{"_id": "q_3695", "text": "Get pull request requested reviewers"}
{"_id": "q_3696", "text": "Get pull request commit hashes"}
{"_id": "q_3697", "text": "Get pull review comment reactions"}
{"_id": "q_3698", "text": "Get reactions of an issue"}
{"_id": "q_3699", "text": "Fetch the issues from the repository.\n\n The method retrieves, from a GitHub repository, the issues\n updated since the given date.\n\n :param from_date: obtain issues updated since this date\n\n :returns: a generator of issues"}
{"_id": "q_3700", "text": "Get pull requested reviewers"}
{"_id": "q_3701", "text": "Get pull request commits"}
{"_id": "q_3702", "text": "Get reactions of a review comment"}
{"_id": "q_3703", "text": "Get the user information and update the user cache"}
{"_id": "q_3704", "text": "Return an array with the remaining API points of all tokens"}
{"_id": "q_3705", "text": "Check if we need to switch GitHub API tokens"}
{"_id": "q_3706", "text": "Update rate limits data for the current token"}
{"_id": "q_3707", "text": "Init metadata information.\n\n Metadata is composed of basic information needed to identify\n where archived data came from and how it can be retrieved\n and built into Perceval items.\n\n :param origin: identifier of the repository\n :param backend_name: name of the backend\n :param backend_version: version of the backend\n :param category: category of the items fetched\n :param backend_params: dict representation of the fetch parameters\n\n :raises ArchiveError: when an error occurs initializing the metadata"}
{"_id": "q_3708", "text": "Store a raw item in this archive.\n\n The method will store `data` content in this archive. The unique\n identifier for that item will be generated using the rest of the\n parameters.\n\n :param uri: request URI\n :param payload: request payload\n :param headers: request headers\n :param data: data to store in this archive\n\n :raises ArchiveError: when an error occurs storing the given data"}
{"_id": "q_3709", "text": "Retrieve a raw item from the archive.\n\n The method will return the `data` content corresponding to the\n hash code derived from the given parameters.\n\n :param uri: request URI\n :param payload: request payload\n :param headers: request headers\n\n :returns: the archived data\n\n :raises ArchiveError: when an error occurs retrieving data"}
{"_id": "q_3710", "text": "Create a brand new archive.\n\n Call this method to create a new and empty archive. It will initialize\n the storage file in the path defined by `archive_path`.\n\n :param archive_path: absolute path where the archive file will be created\n\n :raises ArchiveError: when the archive file already exists"}
{"_id": "q_3711", "text": "Generate a SHA1 based on the given arguments.\n\n Hashcodes created by this method will be used as unique identifiers\n for the raw items or resources stored by this archive.\n\n :param uri: URI to the resource\n :param payload: payload of the request needed to fetch the resource\n :param headers: headers of the request needed to fetch the resource\n\n :returns: a SHA1 hash code"}
{"_id": "q_3712", "text": "Check whether the archive is valid or not.\n\n This method will check if tables were created and if they\n contain valid data."}
{"_id": "q_3713", "text": "Fetch the number of rows in a table"}
{"_id": "q_3714", "text": "Remove an archive.\n\n This method deletes from the filesystem the archive stored\n in `archive_path`.\n\n :param archive_path: path to the archive\n\n :raises ArchiveManagerError: when an error occurs removing the\n archive"}
{"_id": "q_3715", "text": "Search archives.\n\n Get the archives which store data based on the given parameters.\n These parameters define what the origin was (`origin`), how data\n was fetched (`backend_name`) and the data type ('category').\n Only those archives created on or after `archived_after` will be\n returned.\n\n The method returns a list with the file paths to those archives.\n The list is sorted by the date of creation of each archive.\n\n :param origin: data origin\n :param backend_name: backed used to fetch data\n :param category: type of the items fetched by the backend\n :param archived_after: get archives created on or after this date\n\n :returns: a list with archive names which match the search criteria"}
{"_id": "q_3716", "text": "Search archives using filters."}
{"_id": "q_3717", "text": "Check if filename is a compressed file supported by the tool.\n\n This function uses magic numbers (first four bytes) to determine\n the type of the file. Supported types are 'gz' and 'bz2'. When\n the filetype is not supported, the function returns `None`.\n\n :param filepath: path to the file\n\n :returns: 'gz' or 'bz2'; `None` if the type is not supported"}
{"_id": "q_3718", "text": "Generate a months range.\n\n Generator of months starting on `from_date` until `to_date`. Each\n returned item is a tuple of two datetime objects like in (month, month+1).\n Thus, the result will follow the sequence:\n ((fd, fd+1), (fd+1, fd+2), ..., (td-2, td-1), (td-1, td))\n\n :param from_date: generate dates starting on this month\n :param to_date: generate dates until this month\n\n :result: a generator of months range"}
{"_id": "q_3719", "text": "Convert an email message into a dictionary.\n\n This function transforms an `email.message.Message` object\n into a dictionary. Headers are stored as key:value pairs\n while the body of the message is stored inside `body` key.\n Body may have two other keys inside, 'plain', for plain body\n messages and 'html', for HTML encoded messages.\n\n The returned dictionary has the type `requests.structures.CaseInsensitiveDict`\n because the same headers with different case formats can appear in\n the same message.\n\n :param msg: email message of type `email.message.Message`\n\n :returns: dictionary of type `requests.structures.CaseInsensitiveDict`\n\n :raises ParseError: when an error occurs transforming the message\n to a dictionary"}
{"_id": "q_3720", "text": "Remove control and invalid characters from an xml stream.\n\n Looks for invalid characters and substitutes them with whitespaces.\n This solution is based on these two posts: Olemis Lang's response\n on StackOverflow (http://stackoverflow.com/questions/1707890) and\n lawlesst's on GitHub Gist (https://gist.github.com/lawlesst/4110923),\n that is based on the previous answer.\n\n :param xml: XML stream\n\n :returns: a purged XML stream"}
{"_id": "q_3721", "text": "Convert a XML stream into a dictionary.\n\n This function transforms an XML stream into a dictionary. The\n attributes are stored as single elements while child nodes are\n stored into lists. The text node is stored using the special\n key '__text__'.\n\n This code is based on Winston Ewert's solution to this problem.\n See http://codereview.stackexchange.com/questions/10400/convert-elementtree-to-dict\n for more info. The code was licensed as cc by-sa 3.0.\n\n :param raw_xml: XML stream\n\n :returns: a dict with the XML data\n\n :raises ParseError: raised when an error occurs parsing the given\n XML stream"}
{"_id": "q_3722", "text": "Parse a Redmine issues JSON stream.\n\n The method parses a JSON stream and returns a list iterator.\n Each item is a dictionary that contains the issue parsed data.\n\n :param raw_json: JSON string to parse\n\n :returns: a generator of parsed issues"}
{"_id": "q_3723", "text": "Get the information of the given issue.\n\n :param issue_id: issue identifier"}
{"_id": "q_3724", "text": "Get the information of the given user.\n\n :param user_id: user identifier"}
{"_id": "q_3725", "text": "Call to get a resource.\n\n :param method: resource to get\n :param params: dict with the HTTP parameters needed to get\n the given resource"}
{"_id": "q_3726", "text": "Fetch data from a Docker Hub repository.\n\n The method retrieves, from a repository stored in Docker Hub,\n its data which includes number of pulls, stars, description,\n among other data.\n\n :param category: the category of items to fetch\n\n :returns: a generator of data"}
{"_id": "q_3727", "text": "Fetch the Docker Hub items\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3728", "text": "Add extra information for custom fields.\n\n :param custom_fields: set of custom fields with the extra information\n :param fields: fields of the issue where to add the extra information\n\n :returns: a set of items with the extra information mapped"}
{"_id": "q_3729", "text": "Retrieve all the items from a given date.\n\n :param url: endpoint API url\n :param from_date: obtain items updated since this date\n :param expand_fields: if True, it includes the expand fields in the payload"}
{"_id": "q_3730", "text": "Retrieve all the issues from a given date.\n\n :param from_date: obtain issues updated since this date"}
{"_id": "q_3731", "text": "Retrieve all the comments of a given issue.\n\n :param issue_id: ID of the issue"}
{"_id": "q_3732", "text": "Retrieve all the fields available."}
{"_id": "q_3733", "text": "Retrieve all the questions from a given date.\n\n :param from_date: obtain questions updated since this date"}
{"_id": "q_3734", "text": "Returns the StackExchange argument parser."}
{"_id": "q_3735", "text": "Fetch the pages\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3736", "text": "Get the max date in unixtime format from reviews."}
{"_id": "q_3737", "text": "Retrieve recent pages from all namespaces starting from rccontinue."}
{"_id": "q_3738", "text": "Fetch the messages the bot can read from the server.\n\n The method retrieves, from the Telegram server, the messages\n sent with an offset equal to or greater than the given one.\n\n A list of chats, groups and channels identifiers can be set\n using the parameter `chats`. When it is set, only those\n messages sent to any of these will be returned. An empty list\n will return no messages.\n\n :param category: the category of items to fetch\n :param offset: obtain messages from this offset\n :param chats: list of chat names used to filter messages\n\n :returns: a generator of messages\n\n :raises ValueError: when `chats` is an empty list"}
{"_id": "q_3739", "text": "Check if a message can be filtered based on a list of chats.\n\n This method returns `True` when the message was sent to a chat\n of the given list. It also returns `True` when chats is `None`.\n\n :param message: Telegram message\n :param chats: list of chat, groups and channels identifiers\n\n :returns: `True` when the message can be filtered; otherwise,\n it returns `False`"}
{"_id": "q_3740", "text": "Fetch the articles\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3741", "text": "NNTP metadata.\n\n This method takes items, overriding `metadata` decorator,\n to add extra information related to NNTP.\n\n :param item: an item fetched by a backend\n :param filter_classified: sets if classified fields were filtered"}
{"_id": "q_3742", "text": "Parse a NNTP article.\n\n This method parses a NNTP article stored in a string object\n and returns a dictionary.\n\n :param raw_article: NNTP article string\n\n :returns: a dictionary of type `requests.structures.CaseInsensitiveDict`\n\n :raises ParseError: when an error is found parsing the article"}
{"_id": "q_3743", "text": "Fetch NNTP data from the server or from the archive\n\n :param method: the name of the command to execute\n :param args: the arguments required by the command"}
{"_id": "q_3744", "text": "Fetch article data\n\n :param article_id: id of the article to fetch"}
{"_id": "q_3745", "text": "Fetch data from NNTP\n\n :param method: the name of the command to execute\n :param args: the arguments required by the command"}
{"_id": "q_3746", "text": "Fetch data from the archive\n\n :param method: the name of the command to execute\n :param args: the arguments required by the command"}
{"_id": "q_3747", "text": "Create a http session and initialize the retry object."}
{"_id": "q_3748", "text": "The fetching process sleeps until the rate limit is restored, or\n raises a RateLimitError exception if the sleep_for_rate flag is disabled."}
{"_id": "q_3749", "text": "Parse a Supybot IRC stream.\n\n Returns an iterator of dicts. Each dict contains information\n about the date, type, nick and body of a single log entry.\n\n :returns: iterator of parsed lines\n\n :raises ParseError: when an invalid line is found parsing the given\n stream"}
{"_id": "q_3750", "text": "Parse timestamp section"}
{"_id": "q_3751", "text": "Parse message section"}
{"_id": "q_3752", "text": "Fetch the topics\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3753", "text": "Parse a topics page stream.\n\n The result of parsing process is a generator of tuples. Each\n tuple contains the identifier of the topic, the last date\n when it was updated and whether it is pinned or not.\n\n :param raw_json: JSON stream to parse\n\n :returns: a generator of parsed topics"}
{"_id": "q_3754", "text": "Retrieve the post with `post_id` identifier.\n\n :param post_id: identifier of the post to retrieve"}
{"_id": "q_3755", "text": "Fetch the tasks\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_3756", "text": "Parse a Phabricator tasks JSON stream.\n\n The method parses a JSON stream and returns a list iterator.\n Each item is a dictionary that contains the task parsed data.\n\n :param raw_json: JSON string to parse\n\n :returns: a generator of parsed tasks"}
{"_id": "q_3757", "text": "Parse a Phabricator users JSON stream.\n\n The method parses a JSON stream and returns a list iterator.\n Each item is a dictionary that contains the user parsed data.\n\n :param raw_json: JSON string to parse\n\n :returns: a generator of parsed users"}
{"_id": "q_3758", "text": "Retrieve tasks.\n\n :param from_date: retrieve tasks that were updated from that date;\n dates are converted to epoch time."}
{"_id": "q_3759", "text": "Retrieve tasks transactions.\n\n :param phids: list of tasks identifiers"}
{"_id": "q_3760", "text": "Extracts the identifier from a Confluence item.\n\n This identifier will be the mix of two fields because a\n historical content does not have any unique identifier.\n In this case, 'id' and 'version' values are combined because\n it should not be possible to have two equal version numbers\n for the same content. The value to return will follow the\n pattern: <content>#v<version> (e.g., 28979#v10)."}
{"_id": "q_3761", "text": "Parse the result property, extracting the value\n and unit of measure"}
{"_id": "q_3762", "text": "Return a capabilities url"}
{"_id": "q_3763", "text": "Get and parse a WFS capabilities document, returning an\n instance of WFSCapabilitiesInfoset\n\n Parameters\n ----------\n url : string\n The URL to the WFS capabilities document.\n timeout : number\n A timeout value (in seconds) for the request."}
{"_id": "q_3764", "text": "Parse a WFS capabilities document, returning an\n instance of WFSCapabilitiesInfoset\n\n string should be an XML capabilities document"}
{"_id": "q_3765", "text": "helper function to build a WFS 3.0 URL\n\n @type path: string\n @param path: path of WFS URL\n\n @returns: fully constructed URL path"}
{"_id": "q_3766", "text": "Construct fiona schema based on given elements\n\n :param list Element: list of elements\n :param dict nsmap: namespace map\n\n :return dict: schema"}
{"_id": "q_3767", "text": "Get url for describefeaturetype request\n\n :return str: url"}
{"_id": "q_3768", "text": "use ComplexDataInput with a reference to a document"}
{"_id": "q_3769", "text": "A URL that can be used to open the page.\n\n The URL is formatted from :py:attr:`URL_TEMPLATE`, which is then\n appended to :py:attr:`base_url` unless the template results in an\n absolute URL.\n\n :return: URL that can be used to open the page.\n :rtype: str"}
{"_id": "q_3770", "text": "Open the page.\n\n Navigates to :py:attr:`seed_url` and calls :py:func:`wait_for_page_to_load`.\n\n :return: The current page object.\n :rtype: :py:class:`Page`\n :raises: UsageError"}
{"_id": "q_3771", "text": "Root element for the page region.\n\n Page regions should define a root element either by passing this on\n instantiation or by defining a :py:attr:`_root_locator` attribute. To\n reduce the chances of hitting :py:class:`~selenium.common.exceptions.StaleElementReferenceException`\n or similar you should use :py:attr:`_root_locator`, as this is looked up every\n time the :py:attr:`root` property is accessed."}
{"_id": "q_3772", "text": "Finds an element on the page.\n\n :param strategy: Location strategy to use. See :py:class:`~selenium.webdriver.common.by.By` or :py:attr:`~pypom.splinter_driver.ALLOWED_STRATEGIES`.\n :param locator: Location of target element.\n :type strategy: str\n :type locator: str\n :return: An element.\n :rtype: :py:class:`~selenium.webdriver.remote.webelement.WebElement` or :py:class:`~splinter.driver.webdriver.WebDriverElement`"}
{"_id": "q_3773", "text": "Finds elements on the page.\n\n :param strategy: Location strategy to use. See :py:class:`~selenium.webdriver.common.by.By` or :py:attr:`~pypom.splinter_driver.ALLOWED_STRATEGIES`.\n :param locator: Location of target elements.\n :type strategy: str\n :type locator: str\n :return: List of :py:class:`~selenium.webdriver.remote.webelement.WebElement` or :py:class:`~splinter.element_list.ElementList`\n :rtype: list"}
{"_id": "q_3774", "text": "Checks whether an element is present.\n\n :param strategy: Location strategy to use. See :py:class:`~selenium.webdriver.common.by.By` or :py:attr:`~pypom.splinter_driver.ALLOWED_STRATEGIES`.\n :param locator: Location of target element.\n :type strategy: str\n :type locator: str\n :return: ``True`` if element is present, else ``False``.\n :rtype: bool"}
{"_id": "q_3775", "text": "Checks whether an element is displayed.\n\n :param strategy: Location strategy to use. See :py:class:`~selenium.webdriver.common.by.By` or :py:attr:`~pypom.splinter_driver.ALLOWED_STRATEGIES`.\n :param locator: Location of target element.\n :type strategy: str\n :type locator: str\n :return: ``True`` if element is displayed, else ``False``.\n :rtype: bool"}
{"_id": "q_3776", "text": "Register driver adapter used by page object"}
{"_id": "q_3777", "text": "Get the list of TV genres.\n\n Args:\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3778", "text": "Get the cast and crew information for a specific movie id.\n\n Args:\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3779", "text": "Get the plot keywords for a specific movie id.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3780", "text": "Get the release dates and certification for a specific movie id.\n\n Args:\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3781", "text": "Get the translations for a specific movie id.\n\n Args:\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3782", "text": "Get the similar movies for a specific movie id.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3783", "text": "Get the reviews for a particular movie id.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3784", "text": "Get the list of upcoming movies. This list refreshes every day.\n The maximum number of items this list will include is 100.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3785", "text": "Get the list of movies playing in theatres. This list refreshes\n every day. The maximum number of items this list will include is 100.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3786", "text": "Get the list of popular movies on The Movie Database. This list\n refreshes every day.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3787", "text": "Get the list of top rated movies. By default, this list will only\n include movies that have 10 or more votes. This list refreshes every\n day.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3788", "text": "This method lets users get the status of whether or not the movie has\n been rated or added to their favourite or watch lists. A valid session\n id is required.\n\n Args:\n session_id: see Authentication.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3789", "text": "This method lets users rate a movie. A valid session id or guest\n session id is required.\n\n Args:\n session_id: see Authentication.\n guest_session_id: see Authentication.\n value: Rating value.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3790", "text": "Get the movie credits for a specific person id.\n\n Args:\n language: (optional) ISO 639-1 code.\n append_to_response: (optional) Comma separated, any person method.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3791", "text": "Get the detailed information about a particular credit record. This is \n currently only supported with the new credit model found in TV. These \n ids can be found from any TV credit response as well as the tv_credits \n and combined_credits methods for people.\n\n The episodes object returns a list of episodes and are generally going \n to be guest stars. The season array will return a list of season \n numbers. Season credits are credits that were marked with the \n \"add to every season\" option in the editing interface and are \n assumed to be \"season regulars\".\n\n Args:\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3792", "text": "Discover TV shows by different types of data like average rating, \n number of votes, genres, the network they aired on and air dates.\n\n Args:\n page: (optional) Minimum 1, maximum 1000.\n language: (optional) ISO 639-1 code.\n sort_by: (optional) Available options are 'vote_average.desc', \n 'vote_average.asc', 'first_air_date.desc', \n 'first_air_date.asc', 'popularity.desc', 'popularity.asc'\n first_air_year: (optional) Filter the results release dates to \n matches that include this value. Expected value \n is a year.\n vote_count.gte or vote_count_gte: (optional) Only include TV shows \n that are equal to,\n or have vote count higher than this value. Expected\n value is an integer.\n vote_average.gte or vote_average_gte: (optional) Only include TV \n shows that are equal \n to, or have a higher average rating than this \n value. Expected value is a float.\n with_genres: (optional) Only include TV shows with the specified \n genres. Expected value is an integer (the id of a \n genre). Multiple values can be specified. Comma \n separated indicates an 'AND' query, while a \n pipe (|) separated value indicates an 'OR'.\n with_networks: (optional) Filter TV shows to include a specific \n network. Expected value is an integer (the id of a\n network). They can be comma separated to indicate an\n 'AND' query.\n first_air_date.gte or first_air_date_gte: (optional) The minimum \n release to include. \n Expected format is 'YYYY-MM-DD'.\n first_air_date.lte or first_air_date_lte: (optional) The maximum \n release to include. \n Expected format is 'YYYY-MM-DD'.\n \n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3793", "text": "Get the system wide configuration info.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3794", "text": "Get the list of supported certifications for movies.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3795", "text": "Get the basic information for an account.\n\n Call this method first, before calling other Account methods.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3796", "text": "Generate a session id for user based authentication.\n\n A session id is required in order to use any of the write methods.\n\n Args:\n request_token: The token you generated for the user to approve.\n The token needs to be approved before being\n used here.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3797", "text": "Generate a guest session id.\n\n Returns:\n A dict representation of the JSON returned from the API."}
{"_id": "q_3798", "text": "Get a list of rated moview for a specific guest session id.\n\n Args:\n page: (optional) Minimum 1, maximum 1000.\n sort_by: (optional) 'created_at.asc' | 'created_at.desc'\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3799", "text": "Delete movies from a list that the user created.\n\n A valid session id is required.\n\n Args:\n media_id: A movie id.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3800", "text": "Get the similar TV series for a specific TV series id.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n append_to_response: (optional) Comma separated, any TV method.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3801", "text": "Get the list of TV shows that are currently on the air. This query\n looks for any TV show that has an episode with an air date in the\n next 7 days.\n\n Args:\n page: (optional) Minimum 1, maximum 1000.\n language: (optional) ISO 639 code.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3802", "text": "Get the primary information about a TV season by its season number.\n\n Args:\n language: (optional) ISO 639 code.\n append_to_response: (optional) Comma separated, any TV series\n method.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3803", "text": "Get the external ids that we have stored for a TV season by season\n number.\n\n Args:\n language: (optional) ISO 639 code.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3804", "text": "Get the primary information about a TV episode by combination of a\n season and episode number.\n\n Args:\n language: (optional) ISO 639 code.\n append_to_response: (optional) Comma separated, any TV series\n method.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3805", "text": "Get the TV episode credits by combination of season and episode number.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3806", "text": "Get the external ids for a TV episode by combination of a season and\n episode number.\n\n Args:\n language: (optional) ISO 639 code.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3807", "text": "Search for movies by title.\n\n Args:\n query: CGI escpaed string.\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n include_adult: (optional) Toggle the inclusion of adult titles. \n Expected value is True or False.\n year: (optional) Filter the results release dates to matches that \n include this value.\n primary_release_year: (optional) Filter the results so that only \n the primary release dates have this value.\n search_type: (optional) By default, the search type is 'phrase'. \n This is almost guaranteed the option you will want. \n It's a great all purpose search type and by far the \n most tuned for every day querying. For those wanting \n more of an \"autocomplete\" type search, set this \n option to 'ngram'.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3808", "text": "Search for people by name.\n\n Args:\n query: CGI escpaed string.\n page: (optional) Minimum value of 1. Expected value is an integer.\n include_adult: (optional) Toggle the inclusion of adult titles. \n Expected value is True or False.\n search_type: (optional) By default, the search type is 'phrase'. \n This is almost guaranteed the option you will want. \n It's a great all purpose search type and by far the \n most tuned for every day querying. For those wanting \n more of an \"autocomplete\" type search, set this \n option to 'ngram'.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3809", "text": "Search for companies by name.\n\n Args:\n query: CGI escpaed string.\n page: (optional) Minimum value of 1. Expected value is an integer.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3810", "text": "Search the movie, tv show and person collections with a single query.\n\n Args:\n query: CGI escpaed string.\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n include_adult: (optional) Toggle the inclusion of adult titles.\n Expected value is True or False.\n\n Returns:\n A dict respresentation of the JSON returned from the API."}
{"_id": "q_3811", "text": "r'-?\\d+"}
{"_id": "q_3812", "text": "r'\\}"}
{"_id": "q_3813", "text": "r'<<\\S+\\r?\\n"}
{"_id": "q_3814", "text": "Initialize the parse table at install time"}
{"_id": "q_3815", "text": "Add another row of data from a test suite"}
{"_id": "q_3816", "text": "Return an instance of SuiteFile, ResourceFile, SuiteFolder\n\n Exactly which is returned depends on whether it's a file or\n folder, and if a file, the contents of the file. If there is a\n testcase table, this will return an instance of SuiteFile,\n otherwise it will return an instance of ResourceFile."}
{"_id": "q_3817", "text": "The general idea is to do a quick parse, creating a list of\n tables. Each table is nothing more than a list of rows, with\n each row being a list of cells. Additional parsing such as\n combining rows into statements is done on demand. This first\n pass is solely to read in the plain text and organize it by table."}
{"_id": "q_3818", "text": "Return 'suite' or 'resource' or None\n\n This will return 'suite' if a testcase table is found;\n It will return 'resource' if at least one robot table\n is found. If no tables are found it will return None"}
{"_id": "q_3819", "text": "Generator which returns all keywords in the suite"}
{"_id": "q_3820", "text": "Regurgitate the tables and rows"}
{"_id": "q_3821", "text": "Generator which returns all of the statements in all of the variables tables"}
{"_id": "q_3822", "text": "The idea is, we recognize when we have a new testcase by \n checking the first cell. If it's not empty and not a comment, \n we have a new test case."}
{"_id": "q_3823", "text": "Parse command line arguments, and run rflint"}
{"_id": "q_3824", "text": "Report a rule violation"}
{"_id": "q_3825", "text": "Returns a list of rules of a given class\n \n Rules are treated as singletons - we only instantiate each\n rule once."}
{"_id": "q_3826", "text": "Import the given rule file"}
{"_id": "q_3827", "text": "Handle the parsing of command line arguments."}
{"_id": "q_3828", "text": "Creates a customized Draft4ExtendedValidator.\n\n :param spec_resolver: resolver for the spec\n :type resolver: :class:`jsonschema.RefResolver`"}
{"_id": "q_3829", "text": "While yaml supports integer keys, these are not valid in\n json, and will break jsonschema. This method coerces all keys\n to strings."}
{"_id": "q_3830", "text": "Open a file, read it and return its contents."}
{"_id": "q_3831", "text": "Takes a list of reference sentences for a single segment\n and returns an object that encapsulates everything that BLEU\n needs to know about them."}
{"_id": "q_3832", "text": "Takes a reference sentences for a single segment\n and returns an object that encapsulates everything that BLEU\n needs to know about them. Also provides a set cause bleualign wants it"}
{"_id": "q_3833", "text": "Creates the sentence alignment of two texts.\n\n Texts can consist of several blocks. Block boundaries cannot be crossed by sentence \n alignment links. \n\n Each block consists of a list that contains the lengths (in characters) of the sentences\n in this block.\n \n @param source_blocks: The list of blocks in the source text.\n @param target_blocks: The list of blocks in the target text.\n @param params: the sentence alignment parameters.\n\n @returns: A list of sentence alignment lists"}
{"_id": "q_3834", "text": "Creates a path to model files\n model_path - string"}
{"_id": "q_3835", "text": "Strips illegal characters from a string. Used to sanitize input essays.\n Removes all non-punctuation, digit, or letter characters.\n Returns sanitized string.\n string - string"}
{"_id": "q_3836", "text": "Uses aspell to spell correct an input string.\n Requires aspell to be installed and added to the path.\n Returns the spell corrected string if aspell is found, original string if not.\n string - string"}
{"_id": "q_3837", "text": "Makes a list unique"}
{"_id": "q_3838", "text": "Generates a count of the number of times each unique item appears in a list"}
{"_id": "q_3839", "text": "Given an input string, part of speech tags the string, then generates a list of\n ngrams that appear in the string.\n Used to define grammatically correct part of speech tag sequences.\n Returns a list of part of speech tag sequences."}
{"_id": "q_3840", "text": "Generates predictions on a novel data array using a fit classifier\n clf is a classifier that has already been fit\n arr is a data array identical in dimension to the array clf was trained on\n Returns the array of predictions."}
{"_id": "q_3841", "text": "Calculates the average value of a list of numbers\n Returns a float"}
{"_id": "q_3842", "text": "Calculates kappa correlation between rater_a and rater_b.\n Kappa measures how well 2 quantities vary together.\n rater_a is a list of rater a scores\n rater_b is a list of rater b scores\n min_rating is an optional argument describing the minimum rating possible on the data set\n max_rating is an optional argument describing the maximum rating possible on the data set\n Returns a float corresponding to the kappa correlation"}
{"_id": "q_3843", "text": "Initializes dictionaries from an essay set object\n Dictionaries must be initialized prior to using this to extract features\n e_set is an input essay set\n returns a confirmation of initialization"}
{"_id": "q_3844", "text": "Gets a set of gramatically correct part of speech sequences from an input file called essaycorpus.txt\n Returns the set and caches the file"}
{"_id": "q_3845", "text": "Generates length based features from an essay set\n Generally an internal function called by gen_feats\n Returns an array of length features\n e_set - EssaySet object"}
{"_id": "q_3846", "text": "Generates bag of words features from an input essay set and trained FeatureExtractor\n Generally called by gen_feats\n Returns an array of features\n e_set - EssaySet object"}
{"_id": "q_3847", "text": "Generates bag of words, length, and prompt features from an essay set object\n returns an array of features\n e_set - EssaySet object"}
{"_id": "q_3848", "text": "Gets two classifiers for each type of algorithm, and returns them. First for predicting, second for cv error.\n type - one of util_functions.AlgorithmTypes"}
{"_id": "q_3849", "text": "Extracts features and generates predictors based on a given predictor set\n predictor_set - a PredictorSet object that has been initialized with data\n type - one of util_functions.AlgorithmType"}
{"_id": "q_3850", "text": "Function that creates essay set, extracts features, and writes out model\n See above functions for argument descriptions"}
{"_id": "q_3851", "text": "Initialize dictionaries with the textual inputs in the PredictorSet object\n p_set - PredictorSet object that has had data fed in"}
{"_id": "q_3852", "text": "r\"\"\"Get descriptors in module.\n\n Parameters:\n mdl(module): module to search\n submodule(bool): search recursively\n\n Returns:\n Iterator[Descriptor]"}
{"_id": "q_3853", "text": "Register Descriptors from json descriptor objects.\n\n Parameters:\n obj(list or dict): descriptors to register"}
{"_id": "q_3854", "text": "r\"\"\"Register descriptors.\n\n Descriptor-like:\n * Descriptor instance: self\n * Descriptor class: use Descriptor.preset() method\n * module: use Descriptor-likes in module\n * Iterable: use Descriptor-likes in Iterable\n\n Parameters:\n desc(Descriptor-like): descriptors to register\n version(str): version\n ignore_3D(bool): ignore 3D descriptors"}
{"_id": "q_3855", "text": "Output message.\n\n Parameters:\n s(str): message to output\n file(file-like): output to\n end(str): end mark of message\n\n Return:\n None"}
{"_id": "q_3856", "text": "r\"\"\"Check calculatable descriptor class or not.\n\n Returns:\n bool"}
{"_id": "q_3857", "text": "r\"\"\"Calculate atomic surface area.\n\n :type i: int\n :param i: atom index\n\n :rtype: float"}
{"_id": "q_3858", "text": "r\"\"\"Construct SurfaceArea from rdkit Mol type.\n\n :type mol: rdkit.Chem.Mol\n :param mol: input molecule\n\n :type conformer: int\n :param conformer: conformer id\n\n :type solvent_radius: float\n :param solvent_radius: solvent radius\n\n :type level: int\n :param level: mesh level\n\n :rtype: SurfaceArea"}
{"_id": "q_3859", "text": "Create Descriptor instance from json dict.\n\n Parameters:\n obj(dict): descriptor dict\n\n Returns:\n Descriptor: descriptor"}
{"_id": "q_3860", "text": "r\"\"\"Delete missing value.\n\n Returns:\n Result"}
{"_id": "q_3861", "text": "r\"\"\"Get items.\n\n Returns:\n Iterable[(Descriptor, value)]"}
{"_id": "q_3862", "text": "Reports an OrderError. Error message will state that\n first_tag came before second_tag."}
{"_id": "q_3863", "text": "Write the fields of a single review to out."}
{"_id": "q_3864", "text": "Write the fields of a single annotation to out."}
{"_id": "q_3865", "text": "Write a file fields to out."}
{"_id": "q_3866", "text": "Write a package fields to out."}
{"_id": "q_3867", "text": "Write an SPDX tag value document.\n - document - spdx.document instance.\n - out - file like object that will be written to.\n Optionally `validate` the document before writing and raise\n InvalidDocumentError if document.validate returns False."}
{"_id": "q_3868", "text": "Return an spdx.checksum.Algorithm instance representing the SHA1\n checksum or None if does not match CHECKSUM_RE."}
{"_id": "q_3869", "text": "Set the document version.\n Raise SPDXValueError if malformed value, CardinalityError\n if already defined"}
{"_id": "q_3870", "text": "Sets the document name.\n Raises CardinalityError if already defined."}
{"_id": "q_3871", "text": "Sets the document SPDX Identifier.\n Raises value error if malformed value, CardinalityError\n if already defined."}
{"_id": "q_3872", "text": "Sets document comment, Raises CardinalityError if\n comment already set.\n Raises SPDXValueError if comment is not free form text."}
{"_id": "q_3873", "text": "Sets the document namespace.\n Raise SPDXValueError if malformed value, CardinalityError\n if already defined."}
{"_id": "q_3874", "text": "Sets the `spdx_document_uri` attribute of the `ExternalDocumentRef`\n object."}
{"_id": "q_3875", "text": "Builds a tool object out of a string representation.\n Returns built tool. Raises SPDXValueError if failed to extract\n tool name or name is malformed"}
{"_id": "q_3876", "text": "Adds a creator to the document's creation info.\n Returns true if creator is valid.\n Creator must be built by an EntityBuilder.\n Raises SPDXValueError if not a creator type."}
{"_id": "q_3877", "text": "Sets created date, Raises CardinalityError if\n created date already set.\n Raises SPDXValueError if created is not a date."}
{"_id": "q_3878", "text": "Sets the license list version, Raises CardinalityError if\n already set, SPDXValueError if incorrect value."}
{"_id": "q_3879", "text": "Resets builder state to allow building new creation info."}
{"_id": "q_3880", "text": "Adds a reviewer to the SPDX Document.\n Reviwer is an entity created by an EntityBuilder.\n Raises SPDXValueError if not a valid reviewer type."}
{"_id": "q_3881", "text": "Sets the review date. Raises CardinalityError if\n already set. OrderError if no reviewer defined before.\n Raises SPDXValueError if invalid reviewed value."}
{"_id": "q_3882", "text": "Adds an annotator to the SPDX Document.\n Annotator is an entity created by an EntityBuilder.\n Raises SPDXValueError if not a valid annotator type."}
{"_id": "q_3883", "text": "Sets the annotation date. Raises CardinalityError if\n already set. OrderError if no annotator defined before.\n Raises SPDXValueError if invalid value."}
{"_id": "q_3884", "text": "Sets the annotation comment. Raises CardinalityError if\n already set. OrderError if no annotator defined before.\n Raises SPDXValueError if comment is not free form text."}
{"_id": "q_3885", "text": "Sets the annotation type. Raises CardinalityError if\n already set. OrderError if no annotator defined before.\n Raises SPDXValueError if invalid value."}
{"_id": "q_3886", "text": "Resets the builder's state in order to build new packages."}
{"_id": "q_3887", "text": "Creates a package for the SPDX Document.\n name - any string.\n Raises CardinalityError if package already defined."}
{"_id": "q_3888", "text": "Sets the package file name, if not already set.\n name - Any string.\n Raises CardinalityError if already has a file_name.\n Raises OrderError if no pacakge previously defined."}
{"_id": "q_3889", "text": "Sets the package supplier, if not already set.\n entity - Organization, Person or NoAssert.\n Raises CardinalityError if already has a supplier.\n Raises OrderError if no package previously defined."}
{"_id": "q_3890", "text": "Sets the package originator, if not already set.\n entity - Organization, Person or NoAssert.\n Raises CardinalityError if already has an originator.\n Raises OrderError if no package previously defined."}
{"_id": "q_3891", "text": "Sets the package download location, if not already set.\n location - A string\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined."}
{"_id": "q_3892", "text": "Sets the package homepage location if not already set.\n location - A string or None or NoAssert.\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined.\n Raises SPDXValueError if location has incorrect value."}
{"_id": "q_3893", "text": "Sets the package's source information, if not already set.\n text - Free form text.\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined.\n SPDXValueError if text is not free form text."}
{"_id": "q_3894", "text": "Sets the package's concluded licenses.\n licenses - License info.\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined.\n Raises SPDXValueError if data malformed."}
{"_id": "q_3895", "text": "Adds a license from a file to the package.\n Raises SPDXValueError if data malformed.\n Raises OrderError if no package previously defined."}
{"_id": "q_3896", "text": "Sets the package's declared license.\n Raises SPDXValueError if data malformed.\n Raises OrderError if no package previously defined.\n Raises CardinalityError if already set."}
{"_id": "q_3897", "text": "Sets the package's license comment.\n Raises OrderError if no package previously defined.\n Raises CardinalityError if already set.\n Raises SPDXValueError if text is not free form text."}
{"_id": "q_3898", "text": "Raises OrderError if no package defined."}
{"_id": "q_3899", "text": "Raises OrderError if no package or no file defined.\n Raises CardinalityError if more than one comment set.\n Raises SPDXValueError if text is not free form text."}
{"_id": "q_3900", "text": "Raises OrderError if no package or file defined.\n Raises CardinalityError if more than one chksum set."}
{"_id": "q_3901", "text": "Raises OrderError if no package or file defined.\n Raises SPDXValueError if malformed value."}
{"_id": "q_3902", "text": "Raises OrderError if no package or file defined.\n Raises SPDXValueError if text is not free form text.\n Raises CardinalityError if more than one per file."}
{"_id": "q_3903", "text": "Raises OrderError if no package or file defined.\n Raises SPDXValueError if not free form text or NONE or NO_ASSERT.\n Raises CardinalityError if more than one."}
{"_id": "q_3904", "text": "Raises OrderError if no package or file defined.\n Raises SPDXValueError if not free form text.\n Raises CardinalityError if more than one."}
{"_id": "q_3905", "text": "Sets a file name, uri or home artificat.\n Raises OrderError if no package or file defined."}
{"_id": "q_3906", "text": "Resets the builder's state to enable building new files."}
{"_id": "q_3907", "text": "Sets license extracted text.\n Raises SPDXValueError if text is not free form text.\n Raises OrderError if no license ID defined."}
{"_id": "q_3908", "text": "Sets license name.\n Raises SPDXValueError if name is not str or utils.NoAssert\n Raises OrderError if no license id defined."}
{"_id": "q_3909", "text": "Sets license comment.\n Raises SPDXValueError if comment is not free form text.\n Raises OrderError if no license ID defined."}
{"_id": "q_3910", "text": "Adds a license cross reference.\n Raises OrderError if no License ID defined."}
{"_id": "q_3911", "text": "Return an ISO-8601 representation of a datetime object."}
{"_id": "q_3912", "text": "Must be called before parse."}
{"_id": "q_3913", "text": "Parses a license list and returns a License or None if it failed."}
{"_id": "q_3914", "text": "Write an SPDX RDF document.\n - document - spdx.document instance.\n - out - file like object that will be written to.\n Optionally `validate` the document before writing and raise\n InvalidDocumentError if document.validate returns False."}
{"_id": "q_3915", "text": "Return a node representing spdx.checksum."}
{"_id": "q_3916", "text": "Traverse conjunctions and disjunctions like trees and return a\n set of all licenses in it as nodes."}
{"_id": "q_3917", "text": "Return a node representing a conjunction of licenses."}
{"_id": "q_3918", "text": "Handle dependencies for a single file.\n - doc_file - instance of spdx.file.File."}
{"_id": "q_3919", "text": "Return a review node."}
{"_id": "q_3920", "text": "Return an annotation node."}
{"_id": "q_3921", "text": "Return a node representing package verification code."}
{"_id": "q_3922", "text": "Write package optional fields."}
{"_id": "q_3923", "text": "Return a Node representing the package.\n Files must have been added to the graph before this method is called."}
{"_id": "q_3924", "text": "Return node representing pkg_file\n pkg_file should be instance of spdx.file."}
{"_id": "q_3925", "text": "Add hasFile triples to graph.\n Must be called after files have been added."}
{"_id": "q_3926", "text": "Add and return the root document node to graph."}
{"_id": "q_3927", "text": "Returns True if the fields are valid according to the SPDX standard.\n Appends user friendly messages to the messages parameter."}
{"_id": "q_3928", "text": "Checks if value is a special SPDX value such as\n NONE, NOASSERTION or UNKNOWN if so returns proper model.\n else returns value"}
{"_id": "q_3929", "text": "Return license comment or None."}
{"_id": "q_3930", "text": "Return an ExtractedLicense object to represent a license object.\n But does not add it to the SPDXDocument model.\n Return None if failed."}
{"_id": "q_3931", "text": "Build and return an ExtractedLicense or None.\n Note that this function adds the license to the document."}
{"_id": "q_3932", "text": "Returns first found fileName property or None if not found."}
{"_id": "q_3933", "text": "Sets file dependencies."}
{"_id": "q_3934", "text": "Parse all file contributors and adds them to the model."}
{"_id": "q_3935", "text": "Sets file notice text."}
{"_id": "q_3936", "text": "Sets file comment text."}
{"_id": "q_3937", "text": "Sets file license comment."}
{"_id": "q_3938", "text": "Sets file license information."}
{"_id": "q_3939", "text": "Sets file type."}
{"_id": "q_3940", "text": "Sets file checksum. Assumes SHA1 algorithm without checking."}
{"_id": "q_3941", "text": "Sets file licenses concluded."}
{"_id": "q_3942", "text": "Returns review date or None if not found.\n Reports error on failure.\n Note does not check value format."}
{"_id": "q_3943", "text": "Returns annotation comment or None if found none or more than one.\n Reports errors."}
{"_id": "q_3944", "text": "Returns annotation date or None if not found.\n Reports error on failure.\n Note does not check value format."}
{"_id": "q_3945", "text": "Parse creators, created and comment."}
{"_id": "q_3946", "text": "Parses the External Document ID, SPDX Document URI and Checksum."}
{"_id": "q_3947", "text": "Validate the package fields.\n Append user friendly error messages to the `messages` list."}
{"_id": "q_3948", "text": "Helper for validate_mandatory_str_field and\n validate_optional_str_fields"}
{"_id": "q_3949", "text": "Sets document comment, Raises CardinalityError if\n comment already set."}
{"_id": "q_3950", "text": "Sets the external document reference's check sum, if not already set.\n chk_sum - The checksum value in the form of a string."}
{"_id": "q_3951", "text": "Sets the package's source information, if not already set.\n text - Free form text.\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined."}
{"_id": "q_3952", "text": "Sets the package's verification code excluded file.\n Raises OrderError if no package previously defined."}
{"_id": "q_3953", "text": "Set's the package summary.\n Raises CardinalityError if summary already set.\n Raises OrderError if no package previously defined."}
{"_id": "q_3954", "text": "Sets the file check sum, if not already set.\n chk_sum - A string\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined."}
{"_id": "q_3955", "text": "Raises OrderError if no package or file defined.\n Raises CardinalityError if more than one per file."}
{"_id": "q_3956", "text": "Raises OrderError if no package or no file defined.\n Raises CardinalityError if more than one comment set."}
{"_id": "q_3957", "text": "Sets the annotation comment. Raises CardinalityError if\n already set. OrderError if no annotator defined before."}
{"_id": "q_3958", "text": "Sets the annotation type. Raises CardinalityError if\n already set. OrderError if no annotator defined before."}
{"_id": "q_3959", "text": "Validate all fields of the document and update the\n messages list with user friendly error messages for display."}
{"_id": "q_3960", "text": "Decorator to synchronize function."}
{"_id": "q_3961", "text": "Program message output."}
{"_id": "q_3962", "text": "Utility function to handle runtime failures gracefully.\n Show concise information if possible, then terminate program."}
{"_id": "q_3963", "text": "Clean up temp files"}
{"_id": "q_3964", "text": "Get the fixed part of the path without wildcard"}
{"_id": "q_3965", "text": "Given a API name, list all legal parameters using boto3 service model."}
{"_id": "q_3966", "text": "Combine existing parameters with extra options supplied from command line\n options. Carefully merge special type of parameter if needed."}
{"_id": "q_3967", "text": "Terminate all threads by deleting the queue and forcing the child threads\n to quit."}
{"_id": "q_3968", "text": "Utility function to add a single task into task queue"}
{"_id": "q_3969", "text": "Utility function to wait all tasks to complete"}
{"_id": "q_3970", "text": "Retrieve S3 access keys from the environment, or None if not present."}
{"_id": "q_3971", "text": "Retrieve S3 access keys from the command line, or None if not present."}
{"_id": "q_3972", "text": "Retrieve S3 access key settings from s3cmd's config file, if present; otherwise return None."}
{"_id": "q_3973", "text": "Initialize s3 access keys from environment variable or s3cfg config file."}
{"_id": "q_3974", "text": "Connect to S3 storage"}
{"_id": "q_3975", "text": "List all buckets"}
{"_id": "q_3976", "text": "Walk through a S3 directory. This function initiate a walk with a basedir.\n It also supports multiple wildcards."}
{"_id": "q_3977", "text": "Walk through local directories from root basedir"}
{"_id": "q_3978", "text": "Get privileges from metadata of the source in s3, and apply them to target"}
{"_id": "q_3979", "text": "Download a single file or a directory by adding a task into queue"}
{"_id": "q_3980", "text": "Download files.\n This function can handle multiple files if source S3 URL has wildcard\n characters. It also handles recursive mode by download all files and\n keep the directory structure."}
{"_id": "q_3981", "text": "Copy a single file or a directory by adding a task into queue"}
{"_id": "q_3982", "text": "Sync directory to directory."}
{"_id": "q_3983", "text": "Check MD5 for a local file and a remote file.\n Return True if they have the same md5 hash, otherwise False."}
{"_id": "q_3984", "text": "Partially match a path and a filter_path with wildcards.\n This function will return True if this path partially match a filter path.\n This is used for walking through directories with multiple level wildcard."}
{"_id": "q_3985", "text": "Thread worker for s3walk.\n Recursively walk into all subdirectories if they still match the filter\n path partially."}
{"_id": "q_3986", "text": "Thread worker for upload operation."}
{"_id": "q_3987", "text": "Verify the file size of the downloaded file."}
{"_id": "q_3988", "text": "Write local file chunk"}
{"_id": "q_3989", "text": "Copy a single file from source to target using boto S3 library."}
{"_id": "q_3990", "text": "Main entry to handle commands. Dispatch to individual command handler."}
{"_id": "q_3991", "text": "Validate input parameters with given format.\n This function also checks for wildcards for recursive mode."}
{"_id": "q_3992", "text": "Pretty print the result of s3walk. Here we calculate the maximum width\n of each column and align them."}
{"_id": "q_3993", "text": "Handler for mb command"}
{"_id": "q_3994", "text": "Handler for put command"}
{"_id": "q_3995", "text": "Handler for get command"}
{"_id": "q_3996", "text": "Handler for dsync command."}
{"_id": "q_3997", "text": "Handler for cp command"}
{"_id": "q_3998", "text": "Handler for mv command"}
{"_id": "q_3999", "text": "Handler for size command"}
{"_id": "q_4000", "text": "Handler of total_size command"}
{"_id": "q_4001", "text": "Search for date information in the string"}
{"_id": "q_4002", "text": "Search for time information in the string"}
{"_id": "q_4003", "text": "includes the contents of a file on disk.\n takes a filename"}
{"_id": "q_4004", "text": "pipes the output of a program"}
{"_id": "q_4005", "text": "unescapes html entities. the opposite of escape."}
{"_id": "q_4006", "text": "Set attributes on the current active tag context"}
{"_id": "q_4007", "text": "Add or update the value of an attribute."}
{"_id": "q_4008", "text": "Recursively searches children for tags of a certain\n type with matching attributes."}
{"_id": "q_4009", "text": "Normalize attribute names for shorthand and work arounds for limitations\n in Python's syntax"}
{"_id": "q_4010", "text": "This will call `clean_attribute` on the attribute and also allows for the\n creation of boolean attributes.\n\n Ex. input(selected=True) is equivalent to input(selected=\"selected\")"}
{"_id": "q_4011", "text": "Discover gateways using multicast"}
{"_id": "q_4012", "text": "Get data from gateway"}
{"_id": "q_4013", "text": "Push data broadcasted from gateway to device"}
{"_id": "q_4014", "text": "Get key using token from gateway"}
{"_id": "q_4015", "text": "Creates a registration message to identify the worker to the interchange"}
{"_id": "q_4016", "text": "Send heartbeat to the incoming task queue"}
{"_id": "q_4017", "text": "Receives a result from the MPI worker pool and sends it out via 0mq\n\n Returns:\n --------\n result: task result from the workers"}
{"_id": "q_4018", "text": "Pulls tasks from the incoming tasks 0mq pipe onto the internal\n pending task queue\n\n Parameters:\n -----------\n kill_event : threading.Event\n Event to let the thread know when it is time to die."}
{"_id": "q_4019", "text": "Start the Manager process.\n\n The worker loops on this:\n\n 1. If the last message sent was older than heartbeat period we send a heartbeat\n 2.\n\n\n TODO: Move task receiving to a thread"}
{"_id": "q_4020", "text": "Decorator function to launch a function as a separate process"}
{"_id": "q_4021", "text": "Send UDP messages to usage tracker asynchronously\n\n This multiprocessing based messenger was written to overcome the limitations\n of signalling/terminating a thread that is blocked on a system call. This\n messenger is created as a separate process, and initialized with 2 queues,\n to_send to receive messages to be sent to the internet.\n\n Args:\n - domain_name (str) : Domain name string\n - UDP_IP (str) : IP address YYY.YYY.YYY.YYY\n - UDP_PORT (int) : UDP port to send out on\n - sock_timeout (int) : Socket timeout\n - to_send (multiprocessing.Queue) : Queue of outgoing messages to internet"}
{"_id": "q_4022", "text": "By default tracking is enabled.\n\n If Test mode is set via env variable PARSL_TESTING, a test flag is set\n\n Tracking is disabled if :\n 1. config[\"globals\"][\"usageTracking\"] is set to False (Bool)\n 2. Environment variable PARSL_TRACKING is set to false (case insensitive)"}
{"_id": "q_4023", "text": "Collect preliminary run info at the start of the DFK.\n\n Returns :\n - Message dict dumped as json string, ready for UDP"}
{"_id": "q_4024", "text": "Collect the final run information at the time of DFK cleanup.\n\n Returns:\n - Message dict dumped as json string, ready for UDP"}
{"_id": "q_4025", "text": "Send UDP message."}
{"_id": "q_4026", "text": "Send message over UDP.\n\n If tracking is disabled, the bytes_sent will always be set to -1\n\n Returns:\n (bytes_sent, time_taken)"}
{"_id": "q_4027", "text": "This function is called as a callback when an AppFuture\n is in its final state.\n\n It will trigger post-app processing such as checkpointing\n and stageout.\n\n Args:\n task_id (string) : Task id\n future (Future) : The relevant app future (which should be\n consistent with the task structure 'app_fu' entry)\n\n KWargs:\n memo_cbk(Bool) : Indicates that the call is coming from a memo update,\n that does not require additional memo updates."}
{"_id": "q_4028", "text": "Handle the actual submission of the task to the executor layer.\n\n If the app task has the executors attributes not set (default=='all')\n the task is launched on a randomly selected executor from the\n list of executors. This behavior could later be updated to support\n binding to executors based on user specified criteria.\n\n If the app task specifies a particular set of executors, it will be\n targeted at those specific executors.\n\n Args:\n task_id (uuid string) : A uuid string that uniquely identifies the task\n executable (callable) : A callable object\n args (list of positional args)\n kwargs (arbitrary keyword arguments)\n\n\n Returns:\n Future that tracks the execution of the submitted executable"}
{"_id": "q_4029", "text": "Count the number of unresolved futures on which a task depends.\n\n Args:\n - args (List[args]) : The list of args list to the fn\n - kwargs (Dict{kwargs}) : The dict of all kwargs passed to the fn\n\n Returns:\n - count, [list of dependencies]"}
{"_id": "q_4030", "text": "Add task to the dataflow system.\n\n If the app task has the executors attributes not set (default=='all')\n the task will be launched on a randomly selected executor from the\n list of executors. If the app task specifies a particular set of\n executors, it will be targeted at the specified executors.\n\n >>> IF all deps are met:\n >>> send to the runnable queue and launch the task\n >>> ELSE:\n >>> post the task in the pending queue\n\n Args:\n - func : A function object\n - *args : Args to the function\n\n KWargs :\n - executors (list or string) : List of executors this call could go to.\n Default='all'\n - fn_hash (Str) : Hash of the function and inputs\n Default=None\n - cache (Bool) : To enable memoization or not\n - kwargs (dict) : Rest of the kwargs to the fn passed as dict.\n\n Returns:\n (AppFuture) [DataFutures,]"}
{"_id": "q_4031", "text": "DataFlowKernel cleanup.\n\n This involves killing resources explicitly and sending die messages to IPP workers.\n\n If the executors are managed (created by the DFK), then we call scale_in on each of\n the executors and call executor.shutdown. Otherwise, we do nothing, and executor\n cleanup is left to the user."}
{"_id": "q_4032", "text": "Load a checkpoint file into a lookup table.\n\n The data being loaded from the pickle file mostly contains input\n attributes of the task: func, args, kwargs, env...\n To simplify the check of whether the exact task has been completed\n in the checkpoint, we hash these input params and use it as the key\n for the memoized lookup table.\n\n Args:\n - checkpointDirs (list) : List of filepaths to checkpoints\n Eg. ['runinfo/001', 'runinfo/002']\n\n Returns:\n - memoized_lookup_table (dict)"}
{"_id": "q_4033", "text": "Load checkpoints from the checkpoint files into a dictionary.\n\n The results are used to pre-populate the memoizer's lookup_table\n\n Kwargs:\n - checkpointDirs (list) : List of run folder to use as checkpoints\n Eg. ['runinfo/001', 'runinfo/002']\n\n Returns:\n - dict containing, hashed -> future mappings"}
{"_id": "q_4034", "text": "Pull tasks from the incoming tasks 0mq pipe onto the internal\n pending task queue\n\n Parameters:\n -----------\n kill_event : threading.Event\n Event to let the thread know when it is time to die."}
{"_id": "q_4035", "text": "Command server to run async command to the interchange"}
{"_id": "q_4036", "text": "Return the DataManager of the currently loaded DataFlowKernel."}
{"_id": "q_4037", "text": "Transport the file from the input source to the executor.\n\n This function returns a DataFuture.\n\n Args:\n - self\n - file (File) : file to stage in\n - executor (str) : an executor the file is going to be staged in to.\n If the executor argument is not specified for a file\n with 'globus' scheme, the file will be staged in to\n the first executor with the \"globus\" key in a config."}
{"_id": "q_4038", "text": "Finds the checkpoints from all last runs.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for the checkpointFiles parameter of DataFlowKernel\n constructor"}
{"_id": "q_4039", "text": "Find the checkpoint from the last run, if one exists.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for checkpointFiles parameter of DataFlowKernel\n constructor, with 0 or 1 elements"}
{"_id": "q_4040", "text": "Revert to using stdlib pickle.\n\n Reverts custom serialization enabled by use_dill|cloudpickle."}
{"_id": "q_4041", "text": "Specify path to the ipcontroller-engine.json file.\n\n This file is stored in the ipython_dir/profile folders.\n\n Returns :\n - str, File path to engine file"}
{"_id": "q_4042", "text": "Terminate the controller process and its child processes.\n\n Args:\n - None"}
{"_id": "q_4043", "text": "Create a hash of the task and its inputs and check the lookup table for this hash.\n\n If present, the results are returned. The result is a tuple indicating whether a memo\n exists and the result, since a Null result is possible and could be confusing.\n This seems like a reasonable option without relying on a cache_miss exception.\n\n Args:\n - task(task) : task from the dfk.tasks table\n\n Returns:\n Tuple of the following:\n - present (Bool): Is this present in the memo_lookup_table\n - Result (Py Obj): Result of the function if present in table\n\n This call will also set task['hashsum'] to the unique hashsum for the func+inputs."}
{"_id": "q_4044", "text": "Updates the memoization lookup table with the result from a task.\n\n Args:\n - task_id (int): Integer task id\n - task (dict) : A task dict from dfk.tasks\n - r (Result future): Result future\n\n A warning is issued when a hash collision occurs during the update.\n This is not likely."}
{"_id": "q_4045", "text": "Extract buffers larger than a certain threshold."}
{"_id": "q_4046", "text": "Restore extracted buffers."}
{"_id": "q_4047", "text": "Serialize an object into a list of sendable buffers.\n\n Parameters\n ----------\n\n obj : object\n The object to be serialized\n buffer_threshold : int\n The threshold (in bytes) for pulling out data buffers\n to avoid pickling them.\n item_threshold : int\n The maximum number of items over which canning will iterate.\n Containers (lists, dicts) larger than this will be pickled without\n introspection.\n\n Returns\n -------\n [bufs] : list of buffers representing the serialized object."}
{"_id": "q_4048", "text": "Reconstruct an object serialized by serialize_object from data buffers.\n\n Parameters\n ----------\n\n bufs : list of buffers/bytes\n\n g : globals to be used when uncanning\n\n Returns\n -------\n\n (newobj, bufs) : unpacked object, and the list of remaining unused buffers."}
{"_id": "q_4049", "text": "Generate submit script and write it to a file.\n\n Args:\n - template (string) : The template string to be used for the writing submit script\n - script_filename (string) : Name of the submit script\n - job_name (string) : job name\n - configs (dict) : configs that get pushed into the template\n\n Returns:\n - True: on success\n\n Raises:\n SchedulerMissingArgs : If template is missing args\n ScriptPathError : Unable to write submit script out"}
{"_id": "q_4050", "text": "Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [<job_id> ...]\n\n Returns :\n [True/False...] : If the cancel operation fails the entire list will be False."}
{"_id": "q_4051", "text": "Save information that must persist to a file.\n\n We do not want to create a new VPC and new identical security groups, so we save\n information about them in a file between runs."}
{"_id": "q_4052", "text": "Create a session.\n\n First we look in self.key_file for a path to a json file with the\n credentials. The key file should have 'AWSAccessKeyId' and 'AWSSecretKey'.\n\n Next we look at self.profile for a profile name and try\n to use the Session call to automatically pick up the keys for the profile from\n the user default keys file ~/.aws/config.\n\n Finally, boto3 will look for the keys in environment variables:\n AWS_ACCESS_KEY_ID: The access key for your AWS account.\n AWS_SECRET_ACCESS_KEY: The secret key for your AWS account.\n AWS_SESSION_TOKEN: The session key for your AWS account.\n This is only needed when you are using temporary credentials.\n The AWS_SECURITY_TOKEN environment variable can also be used,\n but is only supported for backwards compatibility purposes.\n AWS_SESSION_TOKEN is supported by multiple AWS SDKs besides python."}
{"_id": "q_4053", "text": "Start an instance in the VPC in the first available subnet.\n\n N instances will be started if nodes_per_block > 1.\n Not supported. We only do 1 node per block.\n\n Parameters\n ----------\n command : str\n Command string to execute on the node.\n job_name : str\n Name associated with the instances."}
{"_id": "q_4054", "text": "Get states of all instances on EC2 which were started by this file."}
{"_id": "q_4055", "text": "Submit the command onto a freshly instantiated AWS EC2 instance.\n\n Submit returns an ID that corresponds to the task that was just submitted.\n\n Parameters\n ----------\n command : str\n Command to be invoked on the remote side.\n blocksize : int\n Number of blocks requested.\n tasks_per_node : int (default=1)\n Number of command invocations to be launched per node\n job_name : str\n Prefix for the job name.\n\n Returns\n -------\n None or str\n If at capacity, None will be returned. Otherwise, the job identifier will be returned."}
{"_id": "q_4056", "text": "Cancel the jobs specified by a list of job ids.\n\n Parameters\n ----------\n job_ids : list of str\n List of job identifiers\n\n Returns\n -------\n list of bool\n Each entry in the list will contain False if the operation fails. Otherwise, the entry will be True."}
{"_id": "q_4057", "text": "Teardown the EC2 infrastructure.\n\n Terminate all EC2 instances, delete all subnets, delete security group, delete VPC,\n and reset all instance variables."}
{"_id": "q_4058", "text": "Scale out the existing resources."}
{"_id": "q_4059", "text": "Update the resource dictionary with job statuses."}
{"_id": "q_4060", "text": "Scales out the number of active workers by 1.\n\n This method is not implemented for threads and will raise an error if called.\n\n Parameters:\n blocks : int\n Number of blocks to be provisioned."}
{"_id": "q_4061", "text": "Returns the status of the executor via probing the execution providers."}
{"_id": "q_4062", "text": "Callback from executor future to update the parent.\n\n Args:\n - parent_fu (Future): Future returned by the executor along with callback\n\n Returns:\n - None\n\n Updates the super() with the result() or exception()"}
{"_id": "q_4063", "text": "Cancels the resources identified by the job_ids provided by the user.\n\n Args:\n - job_ids (list): A list of job identifiers\n\n Returns:\n - A list of status from cancelling the job which can be True, False\n\n Raises:\n - ExecutionProviderException or its subclasses"}
{"_id": "q_4064", "text": "This is a function that mocks the Swift-T side.\n\n It listens on the incoming_q for tasks and posts returns on the outgoing_q.\n\n Args:\n - incoming_q (Queue object) : The queue to listen on\n - outgoing_q (Queue object) : Queue to post results on\n\n The messages posted on the incoming_q will be of the form :\n\n .. code:: python\n\n {\n \"task_id\" : <uuid.uuid4 string>,\n \"buffer\" : serialized buffer containing the fn, args and kwargs\n }\n\n If ``None`` is received, the runner will exit.\n\n Response messages should be of the form:\n\n .. code:: python\n\n {\n \"task_id\" : <uuid.uuid4 string>,\n \"result\" : serialized buffer containing result\n \"exception\" : serialized exception object\n }\n\n On exiting the runner will post ``None`` to the outgoing_q"}
{"_id": "q_4065", "text": "Shutdown method, to kill the threads and workers."}
{"_id": "q_4066", "text": "Submits work to the outgoing_q.\n\n An external process listens on this\n queue for new work. This method is simply a pass-through and behaves like a\n submit call as described here `Python docs: <https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor>`_\n\n Args:\n - func (callable) : Callable function\n - *args (list) : List of arbitrary positional arguments.\n\n Kwargs:\n - **kwargs (dict) : A dictionary of arbitrary keyword args for func.\n\n Returns:\n Future"}
{"_id": "q_4067", "text": "Return the resolved filepath on the side where it is called from.\n\n The appropriate filepath will be returned when called from within\n an app running remotely as well as regular python on the client side.\n\n Args:\n - self\n Returns:\n - filepath (string)"}
{"_id": "q_4068", "text": "The App decorator function.\n\n Args:\n - apptype (string) : Apptype can be bash|python\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime for app in seconds,\n default=60\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of the app call\n default=False\n\n Returns:\n A PythonApp or BashApp object, which when called runs the apps through the executor."}
{"_id": "q_4069", "text": "Decorator function for making python apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False."}
{"_id": "q_4070", "text": "Decorator function for making bash apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False."}
{"_id": "q_4071", "text": "Internal\n Wrap the Parsl app with a function that will call the monitor function and point it at the correct pid when the task begins."}
{"_id": "q_4072", "text": "Transport file on the remote side to a local directory\n\n Args:\n - remote_source (string): remote_source\n - local_dir (string): Local directory to copy to\n\n\n Returns:\n - str: Local path to file\n\n Raises:\n - FileExists : Name collision at local directory.\n - FileCopyException : FileCopy failed."}
{"_id": "q_4073", "text": "Return true if the path refers to an existing directory.\n\n Parameters\n ----------\n path : str\n Path of directory on the remote side to check."}
{"_id": "q_4074", "text": "Create a directory on the remote side.\n\n If intermediate directories do not exist, they will be created.\n\n Parameters\n ----------\n path : str\n Path of directory on the remote side to create.\n mode : int\n Permissions (posix-style) for the newly-created directory.\n exist_ok : bool\n If False, raise an OSError if the target directory already exists."}
{"_id": "q_4075", "text": "Let the FlowControl system know that there is an event."}
{"_id": "q_4076", "text": "Create the kubernetes deployment"}
{"_id": "q_4077", "text": "Compose the launch command and call the scale_out\n\n This should be implemented in the child classes to take care of\n executor specific oddities."}
{"_id": "q_4078", "text": "Starts the interchange process locally\n\n Starts the interchange process locally and uses an internal command queue to\n get the worker task and result ports that the interchange has bound to."}
{"_id": "q_4079", "text": "Puts a worker on hold, preventing scheduling of additional tasks to it.\n\n This is called \"hold\" mostly because this only stops scheduling of tasks,\n and does not actually kill the worker.\n\n Parameters\n ----------\n\n worker_id : str\n Worker id to be put on hold"}
{"_id": "q_4080", "text": "Sends hold command to all managers which are in a specific block\n\n Parameters\n ----------\n block_id : str\n Block identifier of the block to be put on hold"}
{"_id": "q_4081", "text": "Return status of all blocks."}
{"_id": "q_4082", "text": "Called by RQ when there is a failure in a worker.\n\n NOTE: Make sure that in your RQ worker process, rollbar.init() has been called with\n handler='blocking'. The default handler, 'thread', does not work from inside an RQ worker."}
{"_id": "q_4083", "text": "Pyramid entry point"}
{"_id": "q_4084", "text": "Decorator for making error handling on AWS Lambda easier"}
{"_id": "q_4085", "text": "Reports an arbitrary string message to Rollbar.\n\n message: the string body of the message\n level: level to report at. One of: 'critical', 'error', 'warning', 'info', 'debug'\n request: the request object for the context of the message\n extra_data: dictionary of params to include with the message. 'body' is reserved.\n payload_data: param names to pass in the 'data' level of the payload; overrides defaults."}
{"_id": "q_4086", "text": "Searches a project for items that match the input criteria.\n\n title: all or part of the item's title to search for.\n return_fields: the fields that should be returned for each item.\n e.g. ['id', 'project_id', 'status'] will return a dict containing\n only those fields for each item.\n access_token: a project access token. If this is not provided,\n the one provided to init() will be used instead.\n search_fields: additional fields to include in the search.\n currently supported: status, level, environment"}
{"_id": "q_4087", "text": "Creates .rollbar log file for use with rollbar-agent"}
{"_id": "q_4088", "text": "Returns a dictionary describing the logged-in user using data from `request`.\n\n Try request.rollbar_person first, then 'user', then 'user_id'"}
{"_id": "q_4089", "text": "Attempts to add information from the lambda context if it exists"}
{"_id": "q_4090", "text": "Attempts to build request data; if successful, sets the 'request' key on `data`."}
{"_id": "q_4091", "text": "Returns True if we should record local variables for the given frame."}
{"_id": "q_4092", "text": "Returns a dictionary containing data from the request.\n Can handle webob or werkzeug-based request objects."}
{"_id": "q_4093", "text": "Returns a dictionary containing information about the server environment."}
{"_id": "q_4094", "text": "This runs the protocol on port 8000"}
{"_id": "q_4095", "text": "Read into ``buf`` from the device. The number of bytes read will be the\n length of ``buf``.\n\n If ``start`` or ``end`` is provided, then the buffer will be sliced\n as if ``buf[start:end]``. This will not cause an allocation like\n ``buf[start:end]`` will so it saves memory.\n\n :param bytearray buffer: buffer to write into\n :param int start: Index to start writing at\n :param int end: Index to write up to but not include"}
{"_id": "q_4096", "text": "Write the bytes from ``buffer`` to the device. Transmits a stop bit if\n ``stop`` is set.\n\n If ``start`` or ``end`` is provided, then the buffer will be sliced\n as if ``buffer[start:end]``. This will not cause an allocation like\n ``buffer[start:end]`` will so it saves memory.\n\n :param bytearray buffer: buffer containing the bytes to write\n :param int start: Index to start writing from\n :param int end: Index to read up to but not include\n :param bool stop: If true, output an I2C stop condition after the buffer is written"}
{"_id": "q_4097", "text": "Write the bytes from ``out_buffer`` to the device, then immediately\n reads into ``in_buffer`` from the device. The number of bytes read\n will be the length of ``in_buffer``.\n Transmits a stop bit after the write, if ``stop`` is set.\n\n If ``out_start`` or ``out_end`` is provided, then the output buffer\n will be sliced as if ``out_buffer[out_start:out_end]``. This will\n not cause an allocation like ``buffer[out_start:out_end]`` will so\n it saves memory.\n\n If ``in_start`` or ``in_end`` is provided, then the input buffer\n will be sliced as if ``in_buffer[in_start:in_end]``. This will not\n cause an allocation like ``in_buffer[in_start:in_end]`` will so\n it saves memory.\n\n :param bytearray out_buffer: buffer containing the bytes to write\n :param bytearray in_buffer: buffer containing the bytes to read into\n :param int out_start: Index to start writing from\n :param int out_end: Index to read up to but not include\n :param int in_start: Index to start writing at\n :param int in_end: Index to write up to but not include\n :param bool stop: If true, output an I2C stop condition after the buffer is written"}
{"_id": "q_4098", "text": "This function returns a Hangul letter by composing the specified chosung, joongsung, and jongsung.\n @param chosung\n @param joongsung\n @param jongsung the terminal Hangul letter. This is optional if you do not need a jongsung."}
{"_id": "q_4099", "text": "Check whether this letter contains Jongsung"}
{"_id": "q_4100", "text": "Returns true if node is inside the name of an except handler."}
{"_id": "q_4101", "text": "Return true if given node is inside lambda"}
{"_id": "q_4102", "text": "Recursively returns all atoms in nested lists and tuples."}
{"_id": "q_4103", "text": "return True if the node is referencing the \"super\" builtin function"}
{"_id": "q_4104", "text": "return true if the function does nothing but raising an exception"}
{"_id": "q_4105", "text": "return true if the given Name node is used in function or lambda\n default argument's value"}
{"_id": "q_4106", "text": "return true if the name is used in function decorator"}
{"_id": "q_4107", "text": "return True if `frame` is an astroid.Class node with `node` in the\n subtree of its bases attribute"}
{"_id": "q_4108", "text": "return the higher parent which is not an AssignName, Tuple or List node"}
{"_id": "q_4109", "text": "decorator to store messages that are handled by a checker method"}
{"_id": "q_4110", "text": "Returns the specified argument from a function call.\n\n :param astroid.Call call_node: Node representing a function call to check.\n :param int position: position of the argument.\n :param str keyword: the keyword of the argument.\n\n :returns: The node representing the argument, None if the argument is not found.\n :rtype: astroid.Name\n :raises ValueError: if both position and keyword are None.\n :raises NoSuchArgumentError: if no argument at the provided position or with\n the provided keyword."}
{"_id": "q_4111", "text": "Check if the given exception handler catches\n the given error_type.\n\n The *handler* parameter is a node, representing an ExceptHandler node.\n The *error_type* can be an exception, such as AttributeError,\n the name of an exception, or it can be a tuple of errors.\n The function will return True if the handler catches any of the\n given errors."}
{"_id": "q_4112", "text": "Detect if the given function node is decorated with a property."}
{"_id": "q_4113", "text": "Determine if the `func` node has a decorator with the qualified name `qname`."}
{"_id": "q_4114", "text": "Return the ExceptHandler or the TryExcept node in which the node is."}
{"_id": "q_4115", "text": "Return the inferred value for the given node.\n\n Return None if inference failed or if there is some ambiguity (more than\n one node has been inferred)."}
{"_id": "q_4116", "text": "Return the inferred type for `node`\n\n If there is more than one possible type, or if inferred type is Uninferable or None,\n return None"}
{"_id": "q_4117", "text": "Check if the given function node is a singledispatch function."}
{"_id": "q_4118", "text": "Split the names of the given module into subparts\n\n For example,\n _qualified_names('pylint.checkers.ImportsChecker')\n returns\n ['pylint', 'pylint.checkers', 'pylint.checkers.ImportsChecker']"}
{"_id": "q_4119", "text": "Get a prepared module name from the given import node\n\n In the case of relative imports, this will return the\n absolute qualified module name, which might be useful\n for debugging. Otherwise, the initial module name\n is returned unchanged."}
{"_id": "q_4120", "text": "return a string which represents imports as a tree"}
{"_id": "q_4121", "text": "triggered when an import statement is seen"}
{"_id": "q_4122", "text": "triggered when a from statement is seen"}
{"_id": "q_4123", "text": "Check `node` import or importfrom node position is correct\n\n Send a message if `node` comes before another instruction"}
{"_id": "q_4124", "text": "Checks imports of module `node` are grouped by category\n\n Imports must follow this order: standard, 3rd party, local"}
{"_id": "q_4125", "text": "notify an imported module, used to analyze dependencies"}
{"_id": "q_4126", "text": "check if the module is deprecated"}
{"_id": "q_4127", "text": "check if the module has a preferred replacement"}
{"_id": "q_4128", "text": "return a verbatim layout for displaying dependencies"}
{"_id": "q_4129", "text": "build the internal or the external dependency graph"}
{"_id": "q_4130", "text": "Read config file and return list of options"}
{"_id": "q_4131", "text": "return true if the node should be treated"}
{"_id": "q_4132", "text": "get callbacks from handler for the visited node"}
{"_id": "q_4133", "text": "Check the consistency of msgid.\n\n msg ids for a checker should be a string of len 4, where the two first\n characters are the checker id and the two last the msg id in this\n checker.\n\n :raises InvalidMessageError: If the checker id in the messages are not\n always the same."}
{"_id": "q_4134", "text": "Check that a datetime was inferred.\n If so, emit boolean-datetime warning."}
{"_id": "q_4135", "text": "Manage message of different type and in the context of path."}
{"_id": "q_4136", "text": "Launch layouts display"}
{"_id": "q_4137", "text": "get title for objects"}
{"_id": "q_4138", "text": "set different default options with _default dictionary"}
{"_id": "q_4139", "text": "visit one class and add it to diagram"}
{"_id": "q_4140", "text": "return ancestor nodes of a class node"}
{"_id": "q_4141", "text": "extract recursively classes related to klass_node"}
{"_id": "q_4142", "text": "leave the pyreverse.utils.Project node\n\n return the generated diagram definition"}
{"_id": "q_4143", "text": "visit astroid.ImportFrom and catch modules for package diagram"}
{"_id": "q_4144", "text": "return a class diagram definition for the given klass and its\n related klasses"}
{"_id": "q_4145", "text": "Get the diagrams configuration data\n\n :param project:The pyreverse project\n :type project: pyreverse.utils.Project\n :param linker: The linker\n :type linker: pyreverse.inspector.Linker(IdGeneratorMixIn, LocalsVisitor)\n\n :returns: The list of diagram definitions\n :rtype: list(:class:`pylint.pyreverse.diagrams.ClassDiagram`)"}
{"_id": "q_4146", "text": "Check if the given owner should be ignored\n\n This will verify if the owner's module is in *ignored_modules*\n or the owner's module fully qualified name is in *ignored_modules*\n or if the *ignored_modules* contains a pattern which catches\n the fully qualified name of the module.\n\n Also, similar checks are done for the owner itself, if its name\n matches any name from the *ignored_classes* or if its qualified\n name can be found in *ignored_classes*."}
{"_id": "q_4147", "text": "Check if the given node has a parent of the given type."}
{"_id": "q_4148", "text": "Check if the given name is used as a variadic argument."}
{"_id": "q_4149", "text": "Check that the given uninferable Call node does not\n call an actual function."}
{"_id": "q_4150", "text": "Detect TypeErrors for unary operands."}
{"_id": "q_4151", "text": "visit an astroid.AssignName node\n\n handle locals_type"}
{"_id": "q_4152", "text": "handle an astroid.assignattr node\n\n handle instance_attrs_type"}
{"_id": "q_4153", "text": "return true if the module should be added to dependencies"}
{"_id": "q_4154", "text": "colorize message by wrapping it with ansi escape codes\n\n :type msg: str or unicode\n :param msg: the message string to colorize\n\n :type color: str or None\n :param color:\n the color identifier (see `ANSI_COLORS` for available values)\n\n :type style: str or None\n :param style:\n style string (see `ANSI_COLORS` for available values). To get\n several style effects at the same time, use a comma as separator.\n\n :raise KeyError: if a nonexistent color or style identifier is given\n\n :rtype: str or unicode\n :return: the ansi escaped string"}
{"_id": "q_4155", "text": "Register the reporter classes with the linter."}
{"_id": "q_4156", "text": "manage message of different types, and colorize output\n using ansi escape codes"}
{"_id": "q_4157", "text": "open a vcg graph"}
{"_id": "q_4158", "text": "draw a node"}
{"_id": "q_4159", "text": "draw an edge from a node to another."}
{"_id": "q_4160", "text": "Check the new string formatting."}
{"_id": "q_4161", "text": "check for bad escapes in a non-raw string.\n\n prefix: lowercase string of eg 'ur' string prefix markers.\n string_body: the un-parsed body of the string, not including the quote\n marks.\n start_row: integer line number in the source."}
{"_id": "q_4162", "text": "display a section as text"}
{"_id": "q_4163", "text": "Display an evaluation section as a text."}
{"_id": "q_4164", "text": "Register a MessageDefinition with consistency in mind.\n\n :param MessageDefinition message: The message definition being added."}
{"_id": "q_4165", "text": "Check that a symbol is not already used."}
{"_id": "q_4166", "text": "Raise an error when a symbol is duplicated.\n\n :param str msgid: The msgid corresponding to the symbols\n :param str symbol: Offending symbol\n :param str other_symbol: Other offending symbol\n :raises InvalidMessageError: when a symbol is duplicated."}
{"_id": "q_4167", "text": "Raise an error when a msgid is duplicated.\n\n :param str symbol: The symbol corresponding to the msgids\n :param str msgid: Offending msgid\n :param str other_msgid: Other offending msgid\n :raises InvalidMessageError: when a msgid is duplicated."}
{"_id": "q_4168", "text": "Generates a user-consumable representation of a message.\n\n Can be just the message ID or the ID and the symbol."}
{"_id": "q_4169", "text": "Output full messages list documentation in ReST format."}
{"_id": "q_4170", "text": "Output full documentation in ReST format for all extension modules"}
{"_id": "q_4171", "text": "Use sched_affinity if available for virtualized or containerized environments."}
{"_id": "q_4172", "text": "take a list of module names which are pylint plugins and load\n and register them"}
{"_id": "q_4173", "text": "overridden from config.OptionsProviderMixin to handle some\n special options"}
{"_id": "q_4174", "text": "disable all reporters"}
{"_id": "q_4175", "text": "Disable all other checkers and enable Python 3 warnings."}
{"_id": "q_4176", "text": "return all available checkers as a list"}
{"_id": "q_4177", "text": "Get all the checker names that this linter knows about."}
{"_id": "q_4178", "text": "return checkers needed for activated messages and reports"}
{"_id": "q_4179", "text": "get modules and errors from a list of modules and handle errors"}
{"_id": "q_4180", "text": "set the name of the currently analyzed module and\n init statistics for it"}
{"_id": "q_4181", "text": "Check a module from its astroid representation."}
{"_id": "q_4182", "text": "optik callback for printing some help about a particular message"}
{"_id": "q_4183", "text": "optik callback for printing full documentation"}
{"_id": "q_4184", "text": "Wrap the text on the given line length."}
{"_id": "q_4185", "text": "return decoded line from encoding or decode with default encoding"}
{"_id": "q_4186", "text": "Determines if the basename is matched in a regex blacklist\n\n :param str base_name: The basename of the file\n :param list black_list_re: A collection of regex patterns to match against.\n Successful matches are blacklisted.\n\n :returns: `True` if the basename is blacklisted, `False` otherwise.\n :rtype: bool"}
{"_id": "q_4187", "text": "load all module and package in the given directory, looking for a\n 'register' function in each one, used to register pylint checkers"}
{"_id": "q_4188", "text": "return string as a comment"}
{"_id": "q_4189", "text": "format an options section using the INI format"}
{"_id": "q_4190", "text": "format options using the INI format"}
{"_id": "q_4191", "text": "overridden to detect problems easily"}
{"_id": "q_4192", "text": "return the ancestor nodes"}
{"_id": "q_4193", "text": "trick to get table content without actually writing it\n\n return an aligned list of lists containing table cells values as string"}
{"_id": "q_4194", "text": "Walk the AST to collect block level options line numbers."}
{"_id": "q_4195", "text": "Report an ignored message.\n\n state_scope is either MSG_STATE_SCOPE_MODULE or MSG_STATE_SCOPE_CONFIG,\n depending on whether the message was disabled locally in the module,\n or globally. The other arguments are the same as for add_message."}
{"_id": "q_4196", "text": "register a report\n\n reportid is the unique identifier for the report\n r_title the report's title\n r_cb the method to call to make the report\n checker is the checker defining the report"}
{"_id": "q_4197", "text": "render registered reports"}
{"_id": "q_4198", "text": "Get the name of the property that the given node is a setter for.\n\n :param node: The node to get the property name for.\n :type node: str\n\n :rtype: str or None\n :returns: The name of the property that the node is a setter for,\n or None if one could not be found."}
{"_id": "q_4199", "text": "Get the property node for the given setter node.\n\n :param node: The node to get the property for.\n :type node: astroid.FunctionDef\n\n :rtype: astroid.FunctionDef or None\n :returns: The node relating to the property of the given setter node,\n or None if one could not be found."}
{"_id": "q_4200", "text": "Check if a return node returns a value other than None.\n\n :param return_node: The return node to check.\n :type return_node: astroid.Return\n\n :rtype: bool\n :return: True if the return node returns a value other than None,\n False otherwise."}
{"_id": "q_4201", "text": "Gets all of the possible raised exception types for the given raise node.\n\n .. note::\n\n Caught exception types are ignored.\n\n\n :param node: The raise node to find exception types for.\n :type node: astroid.node_classes.NodeNG\n\n :returns: A list of exception types possibly raised by :param:`node`.\n :rtype: set(str)"}
{"_id": "q_4202", "text": "inspect the source file to find messages activated or deactivated by id."}
{"_id": "q_4203", "text": "inspect the source file to find encoding problem"}
{"_id": "q_4204", "text": "inspect the source to find fixme problems"}
{"_id": "q_4205", "text": "Check if the name is a future import from another module."}
{"_id": "q_4206", "text": "get overridden method if any"}
{"_id": "q_4207", "text": "return extra information to add to the message for unpacking-non-sequence\n and unbalanced-tuple-unpacking errors"}
{"_id": "q_4208", "text": "Detect that the given frames share a global\n scope.\n\n Two frames share a global scope when neither\n of them is hidden under a function scope, nor is\n any of their parent scopes, up to the root scope.\n In this case, depending on something defined later on\n will not work, because it is still undefined.\n\n Example:\n class A:\n # B has the same global scope as `C`, leading to a NameError.\n class B(C): ...\n class C: ..."}
{"_id": "q_4209", "text": "Return True if the node is in a local class scope, as an assignment.\n\n :param node: Node considered\n :type node: astroid.Node\n :return: True if the node is in a local class scope, as an assignment. False otherwise.\n :rtype: bool"}
{"_id": "q_4210", "text": "Return True if there is a node with the same name in the to_consume dict of an upper scope\n and if that scope is a function\n\n :param node: node to check for\n :type node: astroid.Node\n :param index: index of the current consumer inside self._to_consume\n :type index: int\n :return: True if there is a node with the same name in the to_consume dict of an upper scope\n and if that scope is a function\n :rtype: bool"}
{"_id": "q_4211", "text": "Check for unbalanced tuple unpacking\n and unpacking non sequences."}
{"_id": "q_4212", "text": "Update consumption analysis for metaclasses."}
{"_id": "q_4213", "text": "return a list of subpackages for the given directory"}
{"_id": "q_4214", "text": "make a layout with some stats about duplication"}
{"_id": "q_4215", "text": "append a file to search for similarities"}
{"_id": "q_4216", "text": "display computed similarities on stdout"}
{"_id": "q_4217", "text": "find similarities in the two given linesets"}
{"_id": "q_4218", "text": "create the index for this set"}
{"_id": "q_4219", "text": "Check if a definition signature is equivalent to a call."}
{"_id": "q_4220", "text": "Determine if the two methods have different parameters\n\n They are considered to have different parameters if:\n\n * they have different positional parameters, including different names\n\n * one of the methods has variadics, while the other does not\n\n * they have different keyword only parameters."}
{"_id": "q_4221", "text": "Safely infer the return value of a function.\n\n Returns None if inference failed or if there is some ambiguity (more than\n one node has been inferred). Otherwise returns the inferred value."}
{"_id": "q_4222", "text": "Set the given node as accessed."}
{"_id": "q_4223", "text": "init visit variable _accessed"}
{"_id": "q_4224", "text": "Detect that a class has a consistent mro or duplicate bases."}
{"_id": "q_4225", "text": "Detect that a class inherits something which is not\n a class or a type."}
{"_id": "q_4226", "text": "Check if the given function node is a useless method override\n\n We consider it *useless* if it does nothing beyond calling the super()\n builtin, adding nothing over not implementing the method at all.\n If the method uses super() to delegate an operation to the rest of the MRO,\n the method called is the same as the current one, and the arguments\n passed to super() are the same as the parameters that were passed to\n this method, then the method could be removed altogether, letting\n other implementations take precedence."}
{"_id": "q_4227", "text": "on method node, check if this method couldn't be a function\n\n ignore class, static and abstract methods, initializer,\n methods overridden from a parent class."}
{"_id": "q_4228", "text": "Check that the given AssignAttr node\n is defined in the class slots."}
{"_id": "q_4229", "text": "check if the name handle an access to a class member\n if so, register it"}
{"_id": "q_4230", "text": "check that the given class node implements abstract methods from\n base classes"}
{"_id": "q_4231", "text": "Verify that the exception context is properly set.\n\n An exception context can be only `None` or an exception."}
{"_id": "q_4232", "text": "display results encapsulated in the layout tree"}
{"_id": "q_4233", "text": "Check if a class node is a typing.NamedTuple class"}
{"_id": "q_4234", "text": "Check if a class definition defines an Enum class.\n\n :param node: The class node to check.\n :type node: astroid.ClassDef\n\n :returns: True if the given node represents an Enum class. False otherwise.\n :rtype: bool"}
{"_id": "q_4235", "text": "Check if a class definition defines a Python 3.7+ dataclass\n\n :param node: The class node to check.\n :type node: astroid.ClassDef\n\n :returns: True if the given node represents a dataclass class. False otherwise.\n :rtype: bool"}
{"_id": "q_4236", "text": "initialize visit variables"}
{"_id": "q_4237", "text": "check number of public methods"}
{"_id": "q_4238", "text": "check the node has any spelling errors"}
{"_id": "q_4239", "text": "Format the message according to the given template.\n\n The template format is the one of the format method :\n cf. http://docs.python.org/2/library/string.html#formatstrings"}
{"_id": "q_4240", "text": "Check if the given node is an actual elif\n\n This is a problem we're having with the builtin ast module,\n which splits `elif` branches into a separate if statement.\n Unfortunately we need to know the exact type in certain\n cases."}
{"_id": "q_4241", "text": "Check if the given if node can be simplified.\n\n The if statement can be reduced to a boolean expression\n in some cases. For instance, if there are two branches\n and both of them return a boolean value that depends on\n the result of the statement's test, then this can be reduced\n to `bool(test)` without losing any functionality."}
{"_id": "q_4242", "text": "Check if an exception of type StopIteration is raised inside a generator"}
{"_id": "q_4243", "text": "Check if a StopIteration exception is raised by the call to next function\n\n If the next value has a default value, then do not add message.\n\n :param node: Check to see if this Call node is a next function\n :type node: :class:`astroid.node_classes.Call`"}
{"_id": "q_4244", "text": "Get the duplicated types from the underlying isinstance calls.\n\n :param astroid.BoolOp node: Node which should contain a bunch of isinstance calls.\n :returns: Dictionary of the comparison objects from the isinstance calls,\n to duplicate values from consecutive calls.\n :rtype: dict"}
{"_id": "q_4245", "text": "Check isinstance calls which can be merged together."}
{"_id": "q_4246", "text": "Returns True if node is of the 'condition and true_value or false_value' form.\n\n None of condition, true_value and false_value should be a complex boolean expression"}
{"_id": "q_4247", "text": "Check that all return statements inside a function are consistent.\n\n Return statements are consistent if:\n - all returns are explicit and if there is no implicit return;\n - all returns are empty and if there is, possibly, an implicit return.\n\n Args:\n node (astroid.FunctionDef): the function holding the return statements."}
{"_id": "q_4248", "text": "Check if the node ends with an explicit return statement.\n\n Args:\n node (astroid.NodeNG): node to be checked.\n\n Returns:\n bool: True if the node ends with an explicit statement, False otherwise."}
{"_id": "q_4249", "text": "Emit a convention whenever range and len are used for indexing."}
{"_id": "q_4250", "text": "check if we need graphviz for different output format"}
{"_id": "q_4251", "text": "checking arguments and run project"}
{"_id": "q_4252", "text": "write a class diagram"}
{"_id": "q_4253", "text": "initialize DotWriter and add options for layout."}
{"_id": "q_4254", "text": "return True if message may be emitted using the current interpreter"}
{"_id": "q_4255", "text": "return the help string for the given message id"}
{"_id": "q_4256", "text": "Extracts the environment PYTHONPATH and appends the current sys.path to\n those."}
{"_id": "q_4257", "text": "Pylint the given file.\n\n When run from emacs we will be in the directory of a file, and passed its\n filename. If this file is part of a package and is trying to import other\n modules from within its own package or another package rooted in a directory\n below it, pylint will classify it as a failed import.\n\n To get around this, we traverse down the directory tree to find the root of\n the package this module is in. We then invoke pylint from this directory.\n\n Finally, we must correct the filenames in the output generated by pylint so\n Emacs doesn't become confused (it will expect just the original filename,\n while pylint may extend it with extra directories if we've traversed down\n the tree)"}
{"_id": "q_4258", "text": "Run pylint from python\n\n ``command_options`` is a string containing ``pylint`` command line options;\n ``return_std`` (boolean) indicates return of created standard output\n and error (see below);\n ``stdout`` and ``stderr`` are 'file-like' objects in which standard output\n could be written.\n\n Calling agent is responsible for stdout/err management (creation, close).\n Default standard output and error are those from sys,\n or standalone ones (``subprocess.PIPE``) are used\n if they are not set and ``return_std``.\n\n If ``return_std`` is set to ``True``, this function returns a 2-tuple\n containing standard output and error related to created process,\n as follows: ``(stdout, stderr)``.\n\n To silently run Pylint on a module, and get its standard output and error:\n >>> (pylint_stdout, pylint_stderr) = py_run( 'module_name.py', True)"}
{"_id": "q_4259", "text": "recursive function doing the real work for get_cycles"}
{"_id": "q_4260", "text": "returns self._source"}
{"_id": "q_4261", "text": "Generates a graph file.\n\n :param str outputfile: filename and path [defaults to graphname.png]\n :param str dotfile: filename and path [defaults to graphname.dot]\n :param str mapfile: filename and path\n\n :rtype: str\n :return: a path to the generated file"}
{"_id": "q_4262", "text": "If the msgid is a numeric one, then register it to inform the user\n it could use a symbolic msgid instead."}
{"_id": "q_4263", "text": "reenable message of the given id"}
{"_id": "q_4264", "text": "Get the message symbol of the given message id\n\n Return the original message id if the message does not\n exist."}
{"_id": "q_4265", "text": "Adds a message given by ID or name.\n\n If provided, the message string is expanded using args.\n\n AST checkers must provide the node argument (but may optionally\n provide line if the line number is different), raw and token checkers\n must provide the line argument."}
{"_id": "q_4266", "text": "output a full documentation in ReST format"}
{"_id": "q_4267", "text": "Return a line with |s for each of the positions in the given lists."}
{"_id": "q_4268", "text": "Get an indentation string for hanging indentation, consisting of the line-indent plus\n a number of spaces to fill up to the column of this token.\n\n e.g. the token indent for foo\n in \"<TAB><TAB>print(foo)\"\n is \"<TAB><TAB> \""}
{"_id": "q_4269", "text": "Returns the valid offsets for the token at the given position."}
{"_id": "q_4270", "text": "Extracts indentation information for a hanging indent\n\n Case of hanging indent after a bracket (including parenthesis)\n\n :param str bracket: bracket in question\n :param int position: Position of bracket in self._tokens\n\n :returns: the state and valid positions for hanging indentation\n :rtype: _ContinuedIndent"}
{"_id": "q_4271", "text": "Extracts indentation information for a continued indent."}
{"_id": "q_4272", "text": "Pushes a new token for continued indentation on the stack.\n\n Tokens that can modify continued indentation offsets are:\n * opening brackets\n * 'lambda'\n * : inside dictionaries\n\n push_token relies on the caller to filter out those\n interesting tokens.\n\n :param int token: The concrete token\n :param int position: The position of the token in the stream."}
{"_id": "q_4273", "text": "a new line has been encountered, process it if necessary"}
{"_id": "q_4274", "text": "Check that there are not unnecessary parens after a keyword.\n\n Parens are unnecessary if there is exactly one balanced outer pair on a\n line, and it is followed by a colon, and contains no commas (i.e. is not a\n tuple).\n\n Args:\n tokens: list of Tokens; the entire list of Tokens.\n start: int; the position of the keyword in the token list."}
{"_id": "q_4275", "text": "Extended check of PEP-484 type hint presence"}
{"_id": "q_4276", "text": "check the node line number and check it if not yet done"}
{"_id": "q_4277", "text": "Check for lines containing multiple statements."}
{"_id": "q_4278", "text": "check lines have less than a maximum number of characters"}
{"_id": "q_4279", "text": "return the indent level of the string"}
{"_id": "q_4280", "text": "Checks if an import node is in the context of a conditional."}
{"_id": "q_4281", "text": "Look for indexing exceptions."}
{"_id": "q_4282", "text": "Visit an except handler block and check for exception unpacking."}
{"_id": "q_4283", "text": "Visit a raise statement and check for raising\n strings or old-raise-syntax."}
{"_id": "q_4284", "text": "search the pylint rc file and return its path if it find it, else None"}
{"_id": "q_4285", "text": "return a validated value for an option according to its type\n\n optional argument name is only used for error message formatting"}
{"_id": "q_4286", "text": "optik callback for option setting"}
{"_id": "q_4287", "text": "write a configuration file according to the current configuration\n into the given stream or stdout"}
{"_id": "q_4288", "text": "return the usage string for available options"}
{"_id": "q_4289", "text": "initialize the provider using default values"}
{"_id": "q_4290", "text": "return the dictionary defining an option given its name"}
{"_id": "q_4291", "text": "return an iterator on options grouped by section\n\n (section, [list of (optname, optdict, optvalue)])"}
{"_id": "q_4292", "text": "Determines if a BoundMethod node represents a method call.\n\n Args:\n func (astroid.BoundMethod): The BoundMethod AST node to check.\n types (Optional[String]): Optional sequence of caller type names to restrict check.\n methods (Optional[String]): Optional sequence of method names to restrict check.\n\n Returns:\n bool: True if the node represents a method call for the given type and\n method names, False otherwise."}
{"_id": "q_4293", "text": "Checks if node represents a string with complex formatting specs.\n\n Args:\n node (astroid.node_classes.NodeNG): AST node to check\n Returns:\n bool: True if inferred string uses complex formatting, False otherwise"}
{"_id": "q_4294", "text": "Checks to see if a module uses a non-Python logging module."}
{"_id": "q_4295", "text": "Checks calls to logging methods."}
{"_id": "q_4296", "text": "return True if the node is inside a kind of for loop"}
{"_id": "q_4297", "text": "Returns the loop node that holds the break node in arguments.\n\n Args:\n break_node (astroid.Break): the break node of interest.\n\n Returns:\n astroid.For or astroid.While: the loop node holding the break node."}
{"_id": "q_4298", "text": "Returns True if a loop may end up in a break statement.\n\n Args:\n loop (astroid.For, astroid.While): the loop node inspected.\n\n Returns:\n bool: True if the loop may end up in a break statement, False otherwise."}
{"_id": "q_4299", "text": "Returns a tuple of property classes and names.\n\n Property classes are fully qualified, such as 'abc.abstractproperty' and\n property names are the actual names, such as 'abstract_property'."}
{"_id": "q_4300", "text": "return True if the object is a method redefined via decorator.\n\n For example:\n @property\n def x(self): return self._x\n @x.setter\n def x(self, value): self._x = value"}
{"_id": "q_4301", "text": "Is this a call with exactly 1 argument,\n where that argument is positional?"}
{"_id": "q_4302", "text": "Check instantiating abstract class with\n abc.ABCMeta as metaclass."}
{"_id": "q_4303", "text": "Check that any loop with an else clause has a break statement."}
{"_id": "q_4304", "text": "initialize visit variables and statistics"}
{"_id": "q_4305", "text": "check for various kind of statements without effect"}
{"_id": "q_4306", "text": "check whether or not the lambda is suspicious"}
{"_id": "q_4307", "text": "check the use of an assert statement on a tuple."}
{"_id": "q_4308", "text": "check duplicate key in dictionary"}
{"_id": "q_4309", "text": "check that a node is not inside a finally clause of a\n try...finally statement.\n If, before the try...finally block, we find a parent whose type is\n in breaker_classes, we skip the whole check."}
{"_id": "q_4310", "text": "check module level assigned names"}
{"_id": "q_4311", "text": "Check if we compare to a literal, which is usually what we do not want to do."}
{"_id": "q_4312", "text": "create the subgraphs representing any `if` and `for` statements"}
{"_id": "q_4313", "text": "parse the body and any `else` block of `if` and `for` statements"}
{"_id": "q_4314", "text": "visit an astroid.Module node to check too complex rating and\n add message if it is greater than max_complexity stored from options"}
{"_id": "q_4315", "text": "walk to the checker's dir and collect visit and leave methods"}
{"_id": "q_4316", "text": "call visit events of astroid checkers for the given node, recurse on\n its children, then leave events."}
{"_id": "q_4317", "text": "create a relationship"}
{"_id": "q_4318", "text": "return a relationship or None"}
{"_id": "q_4319", "text": "return visible attributes, possibly with class name"}
{"_id": "q_4320", "text": "return visible methods"}
{"_id": "q_4321", "text": "create a diagram object"}
{"_id": "q_4322", "text": "return class names if needed in diagram"}
{"_id": "q_4323", "text": "return all class nodes in the diagram"}
{"_id": "q_4324", "text": "return a module by its name, looking also for relative imports;\n raise KeyError if not found"}
{"_id": "q_4325", "text": "add dependencies created by from-imports"}
{"_id": "q_4326", "text": "Deletes old deployed versions of the function in AWS Lambda.\n\n Won't delete $Latest and any aliased version\n\n :param str src:\n The path to your Lambda ready project (folder must contain a valid\n config.yaml and handler module (e.g.: service.py).\n :param int keep_last_versions:\n The number of recent versions to keep and not delete"}
{"_id": "q_4327", "text": "Deploys a new function to AWS Lambda.\n\n :param str src:\n The path to your Lambda ready project (folder must contain a valid\n config.yaml and handler module (e.g.: service.py).\n :param str local_package:\n The path to a local package which should be included in the deploy as\n well (and/or is not available on PyPi)"}
{"_id": "q_4328", "text": "Deploys a new function via AWS S3.\n\n :param str src:\n The path to your Lambda ready project (folder must contain a valid\n config.yaml and handler module (e.g.: service.py).\n :param str local_package:\n The path to a local package which should be included in the deploy as\n well (and/or is not available on PyPi)"}
{"_id": "q_4329", "text": "Uploads a new function to AWS S3.\n\n :param str src:\n The path to your Lambda ready project (folder must contain a valid\n config.yaml and handler module (e.g.: service.py).\n :param str local_package:\n The path to a local package which should be included in the deploy as\n well (and/or is not available on PyPi)"}
{"_id": "q_4330", "text": "Copies template files to a given directory.\n\n :param str src:\n The path to output the template lambda project files.\n :param bool minimal:\n Minimal possible template files (excludes event.json)."}
{"_id": "q_4331", "text": "Translate a string of the form \"module.function\" into a callable\n function.\n\n :param str src:\n The path to your Lambda project containing a valid handler file.\n :param str handler:\n A dot delimited string representing the `<module>.<function name>`."}
{"_id": "q_4332", "text": "Shortcut to insert the `account_id` and `role` into the iam string."}
{"_id": "q_4333", "text": "Upload a function to AWS S3."}
{"_id": "q_4334", "text": "Download the data at a URL, and cache it under the given name.\n\n The file is stored under `pyav/test` with the given name in the directory\n :envvar:`PYAV_TESTDATA_DIR`, or the first that is writeable of:\n\n - the current virtualenv\n - ``/usr/local/share``\n - ``/usr/local/lib``\n - ``/usr/share``\n - ``/usr/lib``\n - the user's home"}
{"_id": "q_4335", "text": "Download and return a path to a sample from the FFmpeg test suite.\n\n Data is handled by :func:`cached_download`.\n\n See the `FFmpeg Automated Test Environment <https://www.ffmpeg.org/fate.html>`_"}
{"_id": "q_4336", "text": "Get distutils-compatible extension extras for the given library.\n\n This requires ``pkg-config``."}
{"_id": "q_4337", "text": "Update the `dst` with the `src`, extending values where lists.\n\n Primarily useful for integrating results from `get_library_config`."}
{"_id": "q_4338", "text": "Spawn a process, and eat the stdio."}
{"_id": "q_4339", "text": "filter the quoted text out of a message"}
{"_id": "q_4340", "text": "parse one document to prep for TextRank"}
{"_id": "q_4341", "text": "construct the TextRank graph from parsed paragraphs"}
{"_id": "q_4342", "text": "output the graph in Dot file format"}
{"_id": "q_4343", "text": "render the TextRank graph for visual formats"}
{"_id": "q_4344", "text": "run the TextRank algorithm"}
{"_id": "q_4345", "text": "leverage noun phrase chunking"}
{"_id": "q_4346", "text": "iterate through the noun phrases"}
{"_id": "q_4347", "text": "iterator for collecting the named-entities"}
{"_id": "q_4348", "text": "create a MinHash digest"}
{"_id": "q_4349", "text": "determine distance for each sentence"}
{"_id": "q_4350", "text": "iterator for the most significant sentences, up to a specified limit"}
{"_id": "q_4351", "text": "pretty print a JSON object"}
{"_id": "q_4352", "text": "Fetch data about tag"}
{"_id": "q_4353", "text": "Create the tag."}
{"_id": "q_4354", "text": "Private method to extract from a value, the resources.\n It will check the type of object in the array provided and build\n the right structure for the API."}
{"_id": "q_4355", "text": "Add the Tag to a Droplet.\n\n Attributes accepted at creation time:\n droplet: array of string or array of int, or array of Droplets."}
{"_id": "q_4356", "text": "Remove the Tag from the Droplet.\n\n Attributes accepted at creation time:\n droplet: array of string or array of int, or array of Droplets."}
{"_id": "q_4357", "text": "Class method that will return a Action object by ID."}
{"_id": "q_4358", "text": "Wait until the action is marked as completed or with an error.\n It will return True in case of success, otherwise False.\n\n Optional Args:\n update_every_seconds - int : number of seconds to wait before\n checking if the action is completed."}
{"_id": "q_4359", "text": "Class method that will return a Droplet object by ID.\n\n Args:\n api_token (str): token\n droplet_id (int): droplet id"}
{"_id": "q_4360", "text": "Take a snapshot!\n\n Args:\n snapshot_name (str): name of snapshot\n\n Optional Args:\n return_dict (bool): Return a dict when True (default),\n otherwise return an Action.\n power_off (bool): Before taking the snapshot the droplet will be\n turned off with another API call. It will wait until the\n droplet will be powered off.\n\n Returns dict or Action"}
{"_id": "q_4361", "text": "Change the kernel to a new one\n\n Args:\n kernel : instance of digitalocean.Kernel.Kernel\n\n Optional Args:\n return_dict (bool): Return a dict when True (default),\n otherwise return an Action.\n\n Returns dict or Action"}
{"_id": "q_4362", "text": "Check and return a list of SSH key IDs or fingerprints according\n to DigitalOcean's API. This method is used to check and create a\n droplet with the correct SSH keys."}
{"_id": "q_4363", "text": "Returns a list of Action objects\n This actions can be used to check the droplet's status"}
{"_id": "q_4364", "text": "Returns a specific Action by its ID.\n\n Args:\n action_id (int): id of action"}
{"_id": "q_4365", "text": "Returns a list of Record objects"}
{"_id": "q_4366", "text": "Load the FloatingIP object from DigitalOcean.\n\n Requires self.ip to be set."}
{"_id": "q_4367", "text": "Creates a FloatingIP and assigns it to a Droplet.\n\n Note: Every argument and parameter given to this method will be\n assigned to the object.\n\n Args:\n droplet_id: int - droplet id"}
{"_id": "q_4368", "text": "Assign a FloatingIP to a Droplet.\n\n Args:\n droplet_id: int - droplet id"}
{"_id": "q_4369", "text": "Add tags to this Firewall."}
{"_id": "q_4370", "text": "Remove tags from this Firewall."}
{"_id": "q_4371", "text": "Class method that will return a SSHKey object by ID."}
{"_id": "q_4372", "text": "Load the SSHKey object from DigitalOcean.\n\n Requires either self.id or self.fingerprint to be set."}
{"_id": "q_4373", "text": "This method will load a SSHKey object from DigitalOcean\n from a public_key. This method will avoid problems like\n uploading the same public_key twice."}
{"_id": "q_4374", "text": "This function returns a list of Region object."}
{"_id": "q_4375", "text": "This function returns a list of Droplet object."}
{"_id": "q_4376", "text": "This function returns a list of SSHKey object."}
{"_id": "q_4377", "text": "Return a SSHKey object by its ID."}
{"_id": "q_4378", "text": "This method returns a list of all tags."}
{"_id": "q_4379", "text": "This function returns a list of FloatingIP objects."}
{"_id": "q_4380", "text": "Returns a list of Load Balancer objects."}
{"_id": "q_4381", "text": "Returns a Load Balancer object by its ID.\n\n Args:\n id (str): Load Balancer ID"}
{"_id": "q_4382", "text": "This function returns a list of Certificate objects."}
{"_id": "q_4383", "text": "This method returns a list of all Snapshots."}
{"_id": "q_4384", "text": "This method returns a list of all Snapshots based on Droplets."}
{"_id": "q_4385", "text": "This method returns a list of all Snapshots based on volumes."}
{"_id": "q_4386", "text": "This function returns a list of Volume objects."}
{"_id": "q_4387", "text": "Returns a Volume object by its ID."}
{"_id": "q_4388", "text": "Return a Firewall by its ID."}
{"_id": "q_4389", "text": "Class method that will return a LoadBalancer object by its ID.\n\n Args:\n api_token (str): DigitalOcean API token\n id (str): Load Balancer ID"}
{"_id": "q_4390", "text": "Loads updated attributues for a LoadBalancer object.\n\n Requires self.id to be set."}
{"_id": "q_4391", "text": "Creates a new LoadBalancer.\n\n Note: Every argument and parameter given to this method will be\n assigned to the object.\n\n Args:\n name (str): The Load Balancer's name\n region (str): The slug identifier for a DigitalOcean region\n algorithm (str, optional): The load balancing algorithm to be\n used. Currently, it must be either \"round_robin\" or\n \"least_connections\"\n forwarding_rules (obj:`list`): A list of `ForwrdingRules` objects\n health_check (obj, optional): A `HealthCheck` object\n sticky_sessions (obj, optional): A `StickySessions` object\n redirect_http_to_https (bool, optional): A boolean indicating\n whether HTTP requests to the Load Balancer should be\n redirected to HTTPS\n droplet_ids (obj:`list` of `int`): A list of IDs representing\n Droplets to be added to the Load Balancer (mutually\n exclusive with 'tag')\n tag (str): A string representing a DigitalOcean Droplet tag\n (mutually exclusive with 'droplet_ids')"}
{"_id": "q_4392", "text": "Save the LoadBalancer"}
{"_id": "q_4393", "text": "Assign a LoadBalancer to a Droplet.\n\n Args:\n droplet_ids (obj:`list` of `int`): A list of Droplet IDs"}
{"_id": "q_4394", "text": "Unassign a LoadBalancer.\n\n Args:\n droplet_ids (obj:`list` of `int`): A list of Droplet IDs"}
{"_id": "q_4395", "text": "Removes existing forwarding rules from a LoadBalancer.\n\n Args:\n forwarding_rules (obj:`list`): A list of `ForwrdingRules` objects"}
{"_id": "q_4396", "text": "Creates a new record for a domain.\n\n Args:\n type (str): The type of the DNS record (e.g. A, CNAME, TXT).\n name (str): The host name, alias, or service being defined by the\n record.\n data (int): Variable data depending on record type.\n priority (int): The priority for SRV and MX records.\n port (int): The port for SRV records.\n ttl (int): The time to live for the record, in seconds.\n weight (int): The weight for SRV records.\n flags (int): An unsigned integer between 0-255 used for CAA records.\n tags (string): The parameter tag for CAA records. Valid values are\n \"issue\", \"wildissue\", or \"iodef\""}
{"_id": "q_4397", "text": "Save existing record"}
{"_id": "q_4398", "text": "Checks if any timeout for the requests to DigitalOcean is required.\n To set a timeout, use the REQUEST_TIMEOUT_ENV_VAR environment\n variable."}
{"_id": "q_4399", "text": "Class method that will return an Volume object by ID."}
{"_id": "q_4400", "text": "Creates a Block Storage volume\n\n Note: Every argument and parameter given to this method will be\n assigned to the object.\n\n Args:\n name: string - a name for the volume\n snapshot_id: string - unique identifier for the volume snapshot\n size_gigabytes: int - size of the Block Storage volume in GiB\n filesystem_type: string, optional - name of the filesystem type the\n volume will be formated with ('ext4' or 'xfs')\n filesystem_label: string, optional - the label to be applied to the\n filesystem, only used in conjunction with filesystem_type\n\n Optional Args:\n description: string - text field to describe a volume"}
{"_id": "q_4401", "text": "Attach a Volume to a Droplet.\n\n Args:\n droplet_id: int - droplet id\n region: string - slug identifier for the region"}
{"_id": "q_4402", "text": "Detach a Volume to a Droplet.\n\n Args:\n size_gigabytes: int - size of the Block Storage volume in GiB\n region: string - slug identifier for the region"}
{"_id": "q_4403", "text": "Retrieve the list of snapshots that have been created from a volume.\n\n Args:"}
{"_id": "q_4404", "text": "Class method that will return a Certificate object by its ID."}
{"_id": "q_4405", "text": "Class method that will return an Image object by ID or slug.\n\n This method is used to validate the type of the image. If it is a\n number, it will be considered as an Image ID, instead if it is a\n string, it will considered as slug."}
{"_id": "q_4406", "text": "Creates a new custom DigitalOcean Image from the Linux virtual machine\n image located at the provided `url`."}
{"_id": "q_4407", "text": "Load slug.\n\n Loads by id, or by slug if id is not present or use slug is True."}
{"_id": "q_4408", "text": "Rename an image"}
{"_id": "q_4409", "text": "Convert reduce_sum layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4410", "text": "Convert slice operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4411", "text": "Convert clip operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4412", "text": "Convert elementwise addition.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4413", "text": "Convert elementwise subtraction.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4414", "text": "Convert Linear.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4415", "text": "Convert matmul layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4416", "text": "Convert constant layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4417", "text": "Convert transpose layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4418", "text": "Convert reshape layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4419", "text": "Convert squeeze operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4420", "text": "Convert unsqueeze operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4421", "text": "Convert shape operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4422", "text": "Convert Average pooling.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4423", "text": "Convert 3d Max pooling.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4424", "text": "Convert instance normalization layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4425", "text": "Convert dropout.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4426", "text": "Convert relu layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4427", "text": "Convert leaky relu layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4428", "text": "Convert softmax layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4429", "text": "Convert hardtanh layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4430", "text": "Convert selu layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"}
{"_id": "q_4431", "text": "Removes itself from the cache\n\n Note: This is required by the oauthlib"}
{"_id": "q_4432", "text": "Returns the User object\n\n Returns None if the user isn't found or the passwords don't match\n\n :param username: username of the user\n :param password: password of the user"}
{"_id": "q_4433", "text": "Creates a Token object and removes all expired tokens that belong\n to the user\n\n :param token: token object\n :param request: OAuthlib request object"}
{"_id": "q_4434", "text": "Creates Grant object with the given params\n\n :param client_id: ID of the client\n :param code:\n :param request: OAuthlib request object"}
{"_id": "q_4435", "text": "Init app with Flask instance.\n\n You can also pass the instance of Flask later::\n\n oauth = OAuth()\n oauth.init_app(app)"}
{"_id": "q_4436", "text": "Registers a new remote application.\n\n :param name: the name of the remote application\n :param register: whether the remote app will be registered\n\n Find more parameters from :class:`OAuthRemoteApp`."}
{"_id": "q_4437", "text": "Sends a request to the remote server with OAuth tokens attached.\n\n :param data: the data to be sent to the server.\n :param headers: an optional dictionary of headers.\n :param format: the format for the `data`. Can be `urlencoded` for\n URL encoded data or `json` for JSON.\n :param method: the HTTP request method to use.\n :param content_type: an optional content type. If a content type\n is provided, the data is passed as it, and\n the `format` is ignored.\n :param token: an optional token to pass, if it is None, token will\n be generated by tokengetter."}
{"_id": "q_4438", "text": "Handles an oauth1 authorization response."}
{"_id": "q_4439", "text": "Handles an oauth2 authorization response."}
{"_id": "q_4440", "text": "Handles authorization response smartly."}
{"_id": "q_4441", "text": "Uses cached client or create new one with specific token."}
{"_id": "q_4442", "text": "Creates a client with specific access token pair.\n\n :param token: a tuple of access token pair ``(token, token_secret)``\n or a dictionary of access token response.\n :returns: a :class:`requests_oauthlib.oauth1_session.OAuth1Session`\n object."}
{"_id": "q_4443", "text": "When consumer confirm the authrozation."}
{"_id": "q_4444", "text": "Request token handler decorator.\n\n The decorated function should return an dictionary or None as\n the extra credentials for creating the token response.\n\n If you don't need to add any extra credentials, it could be as\n simple as::\n\n @app.route('/oauth/request_token')\n @oauth.request_token_handler\n def request_token():\n return {}"}
{"_id": "q_4445", "text": "Get client secret.\n\n The client object must has ``client_secret`` attribute."}
{"_id": "q_4446", "text": "Get request token secret.\n\n The request token object should a ``secret`` attribute."}
{"_id": "q_4447", "text": "Get access token secret.\n\n The access token object should a ``secret`` attribute."}
{"_id": "q_4448", "text": "Default realms of the client."}
{"_id": "q_4449", "text": "Realms for this request token."}
{"_id": "q_4450", "text": "Retrieves a previously stored client provided RSA key."}
{"_id": "q_4451", "text": "Validates that supplied client key."}
{"_id": "q_4452", "text": "Validates request token is available for client."}
{"_id": "q_4453", "text": "Validates access token is available for client."}
{"_id": "q_4454", "text": "Validate the timestamp and nonce is used or not."}
{"_id": "q_4455", "text": "Check if the token has permission on those realms."}
{"_id": "q_4456", "text": "Validate verifier exists."}
{"_id": "q_4457", "text": "Verify if the request token is existed."}
{"_id": "q_4458", "text": "Save request token to database.\n\n A grantsetter is required, which accepts a token and request\n parameters::\n\n def grantsetter(token, request):\n grant = Grant(\n token=token['oauth_token'],\n secret=token['oauth_token_secret'],\n client=request.client,\n redirect_uri=oauth.redirect_uri,\n realms=request.realms,\n )\n return grant.save()"}
{"_id": "q_4459", "text": "The error page URI.\n\n When something turns error, it will redirect to this error page.\n You can configure the error page URI with Flask config::\n\n OAUTH2_PROVIDER_ERROR_URI = '/error'\n\n You can also define the error page by a named endpoint::\n\n OAUTH2_PROVIDER_ERROR_ENDPOINT = 'oauth.error'"}
{"_id": "q_4460", "text": "When consumer confirm the authorization."}
{"_id": "q_4461", "text": "Determine if client authentication is required for current request.\n\n According to the rfc6749, client authentication is required in the\n following cases:\n\n Resource Owner Password Credentials Grant: see `Section 4.3.2`_.\n Authorization Code Grant: see `Section 4.1.3`_.\n Refresh Token Grant: see `Section 6`_.\n\n .. _`Section 4.3.2`: http://tools.ietf.org/html/rfc6749#section-4.3.2\n .. _`Section 4.1.3`: http://tools.ietf.org/html/rfc6749#section-4.1.3\n .. _`Section 6`: http://tools.ietf.org/html/rfc6749#section-6"}
{"_id": "q_4462", "text": "Authenticate itself in other means.\n\n Other means means is described in `Section 3.2.1`_.\n\n .. _`Section 3.2.1`: http://tools.ietf.org/html/rfc6749#section-3.2.1"}
{"_id": "q_4463", "text": "Authenticate a non-confidential client.\n\n :param client_id: Client ID of the non-confidential client\n :param request: The Request object passed by oauthlib"}
{"_id": "q_4464", "text": "Get the list of scopes associated with the refresh token.\n\n This method is used in the refresh token grant flow. We return\n the scope of the token to be refreshed so it can be applied to the\n new access token."}
{"_id": "q_4465", "text": "Ensures the requested scope matches the scope originally granted\n by the resource owner. If the scope is omitted it is treated as equal\n to the scope originally granted by the resource owner.\n\n DEPRECATION NOTE: This method will cease to be used in oauthlib>0.4.2,\n future versions of ``oauthlib`` use the validator method\n ``get_original_scopes`` to determine the scope of the refreshed token."}
{"_id": "q_4466", "text": "Default redirect_uri for the given client."}
{"_id": "q_4467", "text": "Default scopes for the given client."}
{"_id": "q_4468", "text": "Persist the Bearer token."}
{"_id": "q_4469", "text": "Ensure client_id belong to a valid and active client."}
{"_id": "q_4470", "text": "Ensure the grant code is valid."}
{"_id": "q_4471", "text": "Ensure the client is authorized to use the grant type requested.\n\n It will allow any of the four grant types (`authorization_code`,\n `password`, `client_credentials`, `refresh_token`) by default.\n Implemented `allowed_grant_types` for client object to authorize\n the request.\n\n It is suggested that `allowed_grant_types` should contain at least\n `authorization_code` and `refresh_token`."}
{"_id": "q_4472", "text": "Ensure the token is valid and belongs to the client\n\n This method is used by the authorization code grant indirectly by\n issuing refresh tokens, resource owner password credentials grant\n (also indirectly) and the refresh token grant."}
{"_id": "q_4473", "text": "Ensure the client is authorized access to requested scopes."}
{"_id": "q_4474", "text": "Ensure the username and password is valid.\n\n Attach user object on request for later using."}
{"_id": "q_4475", "text": "Revoke an access or refresh token."}
{"_id": "q_4476", "text": "Since weibo is a rubbish server, it does not follow the standard,\n we need to change the authorization header for it."}
{"_id": "q_4477", "text": "Extract request params."}
{"_id": "q_4478", "text": "Make sure text is bytes type."}
{"_id": "q_4479", "text": "Create response class for Flask."}
{"_id": "q_4480", "text": "Gets the cached clients dictionary in current context."}
{"_id": "q_4481", "text": "Adds remote application and applies custom attributes on it.\n\n If the application instance's name is different from the argument\n provided name, or the keyword arguments is not empty, then the\n application instance will not be modified but be copied as a\n prototype.\n\n :param remote_app: the remote application instance.\n :type remote_app: the subclasses of :class:`BaseApplication`\n :params kwargs: the overriding attributes for the application instance."}
{"_id": "q_4482", "text": "Creates and adds new remote application.\n\n :param name: the remote application's name.\n :param version: '1' or '2', the version code of OAuth protocol.\n :param kwargs: the attributes of remote application."}
{"_id": "q_4483", "text": "Attempt to return a PWM instance for the platform which the code is being\n executed on. Currently supports only the Raspberry Pi using the RPi.GPIO\n library and Beaglebone Black using the Adafruit_BBIO library. Will throw an\n exception if a PWM instance can't be created for the current platform. The\n returned PWM object has the same interface as the RPi_PWM_Adapter and\n BBIO_PWM_Adapter classes."}
{"_id": "q_4484", "text": "Stop PWM output on specified pin."}
{"_id": "q_4485", "text": "Write the specified byte value to the IODIR registor. If no value\n specified the current buffered value will be written."}
{"_id": "q_4486", "text": "Write the specified byte value to the GPPU registor. If no value\n specified the current buffered value will be written."}
{"_id": "q_4487", "text": "Disable the FTDI drivers for the current platform. This is necessary\n because they will conflict with libftdi and accessing the FT232H. Note you\n can enable the FTDI drivers again by calling enable_FTDI_driver."}
{"_id": "q_4488", "text": "Close the FTDI device. Will be automatically called when the program ends."}
{"_id": "q_4489", "text": "Helper function to call write_data on the provided FTDI device and\n verify it succeeds."}
{"_id": "q_4490", "text": "Helper function to call the provided command on the FTDI device and\n verify the response matches the expected value."}
{"_id": "q_4491", "text": "Helper function to continuously poll reads on the FTDI device until an\n expected number of bytes are returned. Will throw a timeout error if no\n data is received within the specified number of timeout seconds. Returns\n the read data as a string if successful, otherwise raises an execption."}
{"_id": "q_4492", "text": "Enable MPSSE mode on the FTDI device."}
{"_id": "q_4493", "text": "Synchronize buffers with MPSSE by sending bad opcode and reading expected\n error response. Should be called once after enabling MPSSE."}
{"_id": "q_4494", "text": "Set the clock speed of the MPSSE engine. Can be any value from 450hz\n to 30mhz and will pick that speed or the closest speed below it."}
{"_id": "q_4495", "text": "Read both GPIO bus states and return a 16 bit value with their state.\n D0-D7 are the lower 8 bits and C0-C7 are the upper 8 bits."}
{"_id": "q_4496", "text": "Return command to update the MPSSE GPIO state to the current direction\n and level."}
{"_id": "q_4497", "text": "Set the input or output mode for a specified pin. Mode should be\n either OUT or IN."}
{"_id": "q_4498", "text": "Half-duplex SPI read. The specified length of bytes will be clocked\n in the MISO line and returned as a bytearray object."}
{"_id": "q_4499", "text": "Write the specified number of bytes to the chip."}
{"_id": "q_4500", "text": "Read a signed byte from the specified register."}
{"_id": "q_4501", "text": "Return an I2C device for the specified address and on the specified bus.\n If busnum isn't specified, the default I2C bus for the platform will attempt\n to be detected."}
{"_id": "q_4502", "text": "Write an 8-bit value to the specified register."}
{"_id": "q_4503", "text": "Read an unsigned byte from the specified register."}
{"_id": "q_4504", "text": "Attempt to return a GPIO instance for the platform which the code is being\n executed on. Currently supports only the Raspberry Pi using the RPi.GPIO\n library and Beaglebone Black using the Adafruit_BBIO library. Will throw an\n exception if a GPIO instance can't be created for the current platform. The\n returned GPIO object is an instance of BaseGPIO."}
{"_id": "q_4505", "text": "Set the input or output mode for a specified pin. Mode should be\n either OUTPUT or INPUT."}
{"_id": "q_4506", "text": "Set the input or output mode for a specified pin. Mode should be\n either DIR_IN or DIR_OUT."}
{"_id": "q_4507", "text": "Remove edge detection for a particular GPIO channel. Pin should be\n type IN."}
{"_id": "q_4508", "text": "Call the method repeatedly such that it will return a PKey object."}
{"_id": "q_4509", "text": "Call the function with an encrypted PEM and a passphrase callback which\n returns the wrong passphrase."}
{"_id": "q_4510", "text": "Call the function with an encrypted PEM and a passphrase callback which\n returns a non-string."}
{"_id": "q_4511", "text": "Create a CRL object with 100 Revoked objects, then call the\n get_revoked method repeatedly."}
{"_id": "q_4512", "text": "Copy an empty Revoked object repeatedly. The copy is not garbage\n collected, therefore it needs to be manually freed."}
{"_id": "q_4513", "text": "Generate a certificate given a certificate request.\n\n Arguments: req - Certificate request to use\n issuerCert - The certificate of the issuer\n issuerKey - The private key of the issuer\n serial - Serial number for the certificate\n notBefore - Timestamp (relative to now) when the certificate\n starts being valid\n notAfter - Timestamp (relative to now) when the certificate\n stops being valid\n digest - Digest method to use for signing, default is sha256\n Returns: The signed certificate in an X509 object"}
{"_id": "q_4514", "text": "Builds a decorator that ensures that functions that rely on OpenSSL\n functions that are not present in this build raise NotImplementedError,\n rather than AttributeError coming out of cryptography.\n\n :param flag: A cryptography flag that guards the functions, e.g.\n ``Cryptography_HAS_NEXTPROTONEG``.\n :param error: The string to be used in the exception if the flag is false."}
{"_id": "q_4515", "text": "Set the passphrase callback. This function will be called\n when a private key with a passphrase is loaded.\n\n :param callback: The Python callback to use. This must accept three\n positional arguments. First, an integer giving the maximum length\n of the passphrase it may return. If the returned passphrase is\n longer than this, it will be truncated. Second, a boolean value\n which will be true if the user should be prompted for the\n passphrase twice and the callback should verify that the two values\n supplied are equal. Third, the value given as the *userdata*\n parameter to :meth:`set_passwd_cb`. The *callback* must return\n a byte string. If an error occurs, *callback* should return a false\n value (e.g. an empty string).\n :param userdata: (optional) A Python object which will be given as\n argument to the callback\n :return: None"}
{"_id": "q_4516", "text": "Load a certificate chain from a file.\n\n :param certfile: The name of the certificate chain file (``bytes`` or\n ``unicode``). Must be PEM encoded.\n\n :return: None"}
{"_id": "q_4517", "text": "Load a certificate from a file\n\n :param certfile: The name of the certificate file (``bytes`` or\n ``unicode``).\n :param filetype: (optional) The encoding of the file, which is either\n :const:`FILETYPE_PEM` or :const:`FILETYPE_ASN1`. The default is\n :const:`FILETYPE_PEM`.\n\n :return: None"}
{"_id": "q_4518", "text": "Load a certificate from a X509 object\n\n :param cert: The X509 object\n :return: None"}
{"_id": "q_4519", "text": "Add certificate to chain\n\n :param certobj: The X509 certificate object to add to the chain\n :return: None"}
{"_id": "q_4520", "text": "Load the trusted certificates that will be sent to the client. Does\n not actually imply any of the certificates are trusted; that must be\n configured separately.\n\n :param bytes cafile: The path to a certificates file in PEM format.\n :return: None"}
{"_id": "q_4521", "text": "Load parameters for Ephemeral Diffie-Hellman\n\n :param dhfile: The file to load EDH parameters from (``bytes`` or\n ``unicode``).\n\n :return: None"}
{"_id": "q_4522", "text": "Set the list of ciphers to be used in this context.\n\n See the OpenSSL manual for more information (e.g.\n :manpage:`ciphers(1)`).\n\n :param bytes cipher_list: An OpenSSL cipher string.\n :return: None"}
{"_id": "q_4523", "text": "Set the list of preferred client certificate signers for this server\n context.\n\n This list of certificate authorities will be sent to the client when\n the server requests a client certificate.\n\n :param certificate_authorities: a sequence of X509Names.\n :return: None\n\n .. versionadded:: 0.10"}
{"_id": "q_4524", "text": "Add the CA certificate to the list of preferred signers for this\n context.\n\n The list of certificate authorities will be sent to the client when the\n server requests a client certificate.\n\n :param certificate_authority: certificate authority's X509 certificate.\n :return: None\n\n .. versionadded:: 0.10"}
{"_id": "q_4525", "text": "Specify the protocols that the client is prepared to speak after the\n TLS connection has been negotiated using Application Layer Protocol\n Negotiation.\n\n :param protos: A list of the protocols to be offered to the server.\n This list should be a Python list of bytestrings representing the\n protocols to offer, e.g. ``[b'http/1.1', b'spdy/2']``."}
{"_id": "q_4526", "text": "Set a callback to provide OCSP data to be stapled to the TLS handshake\n on the server side.\n\n :param callback: The callback function. It will be invoked with two\n arguments: the Connection, and the optional arbitrary data you have\n provided. The callback must return a bytestring that contains the\n OCSP data to staple to the handshake. If no OCSP data is available\n for this connection, return the empty bytestring.\n :param data: Some opaque data that will be passed into the callback\n function when called. This can be used to avoid needing to do\n complex data lookups or to keep track of what context is being\n used. This parameter is optional."}
{"_id": "q_4527", "text": "Set a callback to validate OCSP data stapled to the TLS handshake on\n the client side.\n\n :param callback: The callback function. It will be invoked with three\n arguments: the Connection, a bytestring containing the stapled OCSP\n assertion, and the optional arbitrary data you have provided. The\n callback must return a boolean that indicates the result of\n validating the OCSP data: ``True`` if the OCSP data is valid and\n the certificate can be trusted, or ``False`` if either the OCSP\n data is invalid or the certificate has been revoked.\n :param data: Some opaque data that will be passed into the callback\n function when called. This can be used to avoid needing to do\n complex data lookups or to keep track of what context is being\n used. This parameter is optional."}
{"_id": "q_4528", "text": "Switch this connection to a new session context.\n\n :param context: A :class:`Context` instance giving the new session\n context to use."}
{"_id": "q_4529", "text": "Receive data on the connection and copy it directly into the provided\n buffer, rather than creating a new string.\n\n :param buffer: The buffer to copy into.\n :param nbytes: (optional) The maximum number of bytes to read into the\n buffer. If not present, defaults to the size of the buffer. If\n larger than the size of the buffer, is reduced to the size of the\n buffer.\n :param flags: (optional) The only supported flag is ``MSG_PEEK``,\n all other flags are ignored.\n :return: The number of bytes read into the buffer."}
{"_id": "q_4530", "text": "If the Connection was created with a memory BIO, this method can be\n used to read bytes from the write end of that memory BIO. Many\n Connection methods will add bytes which must be read in this manner or\n the buffer will eventually fill up and the Connection will be able to\n take no further actions.\n\n :param bufsiz: The maximum number of bytes to read\n :return: The string read."}
{"_id": "q_4531", "text": "Renegotiate the session.\n\n :return: True if the renegotiation can be started, False otherwise\n :rtype: bool"}
{"_id": "q_4532", "text": "Send the shutdown message to the Connection.\n\n :return: True if the shutdown completed successfully (i.e. both sides\n have sent closure alerts), False otherwise (in which case you\n call :meth:`recv` or :meth:`send` when the connection becomes\n readable/writeable)."}
{"_id": "q_4533", "text": "Retrieve the list of ciphers used by the Connection object.\n\n :return: A list of native cipher strings."}
{"_id": "q_4534", "text": "Retrieve the random value used with the server hello message.\n\n :return: A string representing the state"}
{"_id": "q_4535", "text": "Retrieve the value of the master key for this session.\n\n :return: A string representing the state"}
{"_id": "q_4536", "text": "Obtain the name of the currently used cipher.\n\n :returns: The name of the currently used cipher or :obj:`None`\n if no connection has been established.\n :rtype: :class:`unicode` or :class:`NoneType`\n\n .. versionadded:: 0.15"}
{"_id": "q_4537", "text": "Obtain the number of secret bits of the currently used cipher.\n\n :returns: The number of secret bits of the currently used cipher\n or :obj:`None` if no connection has been established.\n :rtype: :class:`int` or :class:`NoneType`\n\n .. versionadded:: 0.15"}
{"_id": "q_4538", "text": "Obtain the protocol version of the currently used cipher.\n\n :returns: The protocol name of the currently used cipher\n or :obj:`None` if no connection has been established.\n :rtype: :class:`unicode` or :class:`NoneType`\n\n .. versionadded:: 0.15"}
{"_id": "q_4539", "text": "Retrieve the protocol version of the current connection.\n\n :returns: The TLS version of the current connection, for example\n the value for TLS 1.2 would be ``TLSv1.2``or ``Unknown``\n for connections that were not successfully established.\n :rtype: :class:`unicode`"}
{"_id": "q_4540", "text": "Get the protocol that was negotiated by NPN.\n\n :returns: A bytestring of the protocol name. If no protocol has been\n negotiated yet, returns an empty string.\n\n .. versionadded:: 0.15"}
{"_id": "q_4541", "text": "Get the protocol that was negotiated by ALPN.\n\n :returns: A bytestring of the protocol name. If no protocol has been\n negotiated yet, returns an empty string."}
{"_id": "q_4542", "text": "Allocate a new OpenSSL memory BIO.\n\n Arrange for the garbage collector to clean it up automatically.\n\n :param buffer: None or some bytes to use to put into the BIO so that they\n can be read out."}
{"_id": "q_4543", "text": "Copy the contents of an OpenSSL BIO object into a Python byte string."}
{"_id": "q_4544", "text": "Retrieve the time value of an ASN1 time object.\n\n @param timestamp: An ASN1_GENERALIZEDTIME* (or an object safely castable to\n that type) from which the time value will be retrieved.\n\n @return: The time value from C{timestamp} as a L{bytes} string in a certain\n format. Or C{None} if the object contains no time value."}
{"_id": "q_4545", "text": "Return a single curve object selected by name.\n\n See :py:func:`get_elliptic_curves` for information about curve objects.\n\n :param name: The OpenSSL short name identifying the curve object to\n retrieve.\n :type name: :py:class:`unicode`\n\n If the named curve is not supported then :py:class:`ValueError` is raised."}
{"_id": "q_4546", "text": "Dump a public key to a buffer.\n\n :param type: The file type (one of :data:`FILETYPE_PEM` or\n :data:`FILETYPE_ASN1`).\n :param PKey pkey: The public key to dump\n :return: The buffer with the dumped key in it.\n :rtype: bytes"}
{"_id": "q_4547", "text": "Verify the signature for a data string.\n\n :param cert: signing certificate (X509 object) corresponding to the\n private key which generated the signature.\n :param signature: signature returned by sign function\n :param data: data to be verified\n :param digest: message digest to use\n :return: ``None`` if the signature is correct, raise exception otherwise.\n\n .. versionadded:: 0.11"}
{"_id": "q_4548", "text": "Export as a ``cryptography`` key.\n\n :rtype: One of ``cryptography``'s `key interfaces`_.\n\n .. _key interfaces: https://cryptography.io/en/latest/hazmat/\\\n primitives/asymmetric/rsa/#key-interfaces\n\n .. versionadded:: 16.1.0"}
{"_id": "q_4549", "text": "Generate a key pair of the given type, with the given number of bits.\n\n This generates a key \"into\" the this object.\n\n :param type: The key type.\n :type type: :py:data:`TYPE_RSA` or :py:data:`TYPE_DSA`\n :param bits: The number of bits.\n :type bits: :py:data:`int` ``>= 0``\n :raises TypeError: If :py:data:`type` or :py:data:`bits` isn't\n of the appropriate type.\n :raises ValueError: If the number of bits isn't an integer of\n the appropriate size.\n :return: ``None``"}
{"_id": "q_4550", "text": "Check the consistency of an RSA private key.\n\n This is the Python equivalent of OpenSSL's ``RSA_check_key``.\n\n :return: ``True`` if key is consistent.\n\n :raise OpenSSL.crypto.Error: if the key is inconsistent.\n\n :raise TypeError: if the key is of a type which cannot be checked.\n Only RSA keys can currently be checked."}
{"_id": "q_4551", "text": "Get the curves supported by OpenSSL.\n\n :param lib: The OpenSSL library binding object.\n\n :return: A :py:type:`set` of ``cls`` instances giving the names of the\n elliptic curves the underlying library supports."}
{"_id": "q_4552", "text": "Create a new OpenSSL EC_KEY structure initialized to use this curve.\n\n The structure is automatically garbage collected when the Python object\n is garbage collected."}
{"_id": "q_4553", "text": "Return the DER encoding of this name.\n\n :return: The DER encoded form of this name.\n :rtype: :py:class:`bytes`"}
{"_id": "q_4554", "text": "Returns the short type name of this X.509 extension.\n\n The result is a byte string such as :py:const:`b\"basicConstraints\"`.\n\n :return: The short type name.\n :rtype: :py:data:`bytes`\n\n .. versionadded:: 0.12"}
{"_id": "q_4555", "text": "Returns the data of the X509 extension, encoded as ASN.1.\n\n :return: The ASN.1 encoded data of this X509 extension.\n :rtype: :py:data:`bytes`\n\n .. versionadded:: 0.12"}
{"_id": "q_4556", "text": "Set the public key of the certificate signing request.\n\n :param pkey: The public key to use.\n :type pkey: :py:class:`PKey`\n\n :return: ``None``"}
{"_id": "q_4557", "text": "Get the public key of the certificate signing request.\n\n :return: The public key.\n :rtype: :py:class:`PKey`"}
{"_id": "q_4558", "text": "Return the subject of this certificate signing request.\n\n This creates a new :class:`X509Name` that wraps the underlying subject\n name field on the certificate signing request. Modifying it will modify\n the underlying signing request, and will have the effect of modifying\n any other :class:`X509Name` that refers to this subject.\n\n :return: The subject of this certificate signing request.\n :rtype: :class:`X509Name`"}
{"_id": "q_4559", "text": "Add extensions to the certificate signing request.\n\n :param extensions: The X.509 extensions to add.\n :type extensions: iterable of :py:class:`X509Extension`\n :return: ``None``"}
{"_id": "q_4560", "text": "Get X.509 extensions in the certificate signing request.\n\n :return: The X.509 extensions in this request.\n :rtype: :py:class:`list` of :py:class:`X509Extension` objects.\n\n .. versionadded:: 0.15"}
{"_id": "q_4561", "text": "Export as a ``cryptography`` certificate.\n\n :rtype: ``cryptography.x509.Certificate``\n\n .. versionadded:: 17.1.0"}
{"_id": "q_4562", "text": "Set the version number of the certificate. Note that the\n version value is zero-based, eg. a value of 0 is V1.\n\n :param version: The version number of the certificate.\n :type version: :py:class:`int`\n\n :return: ``None``"}
{"_id": "q_4563", "text": "Get the public key of the certificate.\n\n :return: The public key.\n :rtype: :py:class:`PKey`"}
{"_id": "q_4564", "text": "Set the public key of the certificate.\n\n :param pkey: The public key.\n :type pkey: :py:class:`PKey`\n\n :return: :py:data:`None`"}
{"_id": "q_4565", "text": "Return the digest of the X509 object.\n\n :param digest_name: The name of the digest algorithm to use.\n :type digest_name: :py:class:`bytes`\n\n :return: The digest of the object, formatted as\n :py:const:`b\":\"`-delimited hex pairs.\n :rtype: :py:class:`bytes`"}
{"_id": "q_4566", "text": "Set the serial number of the certificate.\n\n :param serial: The new serial number.\n :type serial: :py:class:`int`\n\n :return: :py:data`None`"}
{"_id": "q_4567", "text": "Return the serial number of this certificate.\n\n :return: The serial number.\n :rtype: int"}
{"_id": "q_4568", "text": "Adjust the time stamp on which the certificate stops being valid.\n\n :param int amount: The number of seconds by which to adjust the\n timestamp.\n :return: ``None``"}
{"_id": "q_4569", "text": "Return the issuer of this certificate.\n\n This creates a new :class:`X509Name` that wraps the underlying issuer\n name field on the certificate. Modifying it will modify the underlying\n certificate, and will have the effect of modifying any other\n :class:`X509Name` that refers to this issuer.\n\n :return: The issuer of this certificate.\n :rtype: :class:`X509Name`"}
{"_id": "q_4570", "text": "Return the subject of this certificate.\n\n This creates a new :class:`X509Name` that wraps the underlying subject\n name field on the certificate. Modifying it will modify the underlying\n certificate, and will have the effect of modifying any other\n :class:`X509Name` that refers to this subject.\n\n :return: The subject of this certificate.\n :rtype: :class:`X509Name`"}
{"_id": "q_4571", "text": "Get a specific extension of the certificate by index.\n\n Extensions on a certificate are kept in order. The index\n parameter selects which extension will be returned.\n\n :param int index: The index of the extension to retrieve.\n :return: The extension at the specified index.\n :rtype: :py:class:`X509Extension`\n :raises IndexError: If the extension index was out of bounds.\n\n .. versionadded:: 0.12"}
{"_id": "q_4572", "text": "Add a certificate revocation list to this store.\n\n The certificate revocation lists added to a store will only be used if\n the associated flags are configured to check certificate revocation\n lists.\n\n .. versionadded:: 16.1.0\n\n :param CRL crl: The certificate revocation list to add to this store.\n :return: ``None`` if the certificate revocation list was added\n successfully."}
{"_id": "q_4573", "text": "Set up the store context for a subsequent verification operation.\n\n Calling this method more than once without first calling\n :meth:`_cleanup` will leak memory."}
{"_id": "q_4574", "text": "Set the serial number.\n\n The serial number is formatted as a hexadecimal number encoded in\n ASCII.\n\n :param bytes hex_str: The new serial number.\n\n :return: ``None``"}
{"_id": "q_4575", "text": "Get the serial number.\n\n The serial number is formatted as a hexadecimal number encoded in\n ASCII.\n\n :return: The serial number.\n :rtype: bytes"}
{"_id": "q_4576", "text": "Set the reason of this revocation.\n\n If :data:`reason` is ``None``, delete the reason instead.\n\n :param reason: The reason string.\n :type reason: :class:`bytes` or :class:`NoneType`\n\n :return: ``None``\n\n .. seealso::\n\n :meth:`all_reasons`, which gives you a list of all supported\n reasons which you might pass to this method."}
{"_id": "q_4577", "text": "Export as a ``cryptography`` CRL.\n\n :rtype: ``cryptography.x509.CertificateRevocationList``\n\n .. versionadded:: 17.1.0"}
{"_id": "q_4578", "text": "Get the CRL's issuer.\n\n .. versionadded:: 16.1.0\n\n :rtype: X509Name"}
{"_id": "q_4579", "text": "Sign the CRL.\n\n Signing a CRL enables clients to associate the CRL itself with an\n issuer. Before a CRL is meaningful to other OpenSSL functions, it must\n be signed by an issuer.\n\n This method implicitly sets the issuer's name based on the issuer\n certificate and private key used to sign the CRL.\n\n .. versionadded:: 16.1.0\n\n :param X509 issuer_cert: The issuer's certificate.\n :param PKey issuer_key: The issuer's private key.\n :param bytes digest: The digest method to sign the CRL with."}
{"_id": "q_4580", "text": "Returns the type name of the PKCS7 structure\n\n :return: A string with the typename"}
{"_id": "q_4581", "text": "Replace or set the CA certificates within the PKCS12 object.\n\n :param cacerts: The new CA certificates, or :py:const:`None` to unset\n them.\n :type cacerts: An iterable of :py:class:`X509` or :py:const:`None`\n\n :return: ``None``"}
{"_id": "q_4582", "text": "Sign the certificate request with this key and digest type.\n\n :param pkey: The private key to sign with.\n :type pkey: :py:class:`PKey`\n\n :param digest: The message digest to use.\n :type digest: :py:class:`bytes`\n\n :return: ``None``"}
{"_id": "q_4583", "text": "Generate a base64 encoded representation of this SPKI object.\n\n :return: The base64 encoded string.\n :rtype: :py:class:`bytes`"}
{"_id": "q_4584", "text": "Get the public key of this certificate.\n\n :return: The public key.\n :rtype: :py:class:`PKey`"}
{"_id": "q_4585", "text": "Set the public key of the certificate\n\n :param pkey: The public key\n :return: ``None``"}
{"_id": "q_4586", "text": "If ``obj`` is text, emit a warning that it should be bytes instead and try\n to convert it to bytes automatically.\n\n :param str label: The name of the parameter from which ``obj`` was taken\n (so a developer can easily find the source of the problem and correct\n it).\n\n :return: If ``obj`` is the text string type, a ``bytes`` object giving the\n UTF-8 encoding of that text is returned. Otherwise, ``obj`` itself is\n returned."}
{"_id": "q_4587", "text": "Returns a generator of \"Path\"s"}
{"_id": "q_4588", "text": "Internal helper to provide color names."}
{"_id": "q_4589", "text": "Serializes str, Attrib, or PathAttrib objects.\n\n Example::\n\n <attribute>foobar</attribute>"}
{"_id": "q_4590", "text": "Serializes a list, where the values are objects of type\n str, Attrib, or PathAttrib.\n\n Example::\n\n <value>text</value>\n <value><attribute>foobar</attribute></value>\n <value><path>foobar</path></value>"}
{"_id": "q_4591", "text": "Parse the event definition node, and return an instance of Event"}
{"_id": "q_4592", "text": "Parse the messageEventDefinition node and return an instance of\n MessageEventDefinition"}
{"_id": "q_4593", "text": "Parse the timerEventDefinition node and return an instance of\n TimerEventDefinition\n\n This currently only supports the timeDate node for specifying an expiry\n time for the timer."}
{"_id": "q_4594", "text": "Called by the weak reference when its target dies.\n In other words, we can assert that self.weak_subscribers is not\n None at this time."}
{"_id": "q_4595", "text": "Connects a taskspec that is executed if the condition DOES match.\n\n condition -- a condition (Condition)\n taskspec -- the conditional task spec"}
{"_id": "q_4596", "text": "Runs the task. Should not be called directly.\n Returns True if completed, False otherwise."}
{"_id": "q_4597", "text": "Returns True if the entire Workflow is completed, False otherwise.\n\n :rtype: bool\n :return: Whether the workflow is completed."}
{"_id": "q_4598", "text": "Cancels all open tasks in the workflow.\n\n :type success: bool\n :param success: Whether the Workflow should be marked as successfully\n completed."}
{"_id": "q_4599", "text": "Returns the task with the given id.\n\n :type id:integer\n :param id: The id of a task.\n :rtype: Task\n :returns: The task with the given id."}
{"_id": "q_4600", "text": "Returns all tasks whose spec has the given name.\n\n :type name: str\n :param name: The name of a task spec.\n :rtype: Task\n :return: The task that relates to the spec with the given name."}
{"_id": "q_4601", "text": "Runs the next task.\n Returns True if completed, False otherwise.\n\n :type pick_up: bool\n :param pick_up: When True, this method attempts to choose the next\n task not by searching beginning at the root, but by\n searching from the position at which the last call\n of complete_next() left off.\n :type halt_on_manual: bool\n :param halt_on_manual: When True, this method will not attempt to\n complete any tasks that have manual=True.\n See :meth:`SpiffWorkflow.specs.TaskSpec.__init__`\n :rtype: bool\n :returns: True if all tasks were completed, False otherwise."}
{"_id": "q_4602", "text": "Create a new workflow instance from the given spec and arguments.\n\n :param workflow_spec: the workflow spec to use\n\n :param read_only: this should be in read only mode\n\n :param kwargs: Any extra kwargs passed to the deserialize_workflow\n method will be passed through here"}
{"_id": "q_4603", "text": "Adds a new child and assigns the given TaskSpec to it.\n\n :type task_spec: TaskSpec\n :param task_spec: The task spec that is assigned to the new child.\n :type state: integer\n :param state: The bitmask of states for the new child.\n :rtype: Task\n :returns: The new child task."}
{"_id": "q_4604", "text": "Assigns a new thread id to the task.\n\n :type recursive: bool\n :param recursive: Whether to assign the id to children recursively.\n :rtype: bool\n :returns: The new thread id."}
{"_id": "q_4605", "text": "Returns the ancestor that has a task with the given task spec\n as a parent.\n If no such ancestor was found, the root task is returned.\n\n :type parent_task_spec: TaskSpec\n :param parent_task_spec: The wanted ancestor.\n :rtype: Task\n :returns: The child of the given ancestor."}
{"_id": "q_4606", "text": "Returns the ancestor that has the given task spec assigned.\n If no such ancestor was found, the root task is returned.\n\n :type task_spec: TaskSpec\n :param task_spec: The wanted task spec.\n :rtype: Task\n :returns: The ancestor."}
{"_id": "q_4607", "text": "Returns the ancestor that has a task with the given name assigned.\n Returns None if no such ancestor was found.\n\n :type name: str\n :param name: The name of the wanted task.\n :rtype: Task\n :returns: The ancestor."}
{"_id": "q_4608", "text": "Returns a textual representation of this Task's state."}
{"_id": "q_4609", "text": "Returns the subtree as a string for debugging.\n\n :rtype: str\n :returns: The debug information."}
{"_id": "q_4610", "text": "Parses args and evaluates any Attrib entries"}
{"_id": "q_4611", "text": "Parses kwargs and evaluates any Attrib entries"}
{"_id": "q_4612", "text": "Sends Celery asynchronous call and stores async call information for\n retrieval laster"}
{"_id": "q_4613", "text": "Abort celery task and retry it"}
{"_id": "q_4614", "text": "Clear celery task data"}
{"_id": "q_4615", "text": "Updates the branch such that all possible future routes are added.\n\n Should NOT be overwritten! Instead, overwrite _predict_hook().\n\n :type my_task: Task\n :param my_task: The associated task in the task tree.\n :type seen: list[taskspec]\n :param seen: A list of already visited tasks.\n :type looked_ahead: integer\n :param looked_ahead: The depth of the predicted path so far."}
{"_id": "q_4616", "text": "Return True on success, False otherwise.\n\n :type my_task: Task\n :param my_task: The associated task in the task tree."}
{"_id": "q_4617", "text": "Creates the package, writing the data out to the provided file-like\n object."}
{"_id": "q_4618", "text": "Writes a local file in to the zip file and adds it to the manifest\n dictionary\n\n :param filename: The zip file name\n\n :param src_filename: the local file name"}
{"_id": "q_4619", "text": "Adds the SVG files to the archive for this BPMN file."}
{"_id": "q_4620", "text": "Utility method to merge an option and config, with the option taking \"\n precedence"}
{"_id": "q_4621", "text": "Parses the specified child task node, and returns the task spec. This\n can be called by a TaskParser instance, that is owned by this\n ProcessParser."}
{"_id": "q_4622", "text": "Reads the \"pre-assign\" or \"post-assign\" tag from the given node.\n\n start_node -- the xml node (xml.dom.minidom.Node)"}
{"_id": "q_4623", "text": "Reads the conditional statement from the given node.\n\n workflow -- the workflow with which the concurrence is associated\n start_node -- the xml structure (xml.dom.minidom.Node)"}
{"_id": "q_4624", "text": "Reads the workflow from the given XML structure and returns a\n WorkflowSpec instance."}
{"_id": "q_4625", "text": "Called by a task spec when it was added into the workflow."}
{"_id": "q_4626", "text": "Checks integrity of workflow and reports any problems with it.\n\n Detects:\n - loops (tasks that wait on each other in a loop)\n :returns: empty list if valid, a list of errors if not"}
{"_id": "q_4627", "text": "Indicate to the workflow that a message has been received. The message\n will be processed by any waiting Intermediate or Boundary Message\n Events, that are waiting for the message."}
{"_id": "q_4628", "text": "Deserializes the trigger using the provided serializer."}
{"_id": "q_4629", "text": "Evaluate the given expression, within the context of the given task and\n return the result."}
{"_id": "q_4630", "text": "Checks whether the preconditions for going to READY state are met.\n Returns True if the threshold was reached, False otherwise.\n Also returns the list of tasks that yet need to be completed."}
{"_id": "q_4631", "text": "Connects the task spec that is executed if no other condition\n matches.\n\n :type task_spec: TaskSpec\n :param task_spec: The following task spec."}
{"_id": "q_4632", "text": "Return extra config options to be passed to the TrelloIssue class"}
{"_id": "q_4633", "text": "A wrapper around get_comments that build the taskwarrior\n annotations."}
{"_id": "q_4634", "text": "Get the list of boards to pull cards from. If the user gave a value to\n trello.include_boards use that, otherwise ask the Trello API for the\n user's boards."}
{"_id": "q_4635", "text": "Build the full url to the API endpoint"}
{"_id": "q_4636", "text": "Grab all issues matching a github query"}
{"_id": "q_4637", "text": "Grab all the pull requests"}
{"_id": "q_4638", "text": "Return a main config value, or default if it does not exist."}
{"_id": "q_4639", "text": "Validate generic options for a particular target"}
{"_id": "q_4640", "text": "Return true if the issue in question should be included"}
{"_id": "q_4641", "text": "Make a RST-compatible table\n\n From http://stackoverflow.com/a/12539081"}
{"_id": "q_4642", "text": "Retrieve password from the given command"}
{"_id": "q_4643", "text": "Accepts both integers and empty values."}
{"_id": "q_4644", "text": "Pull down tasks from forges and add them to your taskwarrior tasks.\n\n Relies on configuration in bugwarriorrc"}
{"_id": "q_4645", "text": "Pages through an object collection from the bitbucket API.\n Returns an iterator that lazily goes through all the 'values'\n of all the pages in the collection."}
{"_id": "q_4646", "text": "Returns a list of UDAs defined by given targets\n\n For all targets in `targets`, build a dictionary of configuration overrides\n representing the UDAs defined by the passed-in services (`targets`).\n\n Given a hypothetical situation in which you have two services, the first\n of which defining a UDA named 'serviceAid' (\"Service A ID\", string) and\n a second service defining two UDAs named 'serviceBproject'\n (\"Service B Project\", string) and 'serviceBnumber'\n (\"Service B Number\", numeric), this would return the following structure::\n\n {\n 'uda': {\n 'serviceAid': {\n 'label': 'Service A ID',\n 'type': 'string',\n },\n 'serviceBproject': {\n 'label': 'Service B Project',\n 'type': 'string',\n },\n 'serviceBnumber': {\n 'label': 'Service B Number',\n 'type': 'numeric',\n }\n }\n }"}
{"_id": "q_4647", "text": "Parse the big ugly sprint string stored by JIRA.\n\n They look like:\n com.atlassian.greenhopper.service.sprint.Sprint@4c9c41a5[id=2322,rapid\n ViewId=1173,state=ACTIVE,name=Sprint 1,startDate=2016-09-06T16:08:07.4\n 55Z,endDate=2016-09-23T16:08:00.000Z,completeDate=<null>,sequence=2322]"}
{"_id": "q_4648", "text": "Initialize a new state file with the given contents.\n This function fails in case the state file already exists."}
{"_id": "q_4649", "text": "Update the current state file with the specified contents"}
{"_id": "q_4650", "text": "Try to load a blockade state file in the current directory"}
{"_id": "q_4651", "text": "Generate a new blockade ID based on the CWD"}
{"_id": "q_4652", "text": "Make sure the state directory exists"}
{"_id": "q_4653", "text": "Try to delete the state.yml file and the folder .blockade"}
{"_id": "q_4654", "text": "Convert blockade ID and container information into\n a state dictionary object."}
{"_id": "q_4655", "text": "Write the given state information into a file"}
{"_id": "q_4656", "text": "Validate the partitions of containers. If there are any containers\n not in any partition, place them in an new partition."}
{"_id": "q_4657", "text": "Get a map of blockade chains IDs -> list of IPs targeted at them\n\n For figuring out which container is in which partition"}
{"_id": "q_4658", "text": "Start the timer waiting for pain"}
{"_id": "q_4659", "text": "Start the blockade event"}
{"_id": "q_4660", "text": "Stop chaos when there is no current blockade operation"}
{"_id": "q_4661", "text": "Stop chaos while there is a blockade event in progress"}
{"_id": "q_4662", "text": "Delete all state associated with the chaos session"}
{"_id": "q_4663", "text": "Sort a dictionary or list of containers into dependency order\n\n Returns a sequence"}
{"_id": "q_4664", "text": "Convert a dictionary of configuration values\n into a sequence of BlockadeContainerConfig instances"}
{"_id": "q_4665", "text": "Start the containers and link them together"}
{"_id": "q_4666", "text": "Destroy all containers and restore networks"}
{"_id": "q_4667", "text": "Kill some or all containers"}
{"_id": "q_4668", "text": "Fetch the logs of a container"}
{"_id": "q_4669", "text": "Start the Blockade REST API"}
{"_id": "q_4670", "text": "Add one or more existing Docker containers to a Blockade group"}
{"_id": "q_4671", "text": "Get the event log for a given blockade"}
{"_id": "q_4672", "text": "Efficient way to compute highly repetitive scoring\n i.e. sequences are involved multiple time\n\n Args:\n sequences(list[str]): list of sequences (either hyp or ref)\n scores_ids(list[tuple(int)]): list of pairs (hyp_id, ref_id)\n ie. scores[i] = rouge_n(scores_ids[i][0],\n scores_ids[i][1])\n\n Returns:\n scores: list of length `len(scores_ids)` containing rouge `n`\n scores as a dict with 'f', 'r', 'p'\n Raises:\n KeyError: if there's a value of i in scores_ids that is not in\n [0, len(sequences)["}
{"_id": "q_4673", "text": "Performs the actual evaluation of Flas-CORS options and actually\n modifies the response object.\n\n This function is used both in the decorator and the after_request\n callback"}
{"_id": "q_4674", "text": "Safely attempts to match a pattern or string to a request origin."}
{"_id": "q_4675", "text": "Compute CORS options for an application by combining the DEFAULT_OPTIONS,\n the app's configuration-specified options and any dictionaries passed. The\n last specified option wins."}
{"_id": "q_4676", "text": "A helper method to serialize and processes the options dictionary."}
{"_id": "q_4677", "text": "This function is the decorator which is used to wrap a Flask route with.\n In the simplest case, simply use the default parameters to allow all\n origins in what is the most permissive configuration. If this method\n modifies state or performs authentication which may be brute-forced, you\n should add some degree of protection, such as Cross Site Forgery\n Request protection.\n\n :param origins:\n The origin, or list of origins to allow requests from.\n The origin(s) may be regular expressions, case-sensitive strings,\n or else an asterisk\n\n Default : '*'\n :type origins: list, string or regex\n\n :param methods:\n The method or list of methods which the allowed origins are allowed to\n access for non-simple requests.\n\n Default : [GET, HEAD, POST, OPTIONS, PUT, PATCH, DELETE]\n :type methods: list or string\n\n :param expose_headers:\n The header or list which are safe to expose to the API of a CORS API\n specification.\n\n Default : None\n :type expose_headers: list or string\n\n :param allow_headers:\n The header or list of header field names which can be used when this\n resource is accessed by allowed origins. The header(s) may be regular\n expressions, case-sensitive strings, or else an asterisk.\n\n Default : '*', allow all headers\n :type allow_headers: list, string or regex\n\n :param supports_credentials:\n Allows users to make authenticated requests. If true, injects the\n `Access-Control-Allow-Credentials` header in responses. This allows\n cookies and credentials to be submitted across domains.\n\n :note: This option cannot be used in conjuction with a '*' origin\n\n Default : False\n :type supports_credentials: bool\n\n :param max_age:\n The maximum time for which this CORS request maybe cached. 
This value\n is set as the `Access-Control-Max-Age` header.\n\n Default : None\n :type max_age: timedelta, integer, string or None\n\n :param send_wildcard: If True, and the origins parameter is `*`, a wildcard\n `Access-Control-Allow-Origin` header is sent, rather than the\n request's `Origin` header.\n\n Default : False\n :type send_wildcard: bool\n\n :param vary_header:\n If True, the header Vary: Origin will be returned as per the W3\n implementation guidelines.\n\n Setting this header when the `Access-Control-Allow-Origin` is\n dynamically generated (e.g. when there is more than one allowed\n origin, and an Origin than '*' is returned) informs CDNs and other\n caches that the CORS headers are dynamic, and cannot be cached.\n\n If False, the Vary header will never be injected or altered.\n\n Default : True\n :type vary_header: bool\n\n :param automatic_options:\n Only applies to the `cross_origin` decorator. If True, Flask-CORS will\n override Flask's default OPTIONS handling to return CORS headers for\n OPTIONS requests.\n\n Default : True\n :type automatic_options: bool"}
{"_id": "q_4678", "text": "This call returns an array of mutual fund symbols that IEX Cloud supports for API calls.\n\n https://iexcloud.io/docs/api/#mutual-fund-symbols\n 8am, 9am, 12pm, 1pm UTC daily\n\n Args:\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"}
{"_id": "q_4679", "text": "This call returns an array of OTC symbols that IEX Cloud supports for API calls.\n\n https://iexcloud.io/docs/api/#otc-symbols\n 8am, 9am, 12pm, 1pm UTC daily\n\n Args:\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"}
{"_id": "q_4680", "text": "for backwards compat, accepting token and version but ignoring"}
{"_id": "q_4681", "text": "for iex cloud"}
{"_id": "q_4682", "text": "News about market\n\n https://iexcloud.io/docs/api/#news\n Continuous\n\n Args:\n count (int): limit number of results\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"}
{"_id": "q_4683", "text": "Returns the official open and close for whole market.\n\n https://iexcloud.io/docs/api/#news\n 9:30am-5pm ET Mon-Fri\n\n Args:\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"}
{"_id": "q_4684", "text": "This returns previous day adjusted price data for whole market\n\n https://iexcloud.io/docs/api/#previous-day-prices\n Available after 4am ET Tue-Sat\n\n Args:\n symbol (string); Ticker to request\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"}
{"_id": "q_4685", "text": "Stock split history\n\n https://iexcloud.io/docs/api/#splits\n Updated at 9am UTC every day\n\n Args:\n symbol (string); Ticker to request\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"}
{"_id": "q_4686", "text": "This will return an array of quotes for all Cryptocurrencies supported by the IEX API. Each element is a standard quote object with four additional keys.\n\n https://iexcloud.io/docs/api/#crypto\n\n Args:\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"}
{"_id": "q_4687", "text": "Benjamini-Hochberg FDR correction. Inspired by statsmodels."}
{"_id": "q_4688", "text": "Standardize the mean and variance of the data along the given axis.\n\n :param data2d: DataFrame to normalize.\n :param axis: int, Which axis to normalize across. If 0, normalize across rows,\n if 1, normalize across columns. If None, don't change data\n \n :Returns: Normalized DataFrame. Normalized data with a mean of 0 and variance of 1\n across the specified axis."}
{"_id": "q_4689", "text": "Prepare argparser object. New options will be added in this function first."}
{"_id": "q_4690", "text": "Add function 'prerank' argument parsers."}
{"_id": "q_4691", "text": "Add function 'plot' argument parsers."}
{"_id": "q_4692", "text": "This is the most important function of GSEApy. It has the same algorithm with GSEA and ssGSEA.\n\n :param gene_list: The ordered gene list gene_name_list, rank_metric.index.values\n :param gene_set: gene_sets in gmt file, please use gsea_gmt_parser to get gene_set.\n :param weighted_score_type: It's the same with gsea's weighted_score method. Weighting by the correlation\n is a very reasonable choice that allows significant gene sets with less than perfect coherence.\n options: 0(classic),1,1.5,2. default:1. if one is interested in penalizing sets for lack of\n coherence or to discover sets with any type of nonrandom distribution of tags, a value p < 1\n might be appropriate. On the other hand, if one uses sets with large number of genes and only\n a small subset of those is expected to be coherent, then one could consider using p > 1.\n Our recommendation is to use p = 1 and use other settings only if you are very experienced\n with the method and its behavior.\n\n :param correl_vector: A vector with the correlations (e.g. signal to noise scores) corresponding to the genes in\n the gene list. Or rankings, rank_metric.values\n :param nperm: Only use this parameter when computing esnull for statistical testing. Set the esnull value\n equal to the permutation number.\n :param rs: Random state for initializing gene list shuffling. Default: np.random.RandomState(seed=None)\n\n :return:\n\n ES: Enrichment score (real number between -1 and +1)\n\n ESNULL: Enrichment score calculated from random permutations.\n\n Hits_Indices: Index of a gene in gene_list, if gene is included in gene_set.\n\n RES: Numerical vector containing the running enrichment score for all locations in the gene list ."}
{"_id": "q_4693", "text": "Build shuffled ranking matrix when permutation_type equals 'phenotype'.\n\n :param exprs: gene_expression DataFrame, gene_name indexed.\n :param str method: calculate correlation or ranking. methods including:\n 1. 'signal_to_noise'.\n 2. 't_test'.\n 3. 'ratio_of_classes' (also referred to as fold change).\n 4. 'diff_of_classes'.\n 5. 'log2_ratio_of_classes'.\n :param int permutation_num: how many times the class labels are shuffled\n :param str pos: one of labels of phenotype's names.\n :param str neg: one of labels of phenotype's names.\n :param list classes: a list of phenotype labels, to specify which column of\n dataframe belongs to what class of phenotype.\n :param bool ascending: bool. Sort ascending vs. descending.\n\n :return:\n returns two 2d ndarray with shape (nperm, gene_num).\n\n | cor_mat_indices: the indices of sorted and permutated (exclude last row) ranking matrix.\n | cor_mat: sorted and permutated (exclude last row) ranking matrix."}
{"_id": "q_4694", "text": "Compute nominal pvals, normalized ES, and FDR q value.\n\n For a given NES(S) = NES* >= 0. The FDR is the ratio of the percentage of all (S,pi) with\n NES(S,pi) >= 0, whose NES(S,pi) >= NES*, divided by the percentage of\n observed S with NES(S) >= 0, whose NES(S) >= NES*, and similarly if NES(S) = NES* <= 0."}
{"_id": "q_4695", "text": "Get available marts and their names."}
{"_id": "q_4696", "text": "mapping ids using BioMart. \n\n :param dataset: str, default: 'hsapiens_gene_ensembl'\n :param attributes: str, list, tuple\n :param filters: dict, {'filter name': list(filter value)}\n :param host: www.ensembl.org, asia.ensembl.org, useast.ensembl.org\n :return: a dataframe containing all attributes you selected.\n\n **Note**: it will take a couple of minutes to get the results.\n An XML template for querying biomart. (see https://gist.github.com/keithshep/7776579)\n \n exampleTaxonomy = \"mmusculus_gene_ensembl\"\n exampleGene = \"ENSMUSG00000086981,ENSMUSG00000086982,ENSMUSG00000086983\"\n urlTemplate = \\\n '''http://ensembl.org/biomart/martservice?query=''' \\\n '''<?xml version=\"1.0\" encoding=\"UTF-8\"?>''' \\\n '''<!DOCTYPE Query>''' \\\n '''<Query virtualSchemaName=\"default\" formatter=\"CSV\" header=\"0\" uniqueRows=\"0\" count=\"\" datasetConfigVersion=\"0.6\">''' \\\n '''<Dataset name=\"%s\" interface=\"default\"><Filter name=\"ensembl_gene_id\" value=\"%s\"/>''' \\\n '''<Attribute name=\"ensembl_gene_id\"/><Attribute name=\"ensembl_transcript_id\"/>''' \\\n '''<Attribute name=\"transcript_start\"/><Attribute name=\"transcript_end\"/>''' \\\n '''<Attribute name=\"exon_chrom_start\"/><Attribute name=\"exon_chrom_end\"/>''' \\\n '''</Dataset>''' \\\n '''</Query>''' \n \n exampleURL = urlTemplate % (exampleTaxonomy, exampleGene)\n req = requests.get(exampleURL, stream=True)"}
{"_id": "q_4697", "text": "Run Gene Set Enrichment Analysis with single sample GSEA tool\n\n :param data: Expression table, pd.Series, pd.DataFrame, GCT file, or .rnk file format.\n :param gene_sets: Enrichr Library name or .gmt gene sets file or dict of gene sets. Same input with GSEA.\n :param outdir: Results output directory.\n :param str sample_norm_method: \"Sample normalization method. Choose from {'rank', 'log', 'log_rank'}. Default: rank.\n\n 1. 'rank': Rank your expression data, and transform by 10000*rank_dat/gene_numbers\n 2. 'log' : Do not rank, but transform data by log(data + exp(1)), while data = data[data<1] =1.\n 3. 'log_rank': Rank your expression data, and transform by log(10000*rank_dat/gene_numbers+ exp(1))\n 4. 'custom': Do nothing, and use your own rank value to calculate enrichment score.\n \n see here: https://github.com/GSEA-MSigDB/ssGSEAProjection-gpmodule/blob/master/src/ssGSEAProjection.Library.R, line 86\n\n :param int min_size: Minimum allowed number of genes from gene set also the data set. Default: 15.\n :param int max_size: Maximum allowed number of genes from gene set also the data set. Default: 2000.\n :param int permutation_num: Number of permutations for significance computation. Default: 0.\n :param str weighted_score_type: Refer to :func:`algorithm.enrichment_score`. Default:0.25.\n :param bool scale: If True, normalize the scores by number of genes in the gene sets.\n :param bool ascending: Sorting order of rankings. Default: False.\n :param int processes: Number of Processes you are going to use. Default: 1.\n :param list figsize: Matplotlib figsize, accept a tuple or list, e.g. [width,height]. Default: [7,6].\n :param str format: Matplotlib figure format. Default: 'pdf'.\n :param int graph_num: Plot graphs for top sets of each phenotype.\n :param bool no_plot: If equals to True, no figure will be drawn. Default: False.\n :param seed: Random seed. expect an integer. Default:None.\n :param bool verbose: Bool, increase output verbosity, print out progress of your job, Default: False.\n\n :return: Return a ssGSEA obj. \n All results store to a dictionary, access enrichment score by obj.resultsOnSamples,\n and normalized enrichment score by obj.res2d.\n if permutation_num > 0, additional results contain::\n\n | {es: enrichment score,\n | nes: normalized enrichment score,\n | p: P-value,\n | fdr: FDR,\n | size: gene set size,\n | matched_size: genes matched to the data,\n | genes: gene names from the data set\n | ledge_genes: leading edge genes, if permutation_num >0}"}
{"_id": "q_4698", "text": "Run Gene Set Enrichment Analysis with pre-ranked correlation defined by user.\n\n :param rnk: pre-ranked correlation table or pandas DataFrame. Same input with ``GSEA`` .rnk file.\n :param gene_sets: Enrichr Library name or .gmt gene sets file or dict of gene sets. Same input with GSEA.\n :param outdir: results output directory.\n :param int permutation_num: Number of permutations for significance computation. Default: 1000.\n :param int min_size: Minimum allowed number of genes from gene set also the data set. Default: 15.\n :param int max_size: Maximum allowed number of genes from gene set also the data set. Defaults: 500.\n :param str weighted_score_type: Refer to :func:`algorithm.enrichment_score`. Default:1.\n :param bool ascending: Sorting order of rankings. Default: False.\n :param int processes: Number of Processes you are going to use. Default: 1.\n :param list figsize: Matplotlib figsize, accept a tuple or list, e.g. [width,height]. Default: [6.5,6].\n :param str format: Matplotlib figure format. Default: 'pdf'.\n :param int graph_num: Plot graphs for top sets of each phenotype.\n :param bool no_plot: If equals to True, no figure will be drawn. Default: False.\n :param seed: Random seed. expect an integer. Default:None.\n :param bool verbose: Bool, increase output verbosity, print out progress of your job, Default: False.\n\n :return: Return a Prerank obj. All results store to a dictionary, obj.results,\n where contains::\n\n | {es: enrichment score,\n | nes: normalized enrichment score,\n | p: P-value,\n | fdr: FDR,\n | size: gene set size,\n | matched_size: genes matched to the data,\n | genes: gene names from the data set\n | ledge_genes: leading edge genes}"}
{"_id": "q_4699", "text": "The main function to reproduce GSEA desktop outputs.\n\n :param indir: GSEA desktop results directory. The directory must contain an edb file folder.\n :param outdir: Output directory.\n :param float weighted_score_type: weighted score type. choose from {0,1,1.5,2}. Default: 1.\n :param list figsize: Matplotlib output figure figsize. Default: [6.5,6].\n :param str format: Matplotlib output figure format. Default: 'pdf'.\n :param int min_size: Min size of input genes presented in Gene Sets. Default: 3.\n :param int max_size: Max size of input genes presented in Gene Sets. Default: 5000.\n You are not encouraged to use min_size, or max_size argument in :func:`replot` function.\n Because gmt file has already been filtered.\n :param verbose: Bool, increase output verbosity, print out progress of your job, Default: False.\n\n :return: Generate new figures with selected figure format. Default: 'pdf'."}
{"_id": "q_4700", "text": "load gene set dict"}
{"_id": "q_4701", "text": "download enrichr libraries."}
{"_id": "q_4702", "text": "GSEA main procedure"}
{"_id": "q_4703", "text": "GSEA prerank workflow"}
{"_id": "q_4704", "text": "Single Sample GSEA workflow.\n multiprocessing utility on samples."}
{"_id": "q_4705", "text": "main replot function"}
{"_id": "q_4706", "text": "Enrichr API.\n\n :param gene_list: Flat file with list of genes, one gene id per row, or a python list object\n :param gene_sets: Enrichr Library to query. Required enrichr library name(s). Separate each name by comma.\n :param organism: Enrichr supported organism. Select from (human, mouse, yeast, fly, fish, worm).\n see here for details: https://amp.pharm.mssm.edu/modEnrichr\n :param description: name of analysis. optional.\n :param outdir: Output file directory\n :param float cutoff: Adjusted P-value (benjamini-hochberg correction) cutoff. Default: 0.05\n :param int background: BioMart dataset name for retrieving background gene information.\n This argument only works when gene_sets input is a gmt file or python dict.\n You could also specify a number by yourself, e.g. total expressed genes number.\n In this case, you will skip retrieving background infos from biomart.\n \n Use the code below to see valid background dataset names from BioMart.\n Here are example code:\n >>> from gseapy.parser import Biomart \n >>> bm = Biomart(verbose=False, host=\"asia.ensembl.org\")\n >>> ## view validated marts\n >>> marts = bm.get_marts()\n >>> ## view validated dataset\n >>> datasets = bm.get_datasets(mart='ENSEMBL_MART_ENSEMBL')\n\n :param str format: Output figure format supported by matplotlib,('pdf','png','eps'...). Default: 'pdf'.\n :param list figsize: Matplotlib figsize, accept a tuple or list, e.g. (width,height). Default: (6.5,6).\n :param bool no_plot: If equals to True, no figure will be drawn. Default: False.\n :param bool verbose: Increase output verbosity, print out progress of your job, Default: False.\n\n :return: An Enrichr object, which obj.res2d stores your last query, obj.results stores your all queries."}
{"_id": "q_4707", "text": "parse gene list"}
{"_id": "q_4708", "text": "send gene list to enrichr server"}
{"_id": "q_4709", "text": "Compare the genes sent and received to get successfully recognized genes"}
{"_id": "q_4710", "text": "get background gene"}
{"_id": "q_4711", "text": "Perform the App's actions as configured."}
{"_id": "q_4712", "text": "Initializes client id and client secret based on the settings.\n\n Args:\n settings_instance: An instance of ``django.conf.settings``.\n\n Returns:\n A 2-tuple, the first item is the client id and the second\n item is the client secret."}
{"_id": "q_4713", "text": "Gets a Credentials storage object provided by the Django OAuth2 Helper\n object.\n\n Args:\n request: Reference to the current request object.\n\n Returns:\n An :class:`oauth2.client.Storage` object."}
{"_id": "q_4714", "text": "Helper method to create a redirect response with URL params.\n\n This builds a redirect string that converts kwargs into a\n query string.\n\n Args:\n url_name: The name of the url to redirect to.\n kwargs: the query string param and their values to build.\n\n Returns:\n A properly formatted redirect string."}
{"_id": "q_4715", "text": "Gets the authorized credentials for this flow, if they exist."}
{"_id": "q_4716", "text": "Returns the scopes associated with this object, kept up to\n date for incremental auth."}
{"_id": "q_4717", "text": "Retrieve stored credential.\n\n Returns:\n A :class:`oauth2client.Credentials` instance or `None`."}
{"_id": "q_4718", "text": "Write a credentials to the SQLAlchemy datastore.\n\n Args:\n credentials: :class:`oauth2client.Credentials`"}
{"_id": "q_4719", "text": "Delete credentials from the SQLAlchemy datastore."}
{"_id": "q_4720", "text": "Utility function that creates JSON repr. of a credentials object.\n\n Over-ride is needed since PKCS#12 keys will not in general be JSON\n serializable.\n\n Args:\n strip: array, An array of names of members to exclude from the\n JSON.\n to_serialize: dict, (Optional) The properties for this object\n that will be serialized. This allows callers to\n modify before serializing.\n\n Returns:\n string, a JSON representation of this instance, suitable to pass to\n from_json()."}
{"_id": "q_4721", "text": "Helper for factory constructors from JSON keyfile.\n\n Args:\n keyfile_dict: dict-like object, The parsed dictionary-like object\n containing the contents of the JSON keyfile.\n scopes: List or string, Scopes to use when acquiring an\n access token.\n token_uri: string, URI for OAuth 2.0 provider token endpoint.\n If unset and not present in keyfile_dict, defaults\n to Google's endpoints.\n revoke_uri: string, URI for OAuth 2.0 provider revoke endpoint.\n If unset and not present in keyfile_dict, defaults\n to Google's endpoints.\n\n Returns:\n ServiceAccountCredentials, a credentials object created from\n the keyfile contents.\n\n Raises:\n ValueError, if the credential type is not :data:`SERVICE_ACCOUNT`.\n KeyError, if one of the expected keys is not present in\n the keyfile."}
{"_id": "q_4722", "text": "Factory constructor from JSON keyfile by name.\n\n Args:\n filename: string, The location of the keyfile.\n scopes: List or string, (Optional) Scopes to use when acquiring an\n access token.\n token_uri: string, URI for OAuth 2.0 provider token endpoint.\n If unset and not present in the key file, defaults\n to Google's endpoints.\n revoke_uri: string, URI for OAuth 2.0 provider revoke endpoint.\n If unset and not present in the key file, defaults\n to Google's endpoints.\n\n Returns:\n ServiceAccountCredentials, a credentials object created from\n the keyfile.\n\n Raises:\n ValueError, if the credential type is not :data:`SERVICE_ACCOUNT`.\n KeyError, if one of the expected keys is not present in\n the keyfile."}
{"_id": "q_4723", "text": "Factory constructor from parsed JSON keyfile.\n\n Args:\n keyfile_dict: dict-like object, The parsed dictionary-like object\n containing the contents of the JSON keyfile.\n scopes: List or string, (Optional) Scopes to use when acquiring an\n access token.\n token_uri: string, URI for OAuth 2.0 provider token endpoint.\n If unset and not present in keyfile_dict, defaults\n to Google's endpoints.\n revoke_uri: string, URI for OAuth 2.0 provider revoke endpoint.\n If unset and not present in keyfile_dict, defaults\n to Google's endpoints.\n\n Returns:\n ServiceAccountCredentials, a credentials object created from\n the keyfile.\n\n Raises:\n ValueError, if the credential type is not :data:`SERVICE_ACCOUNT`.\n KeyError, if one of the expected keys is not present in\n the keyfile."}
{"_id": "q_4724", "text": "Generate the assertion that will be used in the request."}
{"_id": "q_4725", "text": "Deserialize a JSON-serialized instance.\n\n Inverse to :meth:`to_json`.\n\n Args:\n json_data: dict or string, Serialized JSON (as a string or an\n already parsed dictionary) representing a credential.\n\n Returns:\n ServiceAccountCredentials from the serialized data."}
{"_id": "q_4726", "text": "Create credentials that specify additional claims.\n\n Args:\n claims: dict, key-value pairs for claims.\n\n Returns:\n ServiceAccountCredentials, a copy of the current service account\n credentials with updated claims to use when obtaining access\n tokens."}
{"_id": "q_4727", "text": "Create a signed jwt.\n\n Args:\n http: unused\n additional_claims: dict, additional claims to add to\n the payload of the JWT.\n Returns:\n An AccessTokenInfo with the signed jwt"}
{"_id": "q_4728", "text": "Determine if the current environment is Compute Engine.\n\n Returns:\n Boolean indicating whether or not the current environment is Google\n Compute Engine."}
{"_id": "q_4729", "text": "Detects if the code is running in the App Engine environment.\n\n Returns:\n True if running in the GAE environment, False otherwise."}
{"_id": "q_4730", "text": "Detect if the code is running in the Compute Engine environment.\n\n Returns:\n True if running in the GCE environment, False otherwise."}
{"_id": "q_4731", "text": "Saves a file with read-write permissions on for the owner.\n\n Args:\n filename: String. Absolute path to file.\n json_contents: JSON serializable object to be saved."}
{"_id": "q_4732", "text": "Save the provided GoogleCredentials to the well known file.\n\n Args:\n credentials: the credentials to be saved to the well known file;\n it should be an instance of GoogleCredentials\n well_known_file: the name of the file where the credentials are to be\n saved; this parameter is supposed to be used for\n testing only"}
{"_id": "q_4733", "text": "Get the well known file produced by command 'gcloud auth login'."}
{"_id": "q_4734", "text": "Build the Application Default Credentials from file."}
{"_id": "q_4735", "text": "Verifies a signed JWT id_token.\n\n This function requires PyOpenSSL and because of that it does not work on\n App Engine.\n\n Args:\n id_token: string, A Signed JWT.\n audience: string, The audience 'aud' that the token should be for.\n http: httplib2.Http, instance to use to make the HTTP request. Callers\n should supply an instance that has caching enabled.\n cert_uri: string, URI of the certificates in JSON format to\n verify the JWT against.\n\n Returns:\n The deserialized JSON in the JWT.\n\n Raises:\n oauth2client.crypt.AppIdentityError: if the JWT fails to verify.\n CryptoUnavailableError: if no crypto library is available."}
{"_id": "q_4736", "text": "Exchanges an authorization code for an OAuth2Credentials object.\n\n Args:\n client_id: string, client identifier.\n client_secret: string, client secret.\n scope: string or iterable of strings, scope(s) to request.\n code: string, An authorization code, most likely passed down from\n the client\n redirect_uri: string, this is generally set to 'postmessage' to match\n the redirect_uri that the client specified\n http: httplib2.Http, optional http instance to use to do the fetch\n token_uri: string, URI for token endpoint. For convenience defaults\n to Google's endpoints but any OAuth 2.0 provider can be\n used.\n auth_uri: string, URI for authorization endpoint. For convenience\n defaults to Google's endpoints but any OAuth 2.0 provider\n can be used.\n revoke_uri: string, URI for revoke endpoint. For convenience\n defaults to Google's endpoints but any OAuth 2.0 provider\n can be used.\n device_uri: string, URI for device authorization endpoint. For\n convenience defaults to Google's endpoints but any OAuth\n 2.0 provider can be used.\n pkce: boolean, default: False, Generate and include a \"Proof Key\n for Code Exchange\" (PKCE) with your authorization and token\n requests. This adds security for installed applications that\n cannot protect a client_secret. See RFC 7636 for details.\n code_verifier: bytestring or None, default: None, parameter passed\n as part of the code exchange when pkce=True. If\n None, a code_verifier will automatically be\n generated as part of step1_get_authorize_url(). See\n RFC 7636 for details.\n\n Returns:\n An OAuth2Credentials object.\n\n Raises:\n FlowExchangeError if the authorization code cannot be exchanged for an\n access token"}
{"_id": "q_4737", "text": "Returns OAuth2Credentials from a clientsecrets file and an auth code.\n\n Will create the right kind of Flow based on the contents of the\n clientsecrets file or will raise InvalidClientSecretsError for unknown\n types of Flows.\n\n Args:\n filename: string, File name of clientsecrets.\n scope: string or iterable of strings, scope(s) to request.\n code: string, An authorization code, most likely passed down from\n the client\n message: string, A friendly string to display to the user if the\n clientsecrets file is missing or invalid. If message is\n provided then sys.exit will be called in the case of an error.\n If message in not provided then\n clientsecrets.InvalidClientSecretsError will be raised.\n redirect_uri: string, this is generally set to 'postmessage' to match\n the redirect_uri that the client specified\n http: httplib2.Http, optional http instance to use to do the fetch\n cache: An optional cache service client that implements get() and set()\n methods. See clientsecrets.loadfile() for details.\n device_uri: string, OAuth 2.0 device authorization endpoint\n pkce: boolean, default: False, Generate and include a \"Proof Key\n for Code Exchange\" (PKCE) with your authorization and token\n requests. This adds security for installed applications that\n cannot protect a client_secret. See RFC 7636 for details.\n code_verifier: bytestring or None, default: None, parameter passed\n as part of the code exchange when pkce=True. If\n None, a code_verifier will automatically be\n generated as part of step1_get_authorize_url(). See\n RFC 7636 for details.\n\n Returns:\n An OAuth2Credentials object.\n\n Raises:\n FlowExchangeError: if the authorization code cannot be exchanged for an\n access token\n UnknownClientSecretsFlowError: if the file describes an unknown kind\n of Flow.\n clientsecrets.InvalidClientSecretsError: if the clientsecrets file is\n invalid."}
{"_id": "q_4738", "text": "Utility class method to instantiate a Credentials subclass from JSON.\n\n Expects the JSON string to have been produced by to_json().\n\n Args:\n json_data: string or bytes, JSON from to_json().\n\n Returns:\n An instance of the subclass of Credentials that was serialized with\n to_json()."}
{"_id": "q_4739", "text": "Write a credential.\n\n The Storage lock must be held when this is called.\n\n Args:\n credentials: Credentials, the credentials to store."}
{"_id": "q_4740", "text": "Verify that the credentials are authorized for the given scopes.\n\n Returns True if the credentials authorized scopes contain all of the\n scopes given.\n\n Args:\n scopes: list or string, the scopes to check.\n\n Notes:\n There are cases where the credentials are unaware of which scopes\n are authorized. Notably, credentials obtained and stored before\n this code was added will not have scopes, AccessTokenCredentials do\n not have scopes. In both cases, you can use refresh_scopes() to\n obtain the canonical set of scopes."}
{"_id": "q_4741", "text": "Return the access token and its expiration information.\n\n If the token does not exist, get one.\n If the token expired, refresh it."}
{"_id": "q_4742", "text": "Return the number of seconds until this token expires.\n\n If token_expiry is in the past, this method will return 0, meaning the\n token has already expired.\n\n If token_expiry is None, this method will return None. Note that\n returning 0 in such a case would not be fair: the token may still be\n valid; we just don't know anything about it."}
{"_id": "q_4743", "text": "Refreshes the access_token.\n\n This method first checks by reading the Storage object if available.\n If a refresh is still needed, it holds the Storage lock until the\n refresh is completed.\n\n Args:\n http: an object to be used to make HTTP requests.\n\n Raises:\n HttpAccessTokenRefreshError: When the refresh fails."}
{"_id": "q_4744", "text": "Refresh the access_token using the refresh_token.\n\n Args:\n http: an object to be used to make HTTP requests.\n\n Raises:\n HttpAccessTokenRefreshError: When the refresh fails."}
{"_id": "q_4745", "text": "Retrieves the list of authorized scopes from the OAuth2 provider.\n\n Args:\n http: an object to be used to make HTTP requests.\n token: A string used as the token to identify the credentials to\n the provider.\n\n Raises:\n Error: When refresh fails, indicating that the access token is\n invalid."}
{"_id": "q_4746", "text": "Attempts to get implicit credentials from local credential files.\n\n First checks if the environment variable GOOGLE_APPLICATION_CREDENTIALS\n is set with a filename and then falls back to a configuration file (the\n \"well known\" file) associated with the 'gcloud' command line tool.\n\n Returns:\n Credentials object associated with the\n GOOGLE_APPLICATION_CREDENTIALS file or the \"well known\" file if\n either exists. If neither file is defined, returns None, indicating\n that no credentials from a file can be detected in the current\n environment."}
{"_id": "q_4747", "text": "Gets credentials implicitly from the environment.\n\n Checks environment in order of precedence:\n - Environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to\n a file with stored credentials information.\n - Stored \"well known\" file associated with `gcloud` command line tool.\n - Google App Engine (production and testing)\n - Google Compute Engine production environment.\n\n Raises:\n ApplicationDefaultCredentialsError: raised when the credentials\n fail to be retrieved."}
{"_id": "q_4748", "text": "Create a Credentials object by reading information from a file.\n\n It returns an object of type GoogleCredentials.\n\n Args:\n credential_filename: the path to the file from where the\n credentials are to be read\n\n Raises:\n ApplicationDefaultCredentialsError: raised when the credentials\n fail to be retrieved."}
{"_id": "q_4749", "text": "Create a DeviceFlowInfo from a server response.\n\n The response should be a dict containing entries as described here:\n\n http://tools.ietf.org/html/draft-ietf-oauth-v2-05#section-3.7.1"}
{"_id": "q_4750", "text": "Returns a URI to redirect to the provider.\n\n Args:\n redirect_uri: string, Either the string 'urn:ietf:wg:oauth:2.0:oob'\n for a non-web-based application, or a URI that\n handles the callback from the authorization server.\n This parameter is deprecated, please move to passing\n the redirect_uri in via the constructor.\n state: string, Opaque state string which is passed through the\n OAuth2 flow and returned to the client as a query parameter\n in the callback.\n\n Returns:\n A URI as a string to redirect the user to begin the authorization\n flow."}
{"_id": "q_4751", "text": "Returns a user code and the verification URL where to enter it\n\n Returns:\n A user code as a string for the user to authorize the application\n An URL as a string where the user has to enter the code"}
{"_id": "q_4752", "text": "Construct an RsaVerifier instance from a string.\n\n Args:\n key_pem: string, public key in PEM format.\n is_x509_cert: bool, True if key_pem is an X509 cert, otherwise it\n is expected to be an RSA key in PEM format.\n\n Returns:\n RsaVerifier instance.\n\n Raises:\n ValueError: if the key_pem can't be parsed. In either case, error\n will begin with 'No PEM start marker'. If\n ``is_x509_cert`` is True, will fail to find the\n \"-----BEGIN CERTIFICATE-----\" error, otherwise fails\n to find \"-----BEGIN RSA PUBLIC KEY-----\"."}
{"_id": "q_4753", "text": "Construct an RsaSigner instance from a string.\n\n Args:\n key: string, private key in PEM format.\n password: string, password for private key file. Unused for PEM\n files.\n\n Returns:\n RsaSigner instance.\n\n Raises:\n ValueError if the key cannot be parsed as PKCS#1 or PKCS#8 in\n PEM format."}
{"_id": "q_4754", "text": "Load credentials from the given file handle.\n\n The file is expected to be in this format:\n\n {\n \"file_version\": 2,\n \"credentials\": {\n \"key\": \"base64 encoded json representation of credentials.\"\n }\n }\n\n This function will warn and return empty credentials instead of raising\n exceptions.\n\n Args:\n credentials_file: An open file handle.\n\n Returns:\n A dictionary mapping user-defined keys to an instance of\n :class:`oauth2client.client.Credentials`."}
{"_id": "q_4755", "text": "Retrieves the current credentials from the store.\n\n Returns:\n An instance of :class:`oauth2client.client.Credentials` or `None`."}
{"_id": "q_4756", "text": "A decorator to declare that only the first N arguments may be positional.\n\n This decorator makes it easy to support Python 3 style keyword-only\n parameters. For example, in Python 3 it is possible to write::\n\n def fn(pos1, *, kwonly1=None, kwonly2=None):\n ...\n\n All named parameters after ``*`` must be a keyword::\n\n fn(10, 'kw1', 'kw2') # Raises exception.\n fn(10, kwonly1='kw1') # Ok.\n\n Example\n ^^^^^^^\n\n To define a function like above, do::\n\n @positional(1)\n def fn(pos1, kwonly1=None, kwonly2=None):\n ...\n\n If no default value is provided to a keyword argument, it becomes a\n required keyword argument::\n\n @positional(0)\n def fn(required_kw):\n ...\n\n This must be called with the keyword parameter::\n\n fn() # Raises exception.\n fn(10) # Raises exception.\n fn(required_kw=10) # Ok.\n\n When defining instance or class methods always remember to account for\n ``self`` and ``cls``::\n\n class MyClass(object):\n\n @positional(2)\n def my_method(self, pos1, kwonly1=None):\n ...\n\n @classmethod\n @positional(2)\n def my_method(cls, pos1, kwonly1=None):\n ...\n\n The positional decorator behavior is controlled by\n ``_helpers.positional_parameters_enforcement``, which may be set to\n ``POSITIONAL_EXCEPTION``, ``POSITIONAL_WARNING`` or\n ``POSITIONAL_IGNORE`` to raise an exception, log a warning, or do\n nothing, respectively, if a declaration is violated.\n\n Args:\n max_positional_arguments: Maximum number of positional arguments. All\n parameters after this index must be\n keyword only.\n\n Returns:\n A decorator that prevents using arguments after max_positional_args\n from being used as positional parameters.\n\n Raises:\n TypeError: if a keyword-only argument is provided as a positional\n parameter, but only if\n _helpers.positional_parameters_enforcement is set to\n POSITIONAL_EXCEPTION."}
{"_id": "q_4757", "text": "Converts stringified scope value to a list.\n\n If scopes is a list then it is simply passed through. If scopes is a\n string then a list of each individual scope is returned.\n\n Args:\n scopes: a string or iterable of strings, the scopes.\n\n Returns:\n The scopes in a list."}
{"_id": "q_4758", "text": "Parses unique key-value parameters from urlencoded content.\n\n Args:\n content: string, URL-encoded key-value pairs.\n\n Returns:\n dict, The key-value pairs from ``content``.\n\n Raises:\n ValueError: if one of the keys is repeated."}
{"_id": "q_4759", "text": "Updates a URI with new query parameters.\n\n If a given key from ``params`` is repeated in the ``uri``, then\n the URI will be considered invalid and an error will occur.\n\n If the URI is valid, then each value from ``params`` will\n replace the corresponding value in the query parameters (if\n it exists).\n\n Args:\n uri: string, A valid URI, with potential existing query parameters.\n params: dict, A dictionary of query parameters.\n\n Returns:\n The same URI but with the new query parameters added."}
{"_id": "q_4760", "text": "Adds a query parameter to a url.\n\n Replaces the current value if it already exists in the URL.\n\n Args:\n url: string, url to add the query parameter to.\n name: string, query parameter name.\n value: string, query parameter value.\n\n Returns:\n The updated url. Does not update the url if value is None."}
{"_id": "q_4761", "text": "Adds a user-agent to the headers.\n\n Args:\n headers: dict, request headers to add / modify user\n agent within.\n user_agent: str, the user agent to add.\n\n Returns:\n dict, the original headers passed in, but modified if the\n user agent is not None."}
{"_id": "q_4762", "text": "Forces header keys and values to be strings, i.e. not unicode.\n\n The httplib module simply concatenates the header keys and values in a way\n that may make the message header a unicode string, which, if it then tries\n to concatenate to a binary request body, may result in a unicode decode\n error.\n\n Args:\n headers: dict, A dictionary of headers.\n\n Returns:\n The same dictionary but with all the keys converted to strings."}
{"_id": "q_4763", "text": "Prepares an HTTP object's request method for auth.\n\n Wraps HTTP requests with logic to catch auth failures (typically\n identified via a 401 status code). In the event of failure, tries\n to refresh the token used and then retry the original request.\n\n Args:\n credentials: Credentials, the credentials used to identify\n the authenticated user.\n http: httplib2.Http, an http object to be used to make\n auth requests."}
{"_id": "q_4764", "text": "Prepares an HTTP object's request method for JWT access.\n\n Wraps HTTP requests with logic to catch auth failures (typically\n identified via a 401 status code). In the event of failure, tries\n to refresh the token used and then retry the original request.\n\n Args:\n credentials: _JWTAccessCredentials, the credentials used to identify\n a service account that uses JWT access tokens.\n http: httplib2.Http, an http object to be used to make\n auth requests."}
{"_id": "q_4765", "text": "Retrieves the flow instance associated with a given CSRF token from\n the Flask session."}
{"_id": "q_4766", "text": "Loads oauth2 configuration in order of priority.\n\n Priority:\n 1. Config passed to the constructor or init_app.\n 2. Config passed via the GOOGLE_OAUTH2_CLIENT_SECRETS_FILE app\n config.\n 3. Config passed via the GOOGLE_OAUTH2_CLIENT_ID and\n GOOGLE_OAUTH2_CLIENT_SECRET app config.\n\n Raises:\n ValueError if no config could be found."}
{"_id": "q_4767", "text": "Flask view that starts the authorization flow.\n\n Starts flow by redirecting the user to the OAuth2 provider."}
{"_id": "q_4768", "text": "The credentials for the current user or None if unavailable."}
{"_id": "q_4769", "text": "Returns True if there are valid credentials for the current user."}
{"_id": "q_4770", "text": "Returns the user's email address or None if there are no credentials.\n\n The email address is provided by the current credentials' id_token.\n This should not be used as unique identifier as the user can change\n their email. If you need a unique identifier, use user_id."}
{"_id": "q_4771", "text": "Fetch an oauth token for the given service account.\n\n Args:\n http: an object to be used to make HTTP requests.\n service_account: An email specifying the service account this token\n should represent. Default will be a token for the \"default\" service\n account of the current compute engine instance.\n\n Returns:\n A tuple of (access token, token expiration), where access token is the\n access token as a string and token expiration is a datetime object\n that indicates when the access token will expire."}
{"_id": "q_4772", "text": "Composes the value for the 'state' parameter.\n\n Packs the current request URI and an XSRF token into an opaque string that\n can be passed to the authentication server via the 'state' parameter.\n\n Args:\n request_handler: webapp.RequestHandler, The request.\n user: google.appengine.api.users.User, The current user.\n\n Returns:\n The state value as a string."}
{"_id": "q_4773", "text": "Creates an OAuth2Decorator populated from a clientsecrets file.\n\n Args:\n filename: string, File name of client secrets.\n scope: string or list of strings, scope(s) of the credentials being\n requested.\n message: string, A friendly string to display to the user if the\n clientsecrets file is missing or invalid. The message may\n contain HTML and will be presented on the web interface for\n any method that uses the decorator.\n cache: An optional cache service client that implements get() and set()\n methods. See clientsecrets.loadfile() for details.\n\n Returns: An OAuth2Decorator"}
{"_id": "q_4774", "text": "Get the email for the current service account.\n\n Returns:\n string, The email associated with the Google App Engine\n service account."}
{"_id": "q_4775", "text": "Determine whether the model of the instance is an NDB model.\n\n Returns:\n Boolean indicating whether or not the model is an NDB or DB model."}
{"_id": "q_4776", "text": "Retrieve entity from datastore.\n\n Uses a different model method for db or ndb models.\n\n Returns:\n Instance of the model corresponding to the current storage object\n and stored using the key name of the storage object."}
{"_id": "q_4777", "text": "Delete entity from datastore.\n\n Attempts to delete using the key_name stored on the object, whether or\n not the given key is in the datastore."}
{"_id": "q_4778", "text": "Write a Credentials to the datastore.\n\n Args:\n credentials: Credentials, the credentials to store."}
{"_id": "q_4779", "text": "Delete Credential from datastore."}
{"_id": "q_4780", "text": "Decorator that starts the OAuth 2.0 dance.\n\n Starts the OAuth dance for the logged in user if they haven't already\n granted access for this application.\n\n Args:\n method: callable, to be decorated method of a webapp.RequestHandler\n instance."}
{"_id": "q_4781", "text": "Decorator that sets up for OAuth 2.0 dance, but doesn't do it.\n\n Does all the setup for the OAuth dance, but doesn't initiate it.\n This decorator is useful if you want to create a page that knows\n whether or not the user has granted access to this application.\n From within a method decorated with @oauth_aware the has_credentials()\n and authorize_url() methods can be called.\n\n Args:\n method: callable, to be decorated method of a webapp.RequestHandler\n instance."}
{"_id": "q_4782", "text": "Validate parsed client secrets from a file.\n\n Args:\n clientsecrets_dict: dict, a dictionary holding the client secrets.\n\n Returns:\n tuple, a string of the client type and the information parsed\n from the file."}
{"_id": "q_4783", "text": "Communicate with the Developer Shell server socket."}
{"_id": "q_4784", "text": "Core code for a command-line application.\n\n The ``run()`` function is called from your application and runs\n through all the steps to obtain credentials. It takes a ``Flow``\n argument and attempts to open an authorization server page in the\n user's default web browser. The server asks the user to grant your\n application access to the user's data. If the user grants access,\n the ``run()`` function returns new credentials. The new credentials\n are also stored in the ``storage`` argument, which updates the file\n associated with the ``Storage`` object.\n\n It presumes it is run from a command-line application and supports the\n following flags:\n\n ``--auth_host_name`` (string, default: ``localhost``)\n Host name to use when running a local web server to handle\n redirects during OAuth authorization.\n\n ``--auth_host_port`` (integer, default: ``[8080, 8090]``)\n Port to use when running a local web server to handle redirects\n during OAuth authorization. Repeat this option to specify a list\n of values.\n\n ``--[no]auth_local_webserver`` (boolean, default: ``True``)\n Run a local web server to handle redirects during OAuth\n authorization.\n\n The tools module defines an ``ArgumentParser`` that already contains the\n flag definitions that ``run()`` requires. You can pass that\n ``ArgumentParser`` to your ``ArgumentParser`` constructor::\n\n parser = argparse.ArgumentParser(\n description=__doc__,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n parents=[tools.argparser])\n flags = parser.parse_args(argv)\n\n Args:\n flow: Flow, an OAuth 2.0 Flow to step through.\n storage: Storage, a ``Storage`` to store the credential in.\n flags: ``argparse.Namespace``, (Optional) The command-line flags. This\n is the object returned from calling ``parse_args()`` on\n ``argparse.ArgumentParser`` as described above. Defaults\n to ``argparser.parse_args()``.\n http: An instance of ``httplib2.Http.request`` or something that\n acts like it.\n\n Returns:\n Credentials, the obtained credential."}
{"_id": "q_4785", "text": "Handle a GET request.\n\n Parses the query parameters and prints a message\n if the flow has completed. Note that we can't detect\n if an error occurred."}
{"_id": "q_4786", "text": "Creates a 'code_challenge' as described in section 4.2 of RFC 7636\n by taking the sha256 hash of the verifier and then urlsafe\n base64-encoding it.\n\n Args:\n verifier: bytestring, representing a code_verifier as generated by\n code_verifier().\n\n Returns:\n Bytestring, representing a urlsafe base64-encoded sha256 hash digest,\n without '=' padding."}
{"_id": "q_4787", "text": "Retrieve stored credential from the Django ORM.\n\n Returns:\n oauth2client.Credentials retrieved from the Django ORM, associated\n with the ``model``, ``key_value``->``key_name`` pair used to query\n for the model, and ``property_name`` identifying the\n ``CredentialsProperty`` field, all of which are defined in the\n constructor for this Storage object."}
{"_id": "q_4788", "text": "Delete Credentials from the datastore."}
{"_id": "q_4789", "text": "Retrieve the credentials from the dictionary, if they exist.\n\n Returns: A :class:`oauth2client.client.OAuth2Credentials` instance."}
{"_id": "q_4790", "text": "Save the credentials to the dictionary.\n\n Args:\n credentials: A :class:`oauth2client.client.OAuth2Credentials`\n instance."}
{"_id": "q_4791", "text": "Validates a value as a proper Flow object.\n\n Args:\n value: A value to be set on the property.\n\n Raises:\n TypeError if the value is not an instance of Flow."}
{"_id": "q_4792", "text": "Converts our stored JSON string back to the desired type.\n\n Args:\n value: A value from the datastore to be converted to the\n desired type.\n\n Returns:\n A deserialized Credentials (or subclass) object, else None if\n the value can't be parsed."}
{"_id": "q_4793", "text": "Looks up the flow in session to recover information about requested\n scopes.\n\n Args:\n csrf_token: The token passed in the callback request that should\n match the one previously generated and stored in the request on the\n initial authorization view.\n\n Returns:\n The OAuth2 Flow object associated with this flow based on the\n CSRF token."}
{"_id": "q_4794", "text": "View that handles the user's return from OAuth2 provider.\n\n This view verifies the CSRF state and OAuth authorization code, and on\n success stores the credentials obtained in the storage provider,\n and redirects to the return_url specified in the authorize view and\n stored in the session.\n\n Args:\n request: Django request.\n\n Returns:\n A redirect response back to the return_url."}
{"_id": "q_4795", "text": "View to start the OAuth2 Authorization flow.\n\n This view starts the OAuth2 authorization flow. If scopes is passed in\n as a GET URL parameter, it will authorize those scopes, otherwise the\n default scopes specified in settings. The return_url can also be\n specified as a GET parameter, otherwise the referer header will be\n checked, and if that isn't found it will return to the root path.\n\n Args:\n request: The Django request object.\n\n Returns:\n A redirect to Google OAuth2 Authorization."}
{"_id": "q_4796", "text": "Create an empty file if necessary.\n\n This method will not initialize the file. Instead it implements a\n simple version of \"touch\" to ensure the file has been created."}
{"_id": "q_4797", "text": "Overrides ``models.Field`` method. This is used to convert\n the value from an instances of this class to bytes that can be\n inserted into the database."}
{"_id": "q_4798", "text": "Convert the field value from the provided model to a string.\n\n Used during model serialization.\n\n Args:\n obj: db.Model, model object\n\n Returns:\n string, the serialized field value"}
{"_id": "q_4799", "text": "Make a signed JWT.\n\n See http://self-issued.info/docs/draft-jones-json-web-token.html.\n\n Args:\n signer: crypt.Signer, Cryptographic signer.\n payload: dict, Dictionary of data to convert to JSON and then sign.\n key_id: string, (Optional) Key ID header.\n\n Returns:\n string, The JWT for the payload."}
{"_id": "q_4800", "text": "Verifies signed content using a list of certificates.\n\n Args:\n message: string or bytes, The message to verify.\n signature: string or bytes, The signature on the message.\n certs: iterable, certificates in PEM format.\n\n Raises:\n AppIdentityError: If none of the certificates can verify the message\n against the signature."}
{"_id": "q_4801", "text": "Checks audience field from a JWT payload.\n\n Does nothing if the passed in ``audience`` is null.\n\n Args:\n payload_dict: dict, A dictionary containing a JWT payload.\n audience: string or NoneType, an audience to check for in\n the JWT payload.\n\n Raises:\n AppIdentityError: If there is no ``'aud'`` field in the payload\n dictionary but there is an ``audience`` to check.\n AppIdentityError: If the ``'aud'`` field in the payload dictionary\n does not match the ``audience``."}
{"_id": "q_4802", "text": "Verifies the issued at and expiration from a JWT payload.\n\n Makes sure the current time (in UTC) falls between the issued at and\n expiration for the JWT (with some skew allowed for via\n ``CLOCK_SKEW_SECS``).\n\n Args:\n payload_dict: dict, A dictionary containing a JWT payload.\n\n Raises:\n AppIdentityError: If there is no ``'iat'`` field in the payload\n dictionary.\n AppIdentityError: If there is no ``'exp'`` field in the payload\n dictionary.\n AppIdentityError: If the JWT expiration is too far in the future (i.e.\n if the expiration would imply a token lifetime\n longer than what is allowed.)\n AppIdentityError: If the token appears to have been issued in the\n future (up to clock skew).\n AppIdentityError: If the token appears to have expired in the past\n (up to clock skew)."}
{"_id": "q_4803", "text": "Verify a JWT against public certs.\n\n See http://self-issued.info/docs/draft-jones-json-web-token.html.\n\n Args:\n jwt: string, A JWT.\n certs: dict, Dictionary where values are public keys in PEM format.\n audience: string, The audience, 'aud', that this JWT should contain. If\n None then the JWT's 'aud' parameter is not verified.\n\n Returns:\n dict, The deserialized JSON payload in the JWT.\n\n Raises:\n AppIdentityError: if any check fails."}
{"_id": "q_4804", "text": "Create a cube primitive\n\n Note that this is made of 6 quads, not triangles"}
{"_id": "q_4805", "text": "Create an icosphere mesh\n\n radius: Radius of the sphere\n subdivisions: Subdivision level; number of recursive subdivisions of the\n surface. Default is 3 (a sphere approximation composed of 1280 faces).\n Admitted values are in the range 0 (an icosahedron) to 8 (a 1.3 MegaTris\n approximation of a sphere). Formula for number of faces: F=20*4^subdiv\n color: specify a color name to apply vertex colors to the newly\n created mesh"}
{"_id": "q_4806", "text": "Create a box with user defined number of segments in each direction.\n\n Grid spacing is the same as its dimensions (spacing = 1) and its\n thickness is one. Intended to be used for e.g. deforming using functions\n or a height map (lithopanes) and can be resized after creation.\n\n Warnings: function uses layers.join\n\n top_option\n 0 open\n 1 full\n 2 simple\n bottom_option\n 0 open\n 1 full\n 2 simple"}
{"_id": "q_4807", "text": "Check if a variable is a list and is the correct length.\n\n If variable is not a list it will make it a list of the correct length with\n all terms identical."}
{"_id": "q_4808", "text": "Write filter to FilterScript object or filename\n\n Args:\n script (FilterScript object or filename str): the FilterScript object\n or script filename to write the filter to.\n filter_xml (str): the xml filter string"}
{"_id": "q_4809", "text": "Apply LS3 Subdivision Surface algorithm using Loop's weights.\n\n This refinement method takes normals into account.\n See: Boye', S. Guennebaud, G. & Schlick, C.\n \"Least squares subdivision surfaces\"\n Computer Graphics Forum, 2010.\n\n Alternative weighting schemes are based on the paper:\n Barthe, L. & Kobbelt, L.\n \"Subdivision scheme tuning around extraordinary vertices\"\n Computer Aided Geometric Design, 2004, 21, 561-583.\n\n The current implementation of these schemes doesn't handle vertices of\n valence > 12.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n iterations (int): Number of times the model is subdivided.\n loop_weight (int): Change the weights used. Allows optimizing some\n behaviours at the expense of others. Valid values are:\n 0 - Loop (default)\n 1 - Enhance regularity\n 2 - Enhance continuity\n edge_threshold (float): All the edges longer than this threshold will\n be refined. Setting this value to zero will force a uniform\n refinement.\n selected (bool): If selected the filter is performed only on the\n selected faces.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4810", "text": "Merge together all the vertices that are nearer than the specified\n threshold. Like a unify duplicate vertices but with some tolerance.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n threshold (float): Merging distance. All the vertices that are closer\n than this threshold are merged together. Use very small values,\n default is zero.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4811", "text": "Split non-manifold vertices until it becomes two-manifold.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n vert_displacement_ratio (float): When a vertex is split it is moved\n along the average vector going from its position to the centroid\n of the FF connected faces sharing it.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4812", "text": "Try to snap together adjacent borders that are slightly mismatched.\n\n This situation can happen on badly triangulated adjacent patches defined by\n high order surfaces. For each border vertex the filter snaps it onto the\n closest boundary edge, but only if it is closer than edge_length*threshold.\n When a vertex is snapped the corresponding face is split and a new vertex\n is created.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n edge_dist_ratio (float): Collapse edge when the edge / distance ratio\n is greater than this value. E.g. for default value 1000 two\n straight border edges are collapsed if the central vertex dist from\n the straight line composed by the two edges is less than 1/1000 of\n the sum of the edges length. Larger values enforce that only\n vertices very close to the line are removed.\n unify_vert (bool): If true the snap vertices are welded together.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4813", "text": "An alternative scale implementation that uses a geometric function.\n This is more accurate than the built-in version."}
{"_id": "q_4814", "text": "Deform mesh around cylinder of radius and axis z\n\n y = 0 will be on the surface of radius \"radius\"\n pitch != 0 will create a helix, with distance \"pitch\" traveled in z for each rotation\n taper = change in r over z. E.g. a value of 0.5 will shrink r by 0.5 for every z length of 1"}
{"_id": "q_4815", "text": "Bends mesh around cylinder of radius radius and axis z to a certain angle\n\n straight_ends: Only apply twist (pitch) over the area that is bent\n\n outside_limit_end (bool): should values outside of the bend radius_limit be considered part\n of the end (True) or the start (False)?"}
{"_id": "q_4816", "text": "Transfer vertex colors to texture colors\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n tex_name (str): The texture file to be created\n tex_width (int): The texture width\n tex_height (int): The texture height\n overwrite_tex (bool): If current mesh has a texture will be overwritten (with provided texture dimension)\n assign_tex (bool): Assign the newly created texture\n fill_tex (bool): If enabled the unmapped texture space is colored using a pull push filling algorithm, if false is set to black"}
{"_id": "q_4817", "text": "Transfer mesh colors to face colors\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n all_visible_layers (bool): If true the color mapping is applied to all the meshes"}
{"_id": "q_4818", "text": "This surface reconstruction algorithm creates watertight\n surfaces from oriented point sets.\n\n The filter uses the original code of Michael Kazhdan and Matthew Bolitho\n implementing the algorithm in the following paper:\n\n Michael Kazhdan, Hugues Hoppe,\n \"Screened Poisson surface reconstruction\"\n ACM Trans. Graphics, 32(3), 2013\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n visible_layer (bool): If True all the visible layers will be used for\n providing the points\n depth (int): This integer is the maximum depth of the tree that will\n be used for surface reconstruction. Running at depth d corresponds\n to solving on a voxel grid whose resolution is no larger than\n 2^d x 2^d x 2^d. Note that since the reconstructor adapts the\n octree to the sampling density, the specified reconstruction depth\n is only an upper bound. The default value for this parameter is 8.\n full_depth (int): This integer specifies the depth beyond which the\n octree will be adapted. At coarser depths, the octree will be\n complete, containing all 2^d x 2^d x 2^d nodes. The default value\n for this parameter is 5.\n cg_depth (int): This integer is the depth up to which a\n conjugate-gradients solver will be used to solve the linear system.\n Beyond this depth Gauss-Seidel relaxation will be used. The default\n value for this parameter is 0.\n scale (float): This floating point value specifies the ratio between\n the diameter of the cube used for reconstruction and the diameter\n of the samples' bounding cube. The default value is 1.1.\n samples_per_node (float): This floating point value specifies the\n minimum number of sample points that should fall within an octree\n node as the octree construction is adapted to sampling density. For\n noise-free samples, small values in the range [1.0 - 5.0] can be\n used. For more noisy samples, larger values in the range\n [15.0 - 20.0] may be needed to provide a smoother, noise-reduced,\n reconstruction. The default value is 1.5.\n point_weight (float): This floating point value specifies the\n importance that interpolation of the point samples is given in the\n formulation of the screened Poisson equation. The results of the\n original (unscreened) Poisson Reconstruction can be obtained by\n setting this value to 0. The default value for this parameter is 4.\n iterations (int): This integer value specifies the number of\n Gauss-Seidel relaxations to be performed at each level of the\n hierarchy. The default value for this parameter is 8.\n confidence (bool): If True this tells the reconstructor to use the\n quality as confidence information; this is done by scaling the unit\n normals with the quality values. When the flag is not enabled, all\n normals are normalized to have unit-length prior to reconstruction.\n pre_clean (bool): If True will force a cleaning pre-pass on the data\n removing all unreferenced vertices or vertices with null normals.\n\n Layer stack:\n Creates 1 new layer 'Poisson mesh'\n Current layer is not changed\n\n MeshLab versions:\n 2016.12"}
{"_id": "q_4819", "text": "Turn a model into a surface with Voronoi style holes in it\n\n References:\n http://meshlabstuff.blogspot.com/2009/03/creating-voronoi-sphere.html\n http://meshlabstuff.blogspot.com/2009/04/creating-voronoi-sphere-2.html\n\n Requires FilterScript object\n\n Args:\n script: the FilterScript object to write the filter to. Does not\n work with a script filename.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4820", "text": "Select all the faces of the current mesh\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n faces (bool): If True the filter will select all the faces.\n verts (bool): If True the filter will select all the vertices.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4821", "text": "Boolean function using muparser lib to perform face selection over\n current mesh.\n\n See help(mlx.muparser_ref) for muparser reference documentation.\n\n It's possible to use parentheses, per-vertex variables and boolean operators:\n (, ), and, or, <, >, =\n It's possible to use per-face variables like attributes associated to the three\n vertices of every face.\n\n Variables (per face):\n x0, y0, z0 for first vertex; x1,y1,z1 for second vertex; x2,y2,z2 for third vertex\n nx0, ny0, nz0, nx1, ny1, nz1, etc. for vertex normals\n r0, g0, b0, a0, etc. for vertex color\n q0, q1, q2 for quality\n wtu0, wtv0, wtu1, wtv1, wtu2, wtv2 (per wedge texture coordinates)\n ti for face texture index (>= ML2016.12)\n vsel0, vsel1, vsel2 for vertex selection (1 yes, 0 no) (>= ML2016.12)\n fr, fg, fb, fa for face color (>= ML2016.12)\n fq for face quality (>= ML2016.12)\n fnx, fny, fnz for face normal (>= ML2016.12)\n fsel face selection (1 yes, 0 no) (>= ML2016.12)\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n function (str): a boolean function that will be evaluated in order\n to select a subset of faces.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4822", "text": "Boolean function using muparser lib to perform vertex selection over current mesh.\n\n See help(mlx.muparser_ref) for muparser reference documentation.\n\n It's possible to use parentheses, per-vertex variables and boolean operators:\n (, ), and, or, <, >, =\n It's possible to use the following per-vertex variables in the expression:\n\n Variables:\n x, y, z (coordinates)\n nx, ny, nz (normal)\n r, g, b, a (color)\n q (quality)\n rad\n vi (vertex index)\n vtu, vtv (texture coordinates)\n ti (texture index)\n vsel (is the vertex selected? 1 yes, 0 no)\n and all custom vertex attributes already defined by user.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n function (str): a boolean function that will be evaluated in order\n to select a subset of vertices. Example: (y > 0) and (ny > 0)\n strict_face_select (bool): if True a face is selected if ALL its\n vertices are selected. If False a face is selected if at least\n one of its vertices is selected. ML v1.3.4BETA only; this is\n ignored in 2016.12. In 2016.12 only vertices are selected.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4823", "text": "Select all vertices within a cylindrical radius\n\n Args:\n radius (float): radius of the cylinder\n center_pt (3 coordinate tuple or list): center point of the cylinder\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4824", "text": "Select all vertices within a spherical radius\n\n Args:\n radius (float): radius of the sphere\n center_pt (3 coordinate tuple or list): center point of the sphere\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4825", "text": "Flatten all or only the visible layers into a single new mesh.\n\n Transformations are preserved. Existing layers can be optionally\n deleted.\n\n Args:\n script: the mlx.FilterScript object or script filename to write\n the filter to.\n merge_visible (bool): merge only visible layers\n merge_vert (bool): merge the vertices that are duplicated among\n different layers. Very useful when the layers are spliced portions\n of a single big mesh.\n delete_layer (bool): delete all the merged layers. If all layers are\n visible only a single layer will remain after the invocation of\n this filter.\n keep_unreferenced_vert (bool): Do not discard unreferenced vertices\n from source layers. Necessary for point-only layers.\n\n Layer stack:\n Creates a new layer \"Merged Mesh\"\n Changes current layer to the new layer\n Optionally deletes all other layers\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA\n\n Bugs:\n UV textures: not currently preserved, however will be in a future\n release. https://github.com/cnr-isti-vclab/meshlab/issues/128\n merge_visible: it is not currently possible to change the layer\n visibility from meshlabserver, however this will be possible\n in the future https://github.com/cnr-isti-vclab/meshlab/issues/123"}
{"_id": "q_4826", "text": "Change the current layer by specifying the new layer number.\n\n Args:\n script: the mlx.FilterScript object or script filename to write\n the filter to.\n layer_num (int): the number of the layer to change to. Default is the\n last layer if script is a mlx.FilterScript object; if script is a\n filename the default is the first layer.\n\n Layer stack:\n Modifies current layer\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4827", "text": "Delete all layers below the specified one.\n\n Useful for MeshLab ver 2016.12, which will only output layer 0."}
{"_id": "q_4828", "text": "Subprocess program error handling\n\n Args:\n program_name (str): name of the subprocess program\n\n Returns:\n break_now (bool): indicate whether calling program should break out of loop"}
{"_id": "q_4829", "text": "Create new mlx script and write opening tags.\n\n Performs special processing on stl files.\n\n If no input files are provided this will create a dummy\n file and delete it as the first filter. This works around\n the meshlab limitation that it must be provided an input\n file, even if you will be creating a mesh as the first\n filter."}
{"_id": "q_4830", "text": "Add new mesh layer to the end of the stack\n\n Args:\n label (str): new label for the mesh layer\n change_layer (bool): change to the newly created layer"}
{"_id": "q_4831", "text": "Delete mesh layer"}
{"_id": "q_4832", "text": "Run main script"}
{"_id": "q_4833", "text": "Create a new layer populated with a point sampling of the current mesh.\n\n Samples are generated according to a Poisson-disk distribution, using the\n algorithm described in:\n\n 'Efficient and Flexible Sampling with Blue Noise Properties of Triangular Meshes'\n Massimiliano Corsini, Paolo Cignoni, Roberto Scopigno\n IEEE TVCG 2012\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n sample_num (int): The desired number of samples. The radius of the disk\n is calculated according to the sampling density.\n radius (float): If not zero this parameter overrides the previous\n parameter to allow exact radius specification.\n montecarlo_rate (int): The over-sampling rate that is used to generate\n the initial Monte Carlo samples (e.g. if this parameter is 'K',\n 'K * sample_num' points will be used). The generated\n Poisson-disk samples are a subset of these initial Monte Carlo\n samples. Larger numbers slow the process but make it a bit more\n accurate.\n save_montecarlo (bool): If True, it will generate an additional Layer\n with the Monte Carlo sampling that was pruned to build the Poisson\n distribution.\n approx_geodesic_dist (bool): If True Poisson-disk distances are\n computed using an approximate geodesic distance, e.g. an Euclidean\n distance weighted by a function of the difference between the\n normals of the two points.\n subsample (bool): If True the original vertices of the base mesh are\n used as base set of points. In this case the sample_num should be\n obviously much smaller than the original vertex number. Note that\n this option is very useful in the case you want to subsample a\n dense point cloud.\n refine (bool): If True the vertices of the refine_layer mesh layer are\n used as starting vertices, and they will be utterly refined by\n adding more and more points until possible.\n refine_layer (int): Used only if refine is True.\n best_sample (bool): If True it will use a simple heuristic for choosing\n the samples. At a small cost (it can slow the process a bit) it\n usually improves the maximality of the generated sampling.\n best_sample_pool (bool): Used only if best_sample is True. It controls\n the number of attempts that it makes to get the best sample. It is\n reasonable that it is smaller than the Monte Carlo oversampling\n factor.\n exact_num (bool): If True it will try to do a dichotomic search for the\n best Poisson-disk radius that will generate the requested number of\n samples with a tolerance of 0.5%. Obviously it takes much\n longer.\n radius_variance (float): The radius of the disk is allowed to vary\n between r and r*var. If this parameter is 1 the sampling is the\n same as the Poisson-disk Sampling.\n\n Layer stack:\n Creates new layer 'Poisson-disk Samples'. Current layer is NOT changed\n to the new layer (see Bugs).\n If save_montecarlo is True, creates a new layer 'Montecarlo Samples'.\n Current layer is NOT changed to the new layer (see Bugs).\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA\n\n Bugs:\n Current layer is NOT changed to the new layer, which is inconsistent\n with the majority of filters that create new layers."}
{"_id": "q_4834", "text": "\"Create a new layer populated with a subsampling of the vertexes of the\n current mesh\n\n The subsampling is driven by a simple one-per-gridded cell strategy.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n cell_size (float): The size of the cell of the clustering grid. Smaller the cell finer the resulting mesh. For obtaining a very coarse mesh use larger values.\n strategy (enum 'AVERAGE' or 'CENTER'): &lt;b>Average&lt;/b>: for each cell we take the average of the sample falling into. The resulting point is a new point.&lt;br>&lt;b>Closest to center&lt;/b>: for each cell we take the sample that is closest to the center of the cell. Choosen vertices are a subset of the original ones.\n selected (bool): If true only for the filter is applied only on the selected subset of the mesh.\n\n Layer stack:\n Creates new layer 'Cluster Samples'. Current layer is changed to the new\n layer.\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4835", "text": "Trivial Per-Triangle parameterization"}
{"_id": "q_4836", "text": "Voronoi Atlas parameterization"}
{"_id": "q_4837", "text": "Compute a set of topological measures over a mesh\n\n Args:\n script: the mlx.FilterScript object or script filename to write\n the filter to.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4838", "text": "Parse the ml_log file generated by the measure_topology function.\n\n Args:\n ml_log (str): MeshLab log file to parse\n log (str): filename to log output\n\n Returns:\n dict: dictionary with the following keys:\n vert_num (int): number of vertices\n edge_num (int): number of edges\n face_num (int): number of faces\n unref_vert_num (int): number or unreferenced vertices\n boundry_edge_num (int): number of boundary edges\n part_num (int): number of parts (components) in the mesh.\n manifold (bool): True if mesh is two-manifold, otherwise false.\n non_manifold_edge (int): number of non_manifold edges.\n non_manifold_vert (int): number of non-manifold verices\n genus (int or str): genus of the mesh, either a number or\n 'undefined' if the mesh is non-manifold.\n holes (int or str): number of holes in the mesh, either a number\n or 'undefined' if the mesh is non-manifold."}
{"_id": "q_4839", "text": "Parse the ml_log file generated by the hausdorff_distance function.\n\n Args:\n ml_log (str): MeshLab log file to parse\n log (str): filename to log output\n\n Returns:\n dict: dictionary with the following keys:\n number_points (int): number of points in mesh\n min_distance (float): minimum hausdorff distance\n max_distance (float): maximum hausdorff distance\n mean_distance (float): mean hausdorff distance\n rms_distance (float): root mean square distance"}
{"_id": "q_4840", "text": "Given a Mesh 'M' and a Pointset 'P', the filter projects each vertex of\n P over M and color M according to the geodesic distance from these\n projected points. Projection and coloring are done on a per vertex\n basis.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n target_layer (int): The mesh layer whose surface is colored. For each\n vertex of this mesh we decide the color according to the following\n arguments.\n source_layer (int): The mesh layer whose vertexes are used as seed\n points for the color computation. These seeds point are projected\n onto the target_layer mesh.\n backward (bool): If True the mesh is colored according to the distance\n from the frontier of the voronoi diagram induced by the\n source_layer seeds.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4841", "text": "Color mesh vertices in a repeating sinusiodal rainbow pattern\n\n Sine wave follows the following equation for each color channel (RGBA):\n channel = sin(freq*increment + phase)*amplitude + center\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n direction (str) = the direction that the sine wave will travel; this\n and the start_pt determine the 'increment' of the sine function.\n Valid values are:\n 'sphere' - radiate sine wave outward from start_pt (default)\n 'x' - sine wave travels along the X axis\n 'y' - sine wave travels along the Y axis\n 'z' - sine wave travels along the Z axis\n or define the increment directly using a muparser function, e.g.\n '2x + y'. In this case start_pt will not be used; include it in\n the function directly.\n start_pt (3 coordinate tuple or list): start point of the sine wave. For a\n sphere this is the center of the sphere.\n amplitude (float [0, 255], single value or 4 term tuple or list): amplitude\n of the sine wave, with range between 0-255. If a single value is\n specified it will be used for all channels, otherwise specify each\n channel individually.\n center (float [0, 255], single value or 4 term tuple or list): center\n of the sine wave, with range between 0-255. If a single value is\n specified it will be used for all channels, otherwise specify each\n channel individually.\n freq (float, single value or 4 term tuple or list): frequency of the sine\n wave. If a single value is specified it will be used for all channels,\n otherwise specifiy each channel individually.\n phase (float [0, 360], single value or 4 term tuple or list): phase\n of the sine wave in degrees, with range between 0-360. If a single\n value is specified it will be used for all channels, otherwise specify\n each channel individually.\n alpha (bool): if False the alpha channel will be set to 255 (full opacity).\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4842", "text": "muparser atan2 function\n\n Implements an atan2(y,x) function for older muparser versions (<2.1.0);\n atan2 was added as a built-in function in muparser 2.1.0\n\n Args:\n y (str): y argument of the atan2(y,x) function\n x (str): x argument of the atan2(y,x) function\n\n Returns:\n A muparser string that calculates atan2(y,x)"}
{"_id": "q_4843", "text": "muparser cross product function\n\n Compute the cross product of two 3x1 vectors\n\n Args:\n u (list or tuple of 3 strings): first vector\n v (list or tuple of 3 strings): second vector\n Returns:\n A list containing a muparser string of the cross product"}
{"_id": "q_4844", "text": "Add a new Per-Vertex scalar attribute to current mesh and fill it with\n the defined function.\n\n The specified name can be used in other filter functions.\n\n It's possible to use parenthesis, per-vertex variables and boolean operator:\n (, ), and, or, <, >, =\n It's possible to use the following per-vertex variables in the expression:\n\n Variables:\n x, y, z (coordinates)\n nx, ny, nz (normal)\n r, g, b, a (color)\n q (quality)\n rad\n vi (vertex index)\n ?vtu, vtv (texture coordinates)\n ?ti (texture index)\n ?vsel (is the vertex selected? 1 yes, 0 no)\n and all custom vertex attributes already defined by user.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter] to.\n name (str): the name of new attribute. You can access attribute in\n other filters through this name.\n function (str): function to calculate custom attribute value for each\n vertex\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4845", "text": "Invert faces orientation, flipping the normals of the mesh.\n\n If requested, it tries to guess the right orientation; mainly it decides to\n flip all the faces if the minimum/maximum vertexes have not outward point\n normals for a few directions. Works well for single component watertight\n objects.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n force_flip (bool): If selected, the normals will always be flipped;\n otherwise, the filter tries to set them outside.\n selected (bool): If selected, only selected faces will be affected.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4846", "text": "Compute the normals of the vertices of a mesh without exploiting the\n triangle connectivity, useful for dataset with no faces.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n neighbors (int): The number of neighbors used to estimate normals.\n smooth_iteration (int): The number of smoothing iteration done on the\n p used to estimate and propagate normals.\n flip (bool): Flip normals w.r.t. viewpoint. If the 'viewpoint' (i.e.\n scanner position) is known, it can be used to disambiguate normals\n orientation, so that all the normals will be oriented in the same\n direction.\n viewpoint_pos (single xyz point, tuple or list): Set the x, y, z\n coordinates of the viewpoint position.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"}
{"_id": "q_4847", "text": "Sort separate line segments in obj format into a continuous polyline or polylines.\n NOT FINISHED; DO NOT USE\n\n Also measures the length of each polyline\n\n Return polyline and polylineMeta (lengths)"}
{"_id": "q_4848", "text": "Measures mesh topology\n\n Args:\n fbasename (str): input filename.\n log (str): filename to log output\n\n Returns:\n dict: dictionary with the following keys:\n vert_num (int): number of vertices\n edge_num (int): number of edges\n face_num (int): number of faces\n unref_vert_num (int): number or unreferenced vertices\n boundry_edge_num (int): number of boundary edges\n part_num (int): number of parts (components) in the mesh.\n manifold (bool): True if mesh is two-manifold, otherwise false.\n non_manifold_edge (int): number of non_manifold edges.\n non_manifold_vert (int): number of non-manifold verices\n genus (int or str): genus of the mesh, either a number or\n 'undefined' if the mesh is non-manifold.\n holes (int or str): number of holes in the mesh, either a number\n or 'undefined' if the mesh is non-manifold."}
{"_id": "q_4849", "text": "Measures mesh geometry, aabb and topology."}
{"_id": "q_4850", "text": "Measure a dimension of a mesh"}
{"_id": "q_4851", "text": "This is a helper used by UploadSet.save to provide lowercase extensions for\n all processed files, to compare with configured extensions in the same\n case.\n\n .. versionchanged:: 0.1.4\n Filenames without extensions are no longer lowercased, only the\n extension is returned in lowercase, if an extension exists.\n\n :param filename: The filename to ensure has a lowercase extension."}
{"_id": "q_4852", "text": "By default, Flask will accept uploads to an arbitrary size. While Werkzeug\n switches uploads from memory to a temporary file when they hit 500 KiB,\n it's still possible for someone to overload your disk space with a\n gigantic file.\n\n This patches the app's request class's\n `~werkzeug.BaseRequest.max_content_length` attribute so that any upload\n larger than the given size is rejected with an HTTP error.\n\n .. note::\n\n In Flask 0.6, you can do this by setting the `MAX_CONTENT_LENGTH`\n setting, without patching the request class. To emulate this behavior,\n you can pass `None` as the size (you must pass it explicitly). That is\n the best way to call this function, as it won't break the Flask 0.6\n functionality if it exists.\n\n .. versionchanged:: 0.1.1\n\n :param app: The app to patch the request class of.\n :param size: The maximum size to accept, in bytes. The default is 64 MiB.\n If it is `None`, the app's `MAX_CONTENT_LENGTH` configuration\n setting will be used to patch."}
{"_id": "q_4853", "text": "This is a helper function for `configure_uploads` that extracts the\n configuration for a single set.\n\n :param uset: The upload set.\n :param app: The app to load the configuration from.\n :param defaults: A dict with keys `url` and `dest` from the\n `UPLOADS_DEFAULT_DEST` and `DEFAULT_UPLOADS_URL`\n settings."}
{"_id": "q_4854", "text": "Call this after the app has been configured. It will go through all the\n upload sets, get their configuration, and store the configuration on the\n app. It will also register the uploads module if it hasn't been set. This\n can be called multiple times with different upload sets.\n\n .. versionchanged:: 0.1.3\n The uploads module/blueprint will only be registered if it is needed\n to serve the upload sets.\n\n :param app: The `~flask.Flask` instance to get the configuration from.\n :param upload_sets: The `UploadSet` instances to configure."}
{"_id": "q_4855", "text": "This gets the current configuration. By default, it looks up the\n current application and gets the configuration from there. But if you\n don't want to go to the full effort of setting an application, or it's\n otherwise outside of a request context, set the `_config` attribute to\n an `UploadConfiguration` instance, then set it back to `None` when\n you're done."}
{"_id": "q_4856", "text": "This function gets the URL a file uploaded to this set would be\n accessed at. It doesn't check whether said file exists.\n\n :param filename: The filename to return the URL for."}
{"_id": "q_4857", "text": "This returns the absolute path of a file uploaded to this set. It\n doesn't actually check whether said file exists.\n\n :param filename: The filename to return the path for.\n :param folder: The subfolder within the upload set previously used\n to save to."}
{"_id": "q_4858", "text": "This determines whether a specific extension is allowed. It is called\n by `file_allowed`, so if you override that but still want to check\n extensions, call back into this.\n\n :param ext: The extension to check, without the dot."}
{"_id": "q_4859", "text": "If a file with the selected name already exists in the target folder,\n this method is called to resolve the conflict. It should return a new\n basename for the file.\n\n The default implementation splits the name and extension and adds a\n suffix to the name consisting of an underscore and a number, and tries\n that until it finds one that doesn't exist.\n\n :param target_folder: The absolute path to the target.\n :param basename: The file's original basename."}
{"_id": "q_4860", "text": "Returns actual version specified in filename."}
{"_id": "q_4861", "text": "Removes duplicate objects.\n\n http://www.peterbe.com/plog/uniqifiers-benchmark."}
{"_id": "q_4862", "text": "Returns count difference in two collections of Python objects."}
{"_id": "q_4863", "text": "Checks memory usage when 'line' event occur."}
{"_id": "q_4864", "text": "Returns processed memory usage."}
{"_id": "q_4865", "text": "Returns memory overhead."}
{"_id": "q_4866", "text": "Returns memory stats for a function."}
{"_id": "q_4867", "text": "Returns module filenames from package.\n\n Args:\n package_path: Path to Python package.\n Returns:\n A set of module filenames."}
{"_id": "q_4868", "text": "Runs function in separate process.\n\n This function is used instead of a decorator, since Python multiprocessing\n module can't serialize decorated function on all platforms."}
{"_id": "q_4869", "text": "Initializes profiler with a module."}
{"_id": "q_4870", "text": "Initializes profiler with a function."}
{"_id": "q_4871", "text": "Replaces sys.argv with proper args to pass to script."}
{"_id": "q_4872", "text": "Samples current stack and adds result in self._stats.\n\n Args:\n signum: Signal that activates handler.\n frame: Frame on top of the stack when signal is handled."}
{"_id": "q_4873", "text": "Returns call tree."}
{"_id": "q_4874", "text": "Runs statistical profiler on a package."}
{"_id": "q_4875", "text": "Runs statistical profiler on a module."}
{"_id": "q_4876", "text": "Processes collected stats for UI."}
{"_id": "q_4877", "text": "Runs cProfile on a module."}
{"_id": "q_4878", "text": "Runs cProfile on a function."}
{"_id": "q_4879", "text": "Initializes DB."}
{"_id": "q_4880", "text": "Returns all existing guestbook records."}
{"_id": "q_4881", "text": "Adds single guestbook record."}
{"_id": "q_4882", "text": "Handles index.html requests."}
{"_id": "q_4883", "text": "Handles HTTP POST requests."}
{"_id": "q_4884", "text": "Sends HTTP response code, message and headers."}
{"_id": "q_4885", "text": "Fills code heatmap and execution count dictionaries."}
{"_id": "q_4886", "text": "Skips lines in src_code specified by skip map."}
{"_id": "q_4887", "text": "Calculates heatmap for package."}
{"_id": "q_4888", "text": "Formats heatmap for UI."}
{"_id": "q_4889", "text": "Runs profilers on run_object.\n\n Args:\n run_object: An object (string or tuple) for profiling.\n prof_config: A string with profilers configuration.\n verbose: True if info about running profilers should be shown.\n Returns:\n An ordered dictionary with collected stats.\n Raises:\n AmbiguousConfigurationError: when prof_config is ambiguous.\n BadOptionError: when unknown options are present in configuration."}
{"_id": "q_4890", "text": "Runs profilers on a function.\n\n Args:\n func: A Python function.\n options: A string with profilers configuration (i.e. 'cmh').\n args: func non-keyword arguments.\n kwargs: func keyword arguments.\n host: Host name to send collected data.\n port: Port number to send collected data.\n\n Returns:\n A result of func execution."}
{"_id": "q_4891", "text": "Get information about a specific template.\n\n :param template_id: The unique id for the template.\n :type template_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4892", "text": "Delete a specific template.\n\n :param template_id: The unique id for the template.\n :type template_id: :py:class:`str`"}
{"_id": "q_4893", "text": "The MD5 hash of the lowercase version of the list member's email.\n Used as subscriber_hash\n\n :param member_email: The member's email address\n :type member_email: :py:class:`str`\n :returns: The MD5 hash in hex\n :rtype: :py:class:`str`"}
{"_id": "q_4894", "text": "Function that verifies that the string passed is a valid url.\n\n Original regex author Diego Perini (http://www.iport.it)\n regex ported to Python by adamrofer (https://github.com/adamrofer)\n Used under MIT license.\n\n :param url:\n :return: Nothing"}
{"_id": "q_4895", "text": "Given two dicts, x and y, merge them into a new dict as a shallow copy.\n\n The result only differs from `x.update(y)` in the way that it handles list\n values when both x and y have list values for the same key. In which case\n the returned dictionary, z, has a value according to:\n z[key] = x[key] + z[key]\n\n :param x: The first dictionary\n :type x: :py:class:`dict`\n :param y: The second dictionary\n :type y: :py:class:`dict`\n :returns: The merged dictionary\n :rtype: :py:class:`dict`"}
{"_id": "q_4896", "text": "Batch subscribe or unsubscribe list members.\n\n Only the members array is required in the request body parameters.\n Within the members array, each member requires an email_address\n and either a status or status_if_new. The update_existing parameter\n will also be considered required to help prevent accidental updates\n to existing members and will default to false if not present.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"members\": array*\n [\n {\n \"email_address\": string*,\n \"status\": string* (Must be one of 'subscribed', 'unsubscribed', 'cleaned', or 'pending'),\n \"status_if_new\": string* (Must be one of 'subscribed', 'unsubscribed', 'cleaned', or 'pending')\n }\n ],\n \"update_existing\": boolean*\n }"}
{"_id": "q_4897", "text": "Add a new line item to an existing order.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param order_id: The id for the order in a store.\n :type order_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"product_id\": string*,\n \"product_variant_id\": string*,\n \"quantity\": integer*,\n \"price\": number*\n }"}
{"_id": "q_4898", "text": "Get links to all other resources available in the API.\n\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4899", "text": "Retrieve OAuth2-based credentials to associate API calls with your\n application.\n\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"client_id\": string*,\n \"client_secret\": string*\n }"}
{"_id": "q_4900", "text": "Get information about a specific authorized application\n\n :param app_id: The unique id for the connected authorized application\n :type app_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4901", "text": "Add new promo rule to a store\n\n :param store_id: The store id\n :type store_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict'\n data = {\n \"id\": string*,\n \"title\": string,\n \"description\": string*,\n \"starts_at\": string,\n \"ends_at\": string,\n \"amount\": number*,\n \"type\": string*,\n \"target\": string*,\n \"enabled\": boolean,\n \"created_at_foreign\": string,\n \"updated_at_foreign\": string,\n }"}
{"_id": "q_4902", "text": "Get information about a specific folder used to organize campaigns.\n\n :param folder_id: The unique id for the campaign folder.\n :type folder_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4903", "text": "Get information about an individual Automation workflow email.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"}
{"_id": "q_4904", "text": "Upload a new image or file to the File Manager.\n\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"name\": string*,\n \"file_data\": string*\n }"}
{"_id": "q_4905", "text": "Get information about a specific file in the File Manager.\n\n :param file_id: The unique id for the File Manager file.\n :type file_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4906", "text": "Update a file in the File Manager.\n\n :param file_id: The unique id for the File Manager file.\n :type file_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"name\": string*,\n \"file_data\": string*\n }"}
{"_id": "q_4907", "text": "Remove a specific file from the File Manager.\n\n :param file_id: The unique id for the File Manager file.\n :type file_id: :py:class:`str`"}
{"_id": "q_4908", "text": "Get information about subscribers who were removed from an Automation\n workflow.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`"}
{"_id": "q_4909", "text": "Create a new webhook for a specific list.\n\n The documentation does not include any required request body\n parameters but the url parameter is being listed here as a required\n parameter in documentation and error-checking based on the description\n of the method\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"url\": string*\n }"}
{"_id": "q_4910", "text": "Get information about a specific webhook.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param webhook_id: The unique id for the webhook.\n :type webhook_id: :py:class:`str`"}
{"_id": "q_4911", "text": "Update the settings for an existing webhook.\n\n :param list_id: The unique id for the list\n :type list_id: :py:class:`str`\n :param webhook_id: The unique id for the webhook\n :type webhook_id: :py:class:`str`"}
{"_id": "q_4912", "text": "Delete a specific webhook in a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param webhook_id: The unique id for the webhook.\n :type webhook_id: :py:class:`str`"}
{"_id": "q_4913", "text": "returns the specified list segment."}
{"_id": "q_4914", "text": "updates an existing list segment."}
{"_id": "q_4915", "text": "removes an existing list segment from the list. This cannot be undone."}
{"_id": "q_4916", "text": "adds a new segment to the list."}
{"_id": "q_4917", "text": "Get the metadata returned after authentication"}
{"_id": "q_4918", "text": "Get details about an individual conversation.\n\n :param conversation_id: The unique id for the conversation.\n :type conversation_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4919", "text": "Get information about members who have unsubscribed from a specific\n campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`\n :param get_all: Should the query get all results\n :type get_all: :py:class:`bool`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []\n queryparams['count'] = integer\n queryparams['offset'] = integer"}
{"_id": "q_4920", "text": "Get information about an Automation email queue.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"}
{"_id": "q_4921", "text": "Get information about a specific subscriber in an Automation email\n queue.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`"}
{"_id": "q_4922", "text": "Pause an RSS-Driven campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"}
{"_id": "q_4923", "text": "Replicate a campaign in saved or send status.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"}
{"_id": "q_4924", "text": "Resume an RSS-Driven campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"}
{"_id": "q_4925", "text": "Send a MailChimp campaign. For RSS Campaigns, the campaign will send\n according to its schedule. All other campaigns will send immediately.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"}
{"_id": "q_4926", "text": "Add a new customer to a store.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"email_address\": string*,\n \"opt_in_status\": boolean*\n }"}
{"_id": "q_4927", "text": "Add or update a product variant.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param product_id: The id for the product of a store.\n :type product_id: :py:class:`str`\n :param variant_id: The id for the product variant.\n :type variant_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"title\": string*\n }"}
{"_id": "q_4928", "text": "Update a specific feedback message for a campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`\n :param feedback_id: The unique id for the feedback message.\n :type feedback_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"message\": string*\n }"}
{"_id": "q_4929", "text": "Get information about a specific merge field in a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param merge_id: The id for the merge field.\n :type merge_id: :py:class:`str`"}
{"_id": "q_4930", "text": "Get information about a specific batch webhook.\n\n :param batch_webhook_id: The unique id for the batch webhook.\n :type batch_webhook_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4931", "text": "Update a webhook that will fire whenever any batch request completes\n processing.\n\n :param batch_webhook_id: The unique id for the batch webhook.\n :type batch_webhook_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"url\": string*\n }"}
{"_id": "q_4932", "text": "Add a new image to the product.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param product_id: The id for the product of a store.\n :type product_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"url\": string*\n }"}
{"_id": "q_4933", "text": "Get information about a specific product image.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param product_id: The id for the product of a store.\n :type product_id: :py:class:`str`\n :param image_id: The id for the product image.\n :type image_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4934", "text": "Post a new message to a conversation.\n\n :param conversation_id: The unique id for the conversation.\n :type conversation_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"from_email\": string*,\n \"read\": boolean*\n }"}
{"_id": "q_4935", "text": "Add a new order to a store.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"customer\": object*\n {\n \"'id\": string*\n },\n \"curency_code\": string*,\n \"order_total\": number*,\n \"lines\": array*\n [\n {\n \"id\": string*,\n \"product_id\": string*,\n \"product_variant_id\": string*,\n \"quantity\": integer*,\n \"price\": number*\n }\n ]\n }"}
{"_id": "q_4936", "text": "Update tags for a specific subscriber.\n\n The documentation lists only the tags request body parameter so it is\n being documented and error-checked as if it were required based on the\n description of the method.\n\n The data list needs to include a \"status\" key. This determines if the\n tag should be added or removed from the user:\n\n data = {\n 'tags': [\n {'name': 'foo', 'status': 'active'},\n {'name': 'bar', 'status': 'inactive'}\n ]\n }\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"tags\": list*\n }"}
{"_id": "q_4937", "text": "Update a specific segment in a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param segment_id: The unique id for the segment.\n :type segment_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"name\": string*\n }"}
{"_id": "q_4938", "text": "Update a specific folder used to organize templates.\n\n :param folder_id: The unique id for the File Manager folder.\n :type folder_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"name\": string*\n }"}
{"_id": "q_4939", "text": "Add a new member to the list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"status\": string*, (Must be one of 'subscribed', 'unsubscribed', 'cleaned',\n 'pending', or 'transactional')\n \"email_address\": string*\n }"}
{"_id": "q_4940", "text": "Update information for a specific list member.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`"}
{"_id": "q_4941", "text": "Add or update a list member.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"email_address\": string*,\n \"status_if_new\": string* (Must be one of 'subscribed',\n 'unsubscribed', 'cleaned', 'pending', or 'transactional')\n }"}
{"_id": "q_4942", "text": "Delete a member from a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`"}
{"_id": "q_4943", "text": "Delete permanently a member from a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`"}
{"_id": "q_4944", "text": "Pause an automated email.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"}
{"_id": "q_4945", "text": "Start an automated email.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"}
{"_id": "q_4946", "text": "Removes an individual Automation workflow email.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"}
{"_id": "q_4947", "text": "Create a new MailChimp campaign.\n\n The ValueError raised by an invalid type in data does not mention\n 'absplit' as a potential value because the documentation indicates\n that the absplit type has been deprecated.\n\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"recipients\": object*\n {\n \"list_id\": string*\n },\n \"settings\": object*\n {\n \"subject_line\": string*,\n \"from_name\": string*,\n \"reply_to\": string*\n },\n \"variate_settings\": object* (Required if type is \"variate\")\n {\n \"winner_criteria\": string* (Must be one of \"opens\", \"clicks\", \"total_revenue\", or \"manual\")\n },\n \"rss_opts\": object* (Required if type is \"rss\")\n {\n \"feed_url\": string*,\n \"frequency\": string* (Must be one of \"daily\", \"weekly\", or \"monthly\")\n },\n \"type\": string* (Must be one of \"regular\", \"plaintext\", \"rss\", \"variate\", or \"absplit\")\n }"}
{"_id": "q_4948", "text": "Update some or all of the settings for a specific campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"settings\": object*\n {\n \"subject_line\": string*,\n \"from_name\": string*,\n \"reply_to\": string*\n },\n }"}
{"_id": "q_4949", "text": "Remove a campaign from your MailChimp account.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"}
{"_id": "q_4950", "text": "Delete a cart.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param cart_id: The id for the cart.\n :type cart_id: :py:class:`str`\n :param line_id: The id for the line item of a cart.\n :type line_id: :py:class:`str`"}
{"_id": "q_4951", "text": "Get a summary of batch requests that have been made.\n\n :param get_all: Should the query get all results\n :type get_all: :py:class:`bool`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []\n queryparams['count'] = integer\n queryparams['offset'] = integer"}
{"_id": "q_4952", "text": "Get the status of a batch request.\n\n :param batch_id: The unique id for the batch operation.\n :type batch_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"}
{"_id": "q_4953", "text": "Policies returned from boto3 are massive, ugly, and difficult to read.\n This method flattens and reformats the policy.\n\n :param policy: Result from invoking describe_load_balancer_policies(...)\n :return: Returns a tuple containing policy_name and the reformatted policy dict."}
{"_id": "q_4954", "text": "Retrieve key from Cache.\n\n :param key: key to look up in cache.\n :type key: ``object``\n\n :param delete_if_expired: remove value from cache if it is expired.\n Default is True.\n :type delete_if_expired: ``bool``\n\n :returns: value from cache or None\n :rtype: varies or None"}
{"_id": "q_4955", "text": "Get access details in cache."}
{"_id": "q_4956", "text": "Gets the VPC Flow Logs for a VPC"}
{"_id": "q_4957", "text": "Gets the Classic Link details about a VPC"}
{"_id": "q_4958", "text": "Gets the VPC Route Tables"}
{"_id": "q_4959", "text": "Private GCP client builder.\n\n :param project: Google Cloud project string.\n :type project: ``str``\n\n :param mod_name: Module name to load. Should be found in sys.path.\n :type mod_name: ``str``\n\n :param pkg_name: package name that mod_name is part of. Default is 'google.cloud' .\n :type pkg_name: ``str``\n\n :param key_file: Default is None.\n :type key_file: ``str`` or None\n\n :param http_auth: httplib2 authorized client. Default is None.\n :type http_auth: :class: `HTTPLib2`\n\n :param user_agent: User Agent string to use in requests. Default is None.\n :type http_auth: ``str`` or None\n\n :return: GCP client\n :rtype: ``object``"}
{"_id": "q_4960", "text": "Google http_auth helper.\n\n If key_file is not specified, default credentials will be used.\n\n If scopes is specified (and key_file), will be used instead of DEFAULT_SCOPES\n\n :param key_file: path to key file to use. Default is None\n :type key_file: ``str``\n\n :param scopes: scopes to set. Default is DEFAUL_SCOPES\n :type scopes: ``list``\n\n :param user_agent: User Agent string to use in requests. Default is None.\n :type http_auth: ``str`` or None\n\n :return: HTTPLib2 authorized client.\n :rtype: :class: `HTTPLib2`"}
{"_id": "q_4961", "text": "Google build client helper.\n\n :param service: service to build client for\n :type service: ``str``\n\n :param api_version: API version to use.\n :type api_version: ``str``\n\n :param http_auth: Initialized HTTP client to use.\n :type http_auth: ``object``\n\n :return: google-python-api client initialized to use 'service'\n :rtype: ``object``"}
{"_id": "q_4962", "text": "Call decorated function for each item in project list.\n\n Note: the function 'decorated' is expected to return a value plus a dictionary of exceptions.\n\n If item in list is a dictionary, we look for a 'project' and 'key_file' entry, respectively.\n If item in list is of type string_types, we assume it is the project string. Default credentials\n will be used by the underlying client library.\n\n :param projects: list of project strings or list of dictionaries\n Example: {'project':..., 'keyfile':...}. Required.\n :type projects: ``list`` of ``str`` or ``list`` of ``dict``\n\n :param key_file: path on disk to keyfile, for use with all projects\n :type key_file: ``str``\n\n :returns: tuple containing a list of function output and an exceptions map\n :rtype: ``tuple of ``list``, ``dict``"}
{"_id": "q_4963", "text": "Helper to get creds out of kwargs."}
{"_id": "q_4964", "text": "Manipulate connection keywords.\n \n Modifieds keywords based on connection type.\n\n There is an assumption here that the client has\n already been created and that these keywords are being\n passed into methods for interacting with various services.\n\n Current modifications:\n - if conn_type is not cloud and module is 'compute', \n then rewrite project as name.\n - if conn_type is cloud and module is 'storage',\n then remove 'project' from dict.\n\n :param conn_type: E.g. 'cloud' or 'general'\n :type conn_type: ``str``\n\n :param kwargs: Dictionary of keywords sent in by user.\n :type kwargs: ``dict``\n\n :param module_name: Name of specific module that will be loaded.\n Default is None.\n :type conn_type: ``str`` or None\n\n :returns kwargs with client and module specific changes\n :rtype: ``dict``"}
{"_id": "q_4965", "text": "General aggregated list function for the GCE service."}
{"_id": "q_4966", "text": "General list function for the GCE service."}
{"_id": "q_4967", "text": "General list function for Google APIs."}
{"_id": "q_4968", "text": "Retrieve detailed cache information."}
{"_id": "q_4969", "text": "Get default User Agent String.\n\n Try to import pkg_name to get an accurate version number.\n \n return: string"}
{"_id": "q_4970", "text": "Rule='string'"}
{"_id": "q_4971", "text": "List objects in bucket.\n\n :param Bucket: name of bucket\n :type Bucket: ``str``\n\n :returns list of objects in bucket\n :rtype: ``list``"}
{"_id": "q_4972", "text": "Calls _modify and either passes the inflection.camelize method or the inflection.underscore method.\n\n :param item: dictionary representing item to be modified\n :param output: string 'camelized' or 'underscored'\n :return:"}
{"_id": "q_4973", "text": "Retrieve the currently active policy version document for every managed policy that is attached to the role."}
{"_id": "q_4974", "text": "Fetch the base IAM Server Certificate."}
{"_id": "q_4975", "text": "Used to obtain a boto3 client or resource connection.\n For cross account, provide both account_number and assume_role.\n\n :usage:\n\n # Same Account:\n client = boto3_cached_conn('iam')\n resource = boto3_cached_conn('iam', service_type='resource')\n\n # Cross Account Client:\n client = boto3_cached_conn('iam', account_number='000000000000', assume_role='role_name')\n\n # Cross Account Resource:\n resource = boto3_cached_conn('iam', service_type='resource', account_number='000000000000', assume_role='role_name')\n\n :param service: AWS service (i.e. 'iam', 'ec2', 'kms')\n :param service_type: 'client' or 'resource'\n :param future_expiration_minutes: Connections will expire from the cache\n when their expiration is within this many minutes of the present time. [Default 15]\n :param account_number: Required if assume_role is provided.\n :param assume_role: Name of the role to assume into for account described by account_number.\n :param session_name: Session name to attach to requests. [Default 'cloudaux']\n :param region: Region name for connection. [Default us-east-1]\n :param return_credentials: Indicates if the STS credentials should be returned with the client [Default False]\n :param external_id: Optional external id to pass to sts:AssumeRole.\n See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html\n :param arn_partition: Optional parameter to specify other aws partitions such as aws-us-gov for aws govcloud\n :return: boto3 client or resource connection"}
{"_id": "q_4976", "text": "just store the AWS formatted rules"}
{"_id": "q_4977", "text": "Get the inline policies for the group."}
{"_id": "q_4978", "text": "Get a list of the managed policy names that are attached to the group."}
{"_id": "q_4979", "text": "Gets a list of the usernames that are a part of this group."}
{"_id": "q_4980", "text": "Fetch the base IAM Group."}
{"_id": "q_4981", "text": "Returns a list of stores in the catalog. If workspaces is specified will only return stores in those workspaces.\n If names is specified, will only return stores that match.\n names can either be a comma delimited string or an array.\n Will return an empty list if no stores are found."}
{"_id": "q_4982", "text": "Returns a single store object.\n Will return None if no store is found.\n Will raise an error if more than one store with the same name is found."}
{"_id": "q_4983", "text": "List granules of an imagemosaic"}
{"_id": "q_4984", "text": "Returns all coverages in a coverage store"}
{"_id": "q_4985", "text": "Publish a featuretype from data in an existing store"}
{"_id": "q_4986", "text": "returns a single resource object.\n Will return None if no resource is found.\n Will raise an error if more than one resource with the same name is found."}
{"_id": "q_4987", "text": "returns a single layergroup object.\n Will return None if no layergroup is found.\n Will raise an error if more than one layergroup with the same name is found."}
{"_id": "q_4988", "text": "returns a single style object.\n Will return None if no style is found.\n Will raise an error if more than one style with the same name is found."}
{"_id": "q_4989", "text": "Returns a list of workspaces in the catalog.\n If names is specified, will only return workspaces that match.\n names can either be a comma delimited string or an array.\n Will return an empty list if no workspaces are found."}
{"_id": "q_4990", "text": "returns a single workspace object.\n Will return None if no workspace is found.\n Will raise an error if more than one workspace with the same name is found."}
{"_id": "q_4991", "text": "Extract a metadata link tuple from an xml node"}
{"_id": "q_4992", "text": "Create a URL from a list of path segments and an optional dict of query\n parameters."}
{"_id": "q_4993", "text": "GeoServer's REST API uses ZIP archives as containers for file formats such\n as Shapefile and WorldImage which include several 'boxcar' files alongside\n the main data. In such archives, GeoServer assumes that all of the relevant\n files will have the same base name and appropriate extensions, and live in\n the root of the ZIP archive. This method produces a zip file that matches\n these expectations, based on a basename, and a dict of extensions to paths or\n file-like objects. The client code is responsible for deleting the zip\n archive when it's done."}
{"_id": "q_4994", "text": "Extract metadata Dimension Info from an xml node"}
{"_id": "q_4995", "text": "Extract metadata Dynamic Default Values from an xml node"}
{"_id": "q_4996", "text": "Change semantic of MOVE to change resource tags."}
{"_id": "q_4997", "text": "Return _VirtualResource object for path.\n\n path is expected to be\n categoryType/category/name/artifact\n for example:\n 'by_tag/cool/My doc 2/info.html'\n\n See DAVProvider.get_resource_inst()"}
{"_id": "q_4998", "text": "Add a provider to the provider_map routing table."}
{"_id": "q_4999", "text": "Get the registered DAVProvider for a given path.\n\n Returns:\n tuple: (share, provider)"}
{"_id": "q_5000", "text": "Computes digest hash.\n\n Calculation of the A1 (HA1) part is delegated to the dc interface method\n `digest_auth_user()`.\n\n Args:\n realm (str):\n user_name (str):\n method (str): WebDAV Request Method\n uri (str):\n nonce (str): server generated nonce value\n cnonce (str): client generated cnonce value\n qop (str): quality of protection\n nc (str) (number), nonce counter incremented by client\n Returns:\n MD5 hash string\n or False if user rejected by domain controller"}
{"_id": "q_5001", "text": "Handle a COPY request natively."}
{"_id": "q_5002", "text": "Read log entries into a list of dictionaries."}
{"_id": "q_5003", "text": "Return a dictionary containing all files under source control.\n\n dirinfos:\n Dictionary containing direct members for every collection.\n {folderpath: (collectionlist, filelist), ...}\n files:\n Sorted list of all file paths in the manifest.\n filedict:\n Dictionary containing all files under source control.\n\n ::\n\n {'dirinfos': {'': (['wsgidav',\n 'tools',\n 'WsgiDAV.egg-info',\n 'tests'],\n ['index.rst',\n 'wsgidav MAKE_DAILY_BUILD.launch',\n 'wsgidav run_server.py DEBUG.launch',\n 'wsgidav-paste.conf',\n ...\n 'setup.py']),\n 'wsgidav': (['addons', 'samples', 'server', 'interfaces'],\n ['__init__.pyc',\n 'dav_error.pyc',\n 'dav_provider.pyc',\n ...\n 'wsgidav_app.py']),\n },\n 'files': ['.hgignore',\n 'ADDONS.txt',\n 'wsgidav/samples/mysql_dav_provider.py',\n ...\n ],\n 'filedict': {'.hgignore': True,\n 'README.txt': True,\n 'WsgiDAV.egg-info/PKG-INFO': True,\n }\n }"}
{"_id": "q_5004", "text": "Return preferred mapping for a resource mapping.\n\n Different URLs may map to the same resource, e.g.:\n '/a/b' == '/A/b' == '/a/b/'\n get_preferred_path() returns the same value for all these variants, e.g.:\n '/a/b/' (assuming resource names considered case insensitive)\n\n @param path: a UTF-8 encoded, unquoted byte string.\n @return: a UTF-8 encoded, unquoted byte string."}
{"_id": "q_5005", "text": "Convert path to a URL that can be passed to XML responses.\n\n Byte string, UTF-8 encoded, quoted.\n\n See http://www.webdav.org/specs/rfc4918.html#rfc.section.8.3\n We are using the path-absolute option. i.e. starting with '/'.\n URI ; See section 3.2.1 of [RFC2068]"}
{"_id": "q_5006", "text": "Remove all associated dead properties."}
{"_id": "q_5007", "text": "Set application location for this resource provider.\n\n @param share_path: a UTF-8 encoded, unquoted byte string."}
{"_id": "q_5008", "text": "Convert a refUrl to a path, by stripping the share prefix.\n\n Used to calculate the <path> from a storage key by inverting get_ref_url()."}
{"_id": "q_5009", "text": "Return True, if path maps to an existing collection resource.\n\n This method should only be used, if no other information is queried\n for <path>. Otherwise a _DAVResource should be created first."}
{"_id": "q_5010", "text": "Convert XML string into etree.Element."}
{"_id": "q_5011", "text": "Wrapper for etree.tostring, that takes care of unsupported pretty_print\n option and prepends an encoding header."}
{"_id": "q_5012", "text": "Serialize etree.Element.\n\n Note: element may contain more than one child or only text (i.e. no child\n at all). Therefore the resulting string may raise an exception, when\n passed back to etree.XML()."}
{"_id": "q_5013", "text": "Convert path to absolute if not None."}
{"_id": "q_5014", "text": "Read configuration file options into a dictionary."}
{"_id": "q_5015", "text": "Run WsgiDAV using paste.httpserver, if Paste is installed.\n\n See http://pythonpaste.org/modules/httpserver.html for more options"}
{"_id": "q_5016", "text": "Run WsgiDAV using gevent if gevent is installed.\n\n See\n https://github.com/gevent/gevent/blob/master/src/gevent/pywsgi.py#L1356\n https://github.com/gevent/gevent/blob/master/src/gevent/server.py#L38\n for more options"}
{"_id": "q_5017", "text": "Run WsgiDAV using cherrypy.wsgiserver if CherryPy is installed."}
{"_id": "q_5018", "text": "Run WsgiDAV using flup.server.fcgi if Flup is installed."}
{"_id": "q_5019", "text": "Run WsgiDAV using ext_wsgiutils_server from the wsgidav package."}
{"_id": "q_5020", "text": "Handle PROPPATCH request to set or remove a property.\n\n @see http://www.webdav.org/specs/rfc4918.html#METHOD_PROPPATCH"}
{"_id": "q_5021", "text": "Handle MKCOL request to create a new collection.\n\n @see http://www.webdav.org/specs/rfc4918.html#METHOD_MKCOL"}
{"_id": "q_5022", "text": "Get the data from a chunked transfer."}
{"_id": "q_5023", "text": "Get the data from a non-chunked transfer."}
{"_id": "q_5024", "text": "Return properties document for path."}
{"_id": "q_5025", "text": "Computes digest hash A1 part."}
{"_id": "q_5026", "text": "Return a lock dictionary for a token.\n\n If the lock does not exist or is expired, None is returned.\n\n token:\n lock token\n Returns:\n Lock dictionary or <None>\n\n Side effect: if lock is expired, it will be purged and None is returned."}
{"_id": "q_5027", "text": "Create a direct lock for a resource path.\n\n path:\n Normalized path (utf8 encoded string, no trailing '/')\n lock:\n lock dictionary, without a token entry\n Returns:\n New unique lock token.: <lock\n\n **Note:** the lock dictionary may be modified on return:\n\n - lock['root'] is ignored and set to the normalized <path>\n - lock['timeout'] may be normalized and shorter than requested\n - lock['token'] is added"}
{"_id": "q_5028", "text": "Delete lock.\n\n Returns True on success. False, if token does not exist, or is expired."}
{"_id": "q_5029", "text": "Delete all entries."}
{"_id": "q_5030", "text": "Return readable rep."}
{"_id": "q_5031", "text": "Acquire lock and return lock_dict.\n\n principal\n Name of the principal.\n lock_type\n Must be 'write'.\n lock_scope\n Must be 'shared' or 'exclusive'.\n lock_depth\n Must be '0' or 'infinity'.\n lock_owner\n String identifying the owner.\n path\n Resource URL.\n timeout\n Seconds to live\n\n This function does NOT check, if the new lock creates a conflict!"}
{"_id": "q_5032", "text": "Check for permissions and acquire a lock.\n\n On success return new lock dictionary.\n On error raise a DAVError with an embedded DAVErrorCondition."}
{"_id": "q_5033", "text": "Set new timeout for lock, if existing and valid."}
{"_id": "q_5034", "text": "Return lock_dict, or None, if not found or invalid.\n\n Side effect: if lock is expired, it will be purged and None is returned.\n\n key:\n name of lock attribute that will be returned instead of a dictionary."}
{"_id": "q_5035", "text": "Acquire a read lock for the current thread, waiting at most\n timeout seconds or doing a non-blocking check in case timeout is <= 0.\n\n In case timeout is None, the call to acquire_read blocks until the\n lock request can be serviced.\n\n In case the timeout expires before the lock could be serviced, a\n RuntimeError is thrown."}
{"_id": "q_5036", "text": "Acquire a write lock for the current thread, waiting at most\n timeout seconds or doing a non-blocking check in case timeout is <= 0.\n\n In case the write lock cannot be serviced due to the deadlock\n condition mentioned above, a ValueError is raised.\n\n In case timeout is None, the call to acquire_write blocks until the\n lock request can be serviced.\n\n In case the timeout expires before the lock could be serviced, a\n RuntimeError is thrown."}
{"_id": "q_5037", "text": "Release the currently held lock.\n\n In case the current thread holds no lock, a ValueError is thrown."}
{"_id": "q_5038", "text": "Initialize base logger named 'wsgidav'.\n\n The base logger is filtered by the `verbose` configuration option.\n Log entries will have a time stamp and thread id.\n\n :Parameters:\n verbose : int\n Verbosity configuration (0..5)\n enable_loggers : string list\n List of module logger names, that will be switched to DEBUG level.\n\n Module loggers\n ~~~~~~~~~~~~~~\n Module loggers (e.g 'wsgidav.lock_manager') are named loggers, that can be\n independently switched to DEBUG mode.\n\n Except for verbosity, they will inherit settings from the base logger.\n\n They will suppress DEBUG level messages, unless they are enabled by passing\n their name to util.init_logging().\n\n If enabled, module loggers will print DEBUG messages, even if verbose == 3.\n\n Example initialize and use a module logger, that will generate output,\n if enabled (and verbose >= 2)::\n\n _logger = util.get_module_logger(__name__)\n [..]\n _logger.debug(\"foo: '{}'\".format(s))\n\n This logger would be enabled by passing its name to init_logging()::\n\n enable_loggers = [\"lock_manager\",\n \"property_manager\",\n ]\n util.init_logging(2, enable_loggers)\n\n\n Log Level Matrix\n ~~~~~~~~~~~~~~~~\n\n +---------+--------+---------------------------------------------------------------+\n | Verbose | Option | Log level |\n | level | +-------------+------------------------+------------------------+\n | | | base logger | module logger(default) | module logger(enabled) |\n +=========+========+=============+========================+========================+\n | 0 | -qqq | CRITICAL | CRITICAL | CRITICAL |\n +---------+--------+-------------+------------------------+------------------------+\n | 1 | -qq | ERROR | ERROR | ERROR |\n +---------+--------+-------------+------------------------+------------------------+\n | 2 | -q | WARN | WARN | WARN |\n +---------+--------+-------------+------------------------+------------------------+\n | 3 | | INFO | INFO | **DEBUG** |\n 
+---------+--------+-------------+------------------------+------------------------+\n | 4 | -v | DEBUG | DEBUG | DEBUG |\n +---------+--------+-------------+------------------------+------------------------+\n | 5 | -vv | DEBUG | DEBUG | DEBUG |\n +---------+--------+-------------+------------------------+------------------------+"}
{"_id": "q_5039", "text": "Read 1 byte from wsgi.input, if this has not been done yet.\n\n Returning a response without reading from a request body might confuse the\n WebDAV client.\n This may happen, if an exception like '401 Not authorized', or\n '500 Internal error' was raised BEFORE anything was read from the request\n stream.\n\n See GC issue 13, issue 23\n See http://groups.google.com/group/paste-users/browse_frm/thread/fc0c9476047e9a47?hl=en\n\n Note that with persistent sessions (HTTP/1.1) we must make sure, that the\n 'Connection: closed' header is set with the response, to prevent reusing\n the current stream."}
{"_id": "q_5040", "text": "Append segments to URI.\n\n Example: join_uri(\"/a/b\", \"c\", \"d\")"}
{"_id": "q_5041", "text": "Return True, if childUri is a child of parentUri.\n\n This function accounts for the fact that '/a/b/c' and 'a/b/c/' are\n children of '/a/b' (and also of '/a/b/').\n Note that '/a/b/cd' is NOT a child of 'a/b/c'."}
{"_id": "q_5042", "text": "Read request body XML into an etree.Element.\n\n Return None, if no request body was sent.\n Raise HTTP_BAD_REQUEST, if something else went wrong.\n\n TODO: this is a very relaxed interpretation: should we raise HTTP_BAD_REQUEST\n instead, if CONTENT_LENGTH is missing, invalid, or 0?\n\n RFC: For compatibility with HTTP/1.0 applications, HTTP/1.1 requests containing\n a message-body MUST include a valid Content-Length header field unless the\n server is known to be HTTP/1.1 compliant.\n If a request contains a message-body and a Content-Length is not given, the\n server SHOULD respond with 400 (bad request) if it cannot determine the\n length of the message, or with 411 (length required) if it wishes to insist\n on receiving a valid Content-Length.\"\n\n So I'd say, we should accept a missing CONTENT_LENGTH, and try to read the\n content anyway.\n But WSGI doesn't guarantee to support input.read() without length(?).\n At least it locked, when I tried it with a request that had a missing\n content-type and no body.\n\n Current approach: if CONTENT_LENGTH is\n\n - valid and >0:\n read body (exactly <CONTENT_LENGTH> bytes) and parse the result.\n - 0:\n Assume empty body and return None or raise exception.\n - invalid (negative or not a number:\n raise HTTP_BAD_REQUEST\n - missing:\n NOT: Try to read body until end and parse the result.\n BUT: assume '0'\n - empty string:\n WSGI allows it to be empty or absent: treated like 'missing'."}
{"_id": "q_5043", "text": "Start a WSGI response for a DAVError or status code."}
{"_id": "q_5044", "text": "Return base64 encoded binarystring."}
{"_id": "q_5045", "text": "Use the mimetypes module to lookup the type for an extension.\n\n This function also adds some extensions required for HTML5"}
{"_id": "q_5046", "text": "Return probability estimates for the RDD containing test vector X.\n\n Parameters\n ----------\n X : RDD containing array-like items, shape = [m_samples, n_features]\n\n Returns\n -------\n C : RDD with array-like items , shape = [n_samples, n_classes]\n Returns the probability of the samples for each class in\n the models for each RDD block. The columns correspond to the classes\n in sorted order, as they appear in the attribute `classes_`."}
{"_id": "q_5047", "text": "Return log-probability estimates for the RDD containing the\n test vector X.\n\n Parameters\n ----------\n X : RDD containing array-like items, shape = [m_samples, n_features]\n\n Returns\n -------\n C : RDD with array-like items, shape = [n_samples, n_classes]\n Returns the log-probability of the samples for each class in\n the model for each RDD block. The columns correspond to the classes\n in sorted order, as they appear in the attribute `classes_`."}
{"_id": "q_5048", "text": "Fit Gaussian Naive Bayes according to X, y\n\n Parameters\n ----------\n X : array-like, shape (n_samples, n_features)\n Training vectors, where n_samples is the number of samples\n and n_features is the number of features.\n\n y : array-like, shape (n_samples,)\n Target values.\n\n Returns\n -------\n self : object\n Returns self."}
{"_id": "q_5049", "text": "Sort features by name\n\n Returns a reordered matrix and modifies the vocabulary in place"}
{"_id": "q_5050", "text": "Learn the vocabulary dictionary and return term-document matrix.\n\n This is equivalent to fit followed by transform, but more efficiently\n implemented.\n\n Parameters\n ----------\n Z : iterable or DictRDD with column 'X'\n An iterable of raw_documents which yields either str, unicode or\n file objects; or a DictRDD with column 'X' containing such\n iterables.\n\n Returns\n -------\n X : array, [n_samples, n_features] or DictRDD\n Document-term matrix."}
{"_id": "q_5051", "text": "Transform documents to document-term matrix.\n\n Extract token counts out of raw text documents using the vocabulary\n fitted with fit or the one provided to the constructor.\n\n Parameters\n ----------\n raw_documents : iterable\n An iterable which yields either str, unicode or file objects.\n\n Returns\n -------\n X : sparse matrix, [n_samples, n_features]\n Document-term matrix."}
{"_id": "q_5052", "text": "Wraps a Scikit-learn Linear model's fit method to use with RDD\n input.\n\n Parameters\n ----------\n cls : class object\n The sklearn linear model's class to wrap.\n Z : TupleRDD or DictRDD\n The distributed train data in a DictRDD.\n\n Returns\n -------\n self: the wrapped class"}
{"_id": "q_5053", "text": "Wraps a Scikit-learn Linear model's predict method to use with RDD\n input.\n\n Parameters\n ----------\n cls : class object\n The sklearn linear model's class to wrap.\n Z : ArrayRDD\n The distributed data to predict in a DictRDD.\n\n Returns\n -------\n self: the wrapped class"}
{"_id": "q_5054", "text": "Fit linear model.\n\n Parameters\n ----------\n Z : DictRDD with (X, y) values\n X containing numpy array or sparse matrix - The training data\n y containing the target values\n\n Returns\n -------\n self : returns an instance of self."}
{"_id": "q_5055", "text": "Fit all the transforms one after the other and transform the\n data, then fit the transformed data using the final estimator.\n\n Parameters\n ----------\n Z : ArrayRDD, TupleRDD or DictRDD\n Input data in blocked distributed format.\n\n Returns\n -------\n self : SparkPipeline"}
{"_id": "q_5056", "text": "Applies transforms to the data, and the score method of the\n final estimator. Valid only if the final estimator implements\n score."}
{"_id": "q_5057", "text": "Actual fitting, performing the search over parameters."}
{"_id": "q_5058", "text": "Compute the score of an estimator on a given test set."}
{"_id": "q_5059", "text": "Predict the closest cluster each sample in X belongs to.\n\n In the vector quantization literature, `cluster_centers_` is called\n the code book and each value returned by `predict` is the index of\n the closest code in the code book.\n\n Parameters\n ----------\n X : ArrayRDD containing array-like, sparse matrix\n New data to predict.\n\n Returns\n -------\n labels : ArrayRDD with predictions\n Index of the cluster each sample belongs to."}
{"_id": "q_5060", "text": "Distributed method to predict class labels for samples in X.\n\n Parameters\n ----------\n X : ArrayRDD containing {array-like, sparse matrix}\n Samples.\n\n Returns\n -------\n C : ArrayRDD\n Predicted class label per sample."}
{"_id": "q_5061", "text": "Learn a list of feature name -> indices mappings.\n\n Parameters\n ----------\n Z : DictRDD with column 'X'\n Dict(s) or Mapping(s) from feature names (arbitrary Python\n objects) to feature values (strings or convertible to dtype).\n\n Returns\n -------\n self"}
{"_id": "q_5062", "text": "Fit LSI model to X and perform dimensionality reduction on X.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}, shape (n_samples, n_features)\n Training data.\n\n Returns\n -------\n X_new : array, shape (n_samples, n_components)\n Reduced version of X. This will always be a dense array."}
{"_id": "q_5063", "text": "Pack rdd with a specific collection constructor."}
{"_id": "q_5064", "text": "Pack rdd of tuples as tuples of arrays or scipy.sparse matrices."}
{"_id": "q_5065", "text": "Block an RDD\n\n Parameters\n ----------\n\n rdd : RDD\n RDD of data points to block into either numpy arrays,\n scipy sparse matrices, or pandas data frames.\n Type of data point will be automatically inferred\n and blocked accordingly.\n\n bsize : int, optional, default None\n Size of each block (number of elements), if None all data points\n from each partition will be combined in a block.\n\n Returns\n -------\n\n rdd : ArrayRDD or TupleRDD or DictRDD\n The transformed rdd with added functionality"}
{"_id": "q_5066", "text": "Returns the shape of the data."}
{"_id": "q_5067", "text": "Returns the data as numpy.array from each partition."}
{"_id": "q_5068", "text": "Execute a transformation on a column or columns. Returns the modified\n DictRDD.\n\n Parameters\n ----------\n f : function\n The function to execute on the columns.\n column : {str, list or None}\n The column(s) to transform. If None is specified the method is\n equivalent to map.\n dtype : {str, list or None}\n The dtype of the column(s) to transform.\n\n Returns\n -------\n result : DictRDD\n DictRDD with transformed column(s).\n\n TODO: optimize"}
{"_id": "q_5069", "text": "Remove objects from the group.\n\n Parameters\n ----------\n to_remove : list\n A list of cobra objects to remove from the group"}
{"_id": "q_5070", "text": "Perform geometric FBA to obtain a unique, centered flux distribution.\n\n Geometric FBA [1]_ formulates the problem as a polyhedron and\n then solves it by bounding the convex hull of the polyhedron.\n The bounding forms a box around the convex hull which reduces\n with every iteration and extracts a unique solution in this way.\n\n Parameters\n ----------\n model: cobra.Model\n The model to perform geometric FBA on.\n epsilon: float, optional\n The convergence tolerance of the model (default 1E-06).\n max_tries: int, optional\n Maximum number of iterations (default 200).\n processes : int, optional\n The number of parallel processes to run. If not explicitly passed,\n will be set from the global configuration singleton.\n\n Returns\n -------\n cobra.Solution\n The solution object containing all the constraints required\n for geometric FBA.\n\n References\n ----------\n .. [1] Smallbone, Kieran & Simeonidis, Vangelis. (2009).\n Flux balance analysis: A geometric perspective.\n Journal of theoretical biology.258. 311-5.\n 10.1016/j.jtbi.2009.01.027."}
{"_id": "q_5071", "text": "Query the list\n\n Parameters\n ----------\n search_function : a string, regular expression or function\n Used to find the matching elements in the list.\n - a regular expression (possibly compiled), in which case the\n given attribute of the object should match the regular expression.\n - a function which takes one argument and returns True for\n desired values\n\n attribute : string or None\n the name attribute of the object to passed as argument to the\n `search_function`. If this is None, the object itself is used.\n\n Returns\n -------\n DictList\n a new list of objects which match the query\n\n Examples\n --------\n >>> import cobra.test\n >>> model = cobra.test.create_test_model('textbook')\n >>> model.reactions.query(lambda x: x.boundary)\n >>> import re\n >>> regex = re.compile('^g', flags=re.IGNORECASE)\n >>> model.metabolites.query(regex, attribute='name')"}
{"_id": "q_5072", "text": "adds elements with id's not already in the model"}
{"_id": "q_5073", "text": "extend list by appending elements from the iterable"}
{"_id": "q_5074", "text": "extends without checking for uniqueness\n\n This function should only be used internally by DictList when it\n can guarantee elements are already unique (as in when coming from\n self or other DictList). It will be faster because it skips these\n checks."}
{"_id": "q_5075", "text": "Determine the position in the list\n\n id: A string or a :class:`~cobra.core.Object.Object`"}
{"_id": "q_5076", "text": "insert object before index"}
{"_id": "q_5077", "text": "The shadow price in the most recent solution.\n\n Shadow price is the dual value of the corresponding constraint in the\n model.\n\n Warnings\n --------\n * Accessing shadow prices through a `Solution` object is the safer,\n preferred, and only guaranteed to be correct way. You can see how to\n do so easily in the examples.\n * Shadow price is retrieved from the currently defined\n `self._model.solver`. The solver status is checked but there are no\n guarantees that the current solver state is the one you are looking\n for.\n * If you modify the underlying model after an optimization, you will\n retrieve the old optimization values.\n\n Raises\n ------\n RuntimeError\n If the underlying model was never optimized beforehand or the\n metabolite is not part of a model.\n OptimizationError\n If the solver status is anything other than 'optimal'.\n\n Examples\n --------\n >>> import cobra\n >>> import cobra.test\n >>> model = cobra.test.create_test_model(\"textbook\")\n >>> solution = model.optimize()\n >>> model.metabolites.glc__D_e.shadow_price\n -0.09166474637510488\n >>> solution.shadow_prices.glc__D_e\n -0.091664746375104883"}
{"_id": "q_5078", "text": "Load a cobra model from a file in YAML format.\n\n Parameters\n ----------\n filename : str or file-like\n File path or descriptor that contains the YAML document describing the\n cobra model.\n\n Returns\n -------\n cobra.Model\n The cobra model as represented in the YAML document.\n\n See Also\n --------\n from_yaml : Load from a string."}
{"_id": "q_5079", "text": "Some common methods for processing a database of flux information into\n print-ready formats. Used in both model_summary and metabolite_summary."}
{"_id": "q_5080", "text": "Coefficient for the reactions in a linear objective.\n\n Parameters\n ----------\n model : cobra model\n the model object that defined the objective\n reactions : list\n an optional list for the reactions to get the coefficients for. All\n reactions if left missing.\n\n Returns\n -------\n dict\n A dictionary where the key is the reaction object and the value is\n the corresponding coefficient. Empty dictionary if there are no\n linear terms in the objective."}
{"_id": "q_5081", "text": "Check whether a sympy expression references the correct variables.\n\n Parameters\n ----------\n model : cobra.Model\n The model in which to check for variables.\n expression : sympy.Basic\n A sympy expression.\n\n Returns\n -------\n boolean\n True if all referenced variables are contained in model, False\n otherwise."}
{"_id": "q_5082", "text": "Set the model objective.\n\n Parameters\n ----------\n model : cobra model\n The model to set the objective for\n value : model.problem.Objective,\n e.g. optlang.glpk_interface.Objective, sympy.Basic or dict\n\n If the model objective is linear, the value can be a new Objective\n object or a dictionary with linear coefficients where each key is a\n reaction and the element the new coefficient (float).\n\n If the objective is not linear and `additive` is true, only values\n of class Objective.\n\n additive : bool\n If true, add the terms to the current objective, otherwise start with\n an empty objective."}
{"_id": "q_5083", "text": "Give a string representation for an optlang interface.\n\n Parameters\n ----------\n interface : string, ModuleType\n Full name of the interface in optlang or cobra representation.\n For instance 'optlang.glpk_interface' or 'optlang-glpk'.\n\n Returns\n -------\n string\n The name of the interface as a string"}
{"_id": "q_5084", "text": "Choose a solver given a solver name and model.\n\n This will choose a solver compatible with the model and required\n capabilities. Also respects model.solver where it can.\n\n Parameters\n ----------\n model : a cobra model\n The model for which to choose the solver.\n solver : str, optional\n The name of the solver to be used.\n qp : boolean, optional\n Whether the solver needs Quadratic Programming capabilities.\n\n Returns\n -------\n solver : an optlang solver interface\n Returns a valid solver for the problem.\n\n Raises\n ------\n SolverNotFound\n If no suitable solver could be found."}
{"_id": "q_5085", "text": "Add variables and constraints to a Model's solver object.\n\n Useful for variables and constraints that can not be expressed with\n reactions and lower/upper bounds. Will integrate with the Model's context\n manager in order to revert changes upon leaving the context.\n\n Parameters\n ----------\n model : a cobra model\n The model to which to add the variables and constraints.\n what : list or tuple of optlang variables or constraints.\n The variables or constraints to add to the model. Must be of class\n `model.problem.Variable` or\n `model.problem.Constraint`.\n **kwargs : keyword arguments\n passed to solver.add()"}
{"_id": "q_5086", "text": "Remove variables and constraints from a Model's solver object.\n\n Useful to temporarily remove variables and constraints from a Models's\n solver object.\n\n Parameters\n ----------\n model : a cobra model\n The model from which to remove the variables and constraints.\n what : list or tuple of optlang variables or constraints.\n The variables or constraints to remove from the model. Must be of\n class `model.problem.Variable` or\n `model.problem.Constraint`."}
{"_id": "q_5087", "text": "Fix current objective as an additional constraint.\n\n When adding constraints to a model, such as done in pFBA which\n minimizes total flux, these constraints can become too powerful,\n resulting in solutions that satisfy optimality but sacrifices too\n much for the original objective function. To avoid that, we can fix\n the current objective value as a constraint to ignore solutions that\n give a lower (or higher depending on the optimization direction)\n objective value than the original model.\n\n When done with the model as a context, the modification to the\n objective will be reverted when exiting that context.\n\n Parameters\n ----------\n model : cobra.Model\n The model to operate on\n fraction : float\n The fraction of the optimum the objective is allowed to reach.\n bound : float, None\n The bound to use instead of fraction of maximum optimal value. If\n not None, fraction is ignored.\n name : str\n Name of the objective. May contain one `{}` placeholder which is filled\n with the name of the old objective.\n\n Returns\n -------\n The value of the optimized objective * fraction"}
{"_id": "q_5088", "text": "Perform standard checks on a solver's status."}
{"_id": "q_5089", "text": "Add a new objective and variables to ensure a feasible solution.\n\n The optimized objective will be zero for a feasible solution and otherwise\n represent the distance from feasibility (please see [1]_ for more\n information).\n\n Parameters\n ----------\n model : cobra.Model\n The model whose feasibility is to be tested.\n\n References\n ----------\n .. [1] Gomez, Jose A., Kai H\u00f6ffner, and Paul I. Barton.\n \u201cDFBAlab: A Fast and Reliable MATLAB Code for Dynamic Flux Balance\n Analysis.\u201d BMC Bioinformatics 15, no. 1 (December 18, 2014): 409.\n https://doi.org/10.1186/s12859-014-0409-8."}
{"_id": "q_5090", "text": "Successively optimize separate targets in a specific order.\n\n For each objective, optimize the model and set the optimal value as a\n constraint. Proceed in the order of the objectives given. Due to the\n specific order this is called lexicographic FBA [1]_. This\n procedure is useful for returning unique solutions for a set of important\n fluxes. Typically this is applied to exchange fluxes.\n\n Parameters\n ----------\n model : cobra.Model\n The model to be optimized.\n objectives : list\n A list of reactions (or objectives) in the model for which unique\n fluxes are to be determined.\n objective_direction : str or list, optional\n The desired objective direction for each reaction (if a list) or the\n objective direction to use for all reactions (default maximize).\n\n Returns\n -------\n optimized_fluxes : pandas.Series\n A vector containing the optimized fluxes for each of the given\n reactions in `objectives`.\n\n References\n ----------\n .. [1] Gomez, Jose A., Kai H\u00f6ffner, and Paul I. Barton.\n \u201cDFBAlab: A Fast and Reliable MATLAB Code for Dynamic Flux Balance\n Analysis.\u201d BMC Bioinformatics 15, no. 1 (December 18, 2014): 409.\n https://doi.org/10.1186/s12859-014-0409-8."}
{"_id": "q_5091", "text": "Create a new numpy array that resides in shared memory.\n\n Parameters\n ----------\n shape : tuple of ints\n The shape of the new array.\n data : numpy.array\n Data to copy to the new array. Has to have the same shape.\n integer : boolean\n Whether to use an integer array. Defaults to False which means\n float array."}
{"_id": "q_5092", "text": "Reproject a point into the feasibility region.\n\n This function is guaranteed to return a new feasible point. However,\n no guarantees in terms of proximity to the original point can be made.\n\n Parameters\n ----------\n p : numpy.array\n The current sample point.\n\n Returns\n -------\n numpy.array\n A new feasible point. If `p` was feasible it will return p."}
{"_id": "q_5093", "text": "Find an approximately random point in the flux cone."}
{"_id": "q_5094", "text": "Identify redundant rows in a matrix that can be removed."}
{"_id": "q_5095", "text": "Get the lower and upper bound distances. Negative is bad."}
{"_id": "q_5096", "text": "Create a batch generator.\n\n This is useful to generate n batches of m samples each.\n\n Parameters\n ----------\n batch_size : int\n The number of samples contained in each batch (m).\n batch_num : int\n The number of batches in the generator (n).\n fluxes : boolean\n Whether to return fluxes or the internal solver variables. If set\n to False will return a variable for each forward and backward flux\n as well as all additional variables you might have defined in the\n model.\n\n Yields\n ------\n pandas.DataFrame\n A DataFrame with dimensions (batch_size x n_r) containing\n a valid flux sample for a total of n_r reactions (or variables if\n fluxes=False) in each row."}
{"_id": "q_5097", "text": "Validate a set of samples for equality and inequality feasibility.\n\n Can be used to check whether the generated samples and warmup points\n are feasible.\n\n Parameters\n ----------\n samples : numpy.matrix\n Must be of dimension (n_samples x n_reactions). Contains the\n samples to be validated. Samples must be from fluxes.\n\n Returns\n -------\n numpy.array\n A one-dimensional numpy array of length n_samples containing\n a code of 1 to 3 letters denoting the validation result:\n\n - 'v' means feasible in bounds and equality constraints\n - 'l' means a lower bound violation\n - 'u' means an upper bound violation\n - 'e' means an equality constraint violation"}
{"_id": "q_5098", "text": "Remove metabolites that are not involved in any reactions and\n returns pruned model\n\n Parameters\n ----------\n cobra_model: class:`~cobra.core.Model.Model` object\n the model to remove unused metabolites from\n\n Returns\n -------\n output_model: class:`~cobra.core.Model.Model` object\n input model with unused metabolites removed\n inactive_metabolites: list of class:`~cobra.core.Metabolite.Metabolite`\n list of metabolites that were removed"}
{"_id": "q_5099", "text": "Remove reactions with no assigned metabolites, returns pruned model\n\n Parameters\n ----------\n cobra_model: class:`~cobra.core.Model.Model` object\n the model to remove unused reactions from\n\n Returns\n -------\n output_model: class:`~cobra.core.Model.Model` object\n input model with unused reactions removed\n reactions_to_prune: list of class:`~cobra.core.reaction.Reaction`\n list of reactions that were removed"}
{"_id": "q_5100", "text": "Undoes the effects of a call to delete_model_genes in place.\n\n cobra_model: A cobra.Model which will be modified in place"}
{"_id": "q_5101", "text": "identify reactions which will be disabled when the genes are knocked out\n\n cobra_model: :class:`~cobra.core.Model.Model`\n\n gene_list: iterable of :class:`~cobra.core.Gene.Gene`\n\n compiled_gene_reaction_rules: dict of {reaction_id: compiled_string}\n If provided, this gives pre-compiled gene_reaction_rule strings.\n The compiled rule strings can be evaluated much faster. If a rule\n is not provided, the regular expression evaluation will be used.\n Because not all gene_reaction_rule strings can be evaluated, this\n dict must exclude any rules which can not be used with eval."}
{"_id": "q_5102", "text": "Perform gapfilling on a model.\n\n See documentation for the class GapFiller.\n\n Parameters\n ----------\n model : cobra.Model\n The model to perform gap filling on.\n universal : cobra.Model, None\n A universal model with reactions that can be used to complete the\n model. Only gapfill considering demand and exchange reactions if\n left missing.\n lower_bound : float\n The minimally accepted flux for the objective in the filled model.\n penalties : dict, None\n A dictionary with keys being 'universal' (all reactions included in\n the universal model), 'exchange' and 'demand' (all additionally\n added exchange and demand reactions) for the three reaction types.\n Can also have reaction identifiers for reaction specific costs.\n Defaults are 1, 100 and 1 respectively.\n iterations : int\n The number of rounds of gapfilling to perform. For every iteration,\n the penalty for every used reaction increases linearly. This way,\n the algorithm is encouraged to search for alternative solutions\n which may include previously used reactions. I.e., with enough\n iterations pathways including 10 steps will eventually be reported\n even if the shortest pathway is a single reaction.\n exchange_reactions : bool\n Consider adding exchange (uptake) reactions for all metabolites\n in the model.\n demand_reactions : bool\n Consider adding demand reactions for all metabolites.\n\n Returns\n -------\n iterable\n list of lists with on set of reactions that completes the model per\n requested iteration.\n\n Examples\n --------\n >>> import cobra.test as ct\n >>> from cobra import Model\n >>> from cobra.flux_analysis import gapfill\n >>> model = ct.create_test_model(\"salmonella\")\n >>> universal = Model('universal')\n >>> universal.add_reactions(model.reactions.GF6PTA.copy())\n >>> model.remove_reactions([model.reactions.GF6PTA])\n >>> gapfill(model, universal)"}
{"_id": "q_5103", "text": "Update the coefficients for the indicator variables in the objective.\n\n Done incrementally so that second time the function is called,\n active indicators in the current solutions gets higher cost than the\n unused indicators."}
{"_id": "q_5104", "text": "Perform the gapfilling by iteratively solving the model, updating\n the costs and recording the used reactions.\n\n\n Parameters\n ----------\n iterations : int\n The number of rounds of gapfilling to perform. For every\n iteration, the penalty for every used reaction increases\n linearly. This way, the algorithm is encouraged to search for\n alternative solutions which may include previously used\n reactions. I.e., with enough iterations pathways including 10\n steps will eventually be reported even if the shortest pathway\n is a single reaction.\n\n Returns\n -------\n iterable\n A list of lists where each element is a list reactions that were\n used to gapfill the model.\n\n Raises\n ------\n RuntimeError\n If the model fails to be validated (i.e. the original model with\n the proposed reactions added, still cannot get the required flux\n through the objective)."}
{"_id": "q_5105", "text": "Check whether a reaction is an exchange reaction.\n\n Arguments\n ---------\n reaction : cobra.Reaction\n The reaction to check.\n boundary_type : str\n What boundary type to check for. Must be one of\n \"exchange\", \"demand\", or \"sink\".\n external_compartment : str\n The id for the external compartment.\n\n Returns\n -------\n boolean\n Whether the reaction looks like the requested type. Might be based\n on a heuristic."}
{"_id": "q_5106", "text": "Find specific boundary reactions.\n\n Arguments\n ---------\n model : cobra.Model\n A cobra model.\n boundary_type : str\n What boundary type to check for. Must be one of\n \"exchange\", \"demand\", or \"sink\".\n external_compartment : str or None\n The id for the external compartment. If None it will be detected\n automatically.\n\n Returns\n -------\n list of cobra.reaction\n A list of likely boundary reactions of a user defined type."}
{"_id": "q_5107", "text": "Sample a single chain for OptGPSampler.\n\n center and n_samples are updated locally and forgotten afterwards."}
{"_id": "q_5108", "text": "parse gpr into AST\n\n Parameters\n ----------\n str_expr : string\n string with the gene reaction rule to parse\n\n Returns\n -------\n tuple\n elements ast_tree and gene_ids as a set"}
{"_id": "q_5109", "text": "Knockout gene by marking it as non-functional and setting all\n associated reactions bounds to zero.\n\n The change is reverted upon exit if executed within the model as\n context."}
{"_id": "q_5110", "text": "Add constraints and objective representing MOMA.\n\n This adds variables and constraints for the minimization of metabolic\n adjustment (MOMA) to the model.\n\n Parameters\n ----------\n model : cobra.Model\n The model to add MOMA constraints and objective to.\n solution : cobra.Solution, optional\n A previous solution to use as a reference. If no solution is given,\n one will be computed using pFBA.\n linear : bool, optional\n Whether to use the linear MOMA formulation or not (default True).\n\n Notes\n -----\n In the original MOMA [1]_ specification one looks for the flux distribution\n of the deletion (v^d) closest to the fluxes without the deletion (v).\n In math this means:\n\n minimize \\sum_i (v^d_i - v_i)^2\n s.t. Sv^d = 0\n lb_i <= v^d_i <= ub_i\n\n Here, we use a variable transformation v^t := v^d_i - v_i. Substituting\n and using the fact that Sv = 0 gives:\n\n minimize \\sum_i (v^t_i)^2\n s.t. Sv^d = 0\n v^t = v^d_i - v_i\n lb_i <= v^d_i <= ub_i\n\n So basically we just re-center the flux space at the old solution and then\n find the flux distribution closest to the new zero (center). This is the\n same strategy as used in cameo.\n\n In the case of linear MOMA [2]_, we instead minimize \\sum_i abs(v^t_i). The\n linear MOMA is typically significantly faster. Also quadratic MOMA tends\n to give flux distributions in which all fluxes deviate from the reference\n fluxes a little bit whereas linear MOMA tends to give flux distributions\n where the majority of fluxes are the same reference with few fluxes\n deviating a lot (typical effect of L2 norm vs L1 norm).\n\n The former objective function is saved in the optlang solver interface as\n ``\"moma_old_objective\"`` and this can be used to immediately extract the\n value of the former objective after MOMA optimization.\n\n See Also\n --------\n pfba : parsimonious FBA\n\n References\n ----------\n .. [1] Segr\u00e8, Daniel, Dennis Vitkup, and George M. Church. \u201cAnalysis of\n Optimality in Natural and Perturbed Metabolic Networks.\u201d\n Proceedings of the National Academy of Sciences 99, no. 23\n (November 12, 2002): 15112. https://doi.org/10.1073/pnas.232349399.\n .. [2] Becker, Scott A, Adam M Feist, Monica L Mo, Gregory Hannum,\n Bernhard \u00d8 Palsson, and Markus J Herrgard. \u201cQuantitative\n Prediction of Cellular Metabolism with Constraint-Based Models:\n The COBRA Toolbox.\u201d Nature Protocols 2 (March 29, 2007): 727."}
{"_id": "q_5111", "text": "convert possible types to str, float, and bool"}
{"_id": "q_5112", "text": "update new_dict with optional attributes from cobra_object"}
{"_id": "q_5113", "text": "Convert model to a dict.\n\n Parameters\n ----------\n model : cobra.Model\n The model to reformulate as a dict.\n sort : bool, optional\n Whether to sort the metabolites, reactions, and genes or maintain the\n order defined in the model.\n\n Returns\n -------\n OrderedDict\n A dictionary with elements, 'genes', 'compartments', 'id',\n 'metabolites', 'notes' and 'reactions'; where 'metabolites', 'genes'\n and 'reactions' are in turn lists with dictionaries holding all\n attributes to form the corresponding object.\n\n See Also\n --------\n cobra.io.model_from_dict"}
{"_id": "q_5114", "text": "Build a model from a dict.\n\n Models stored in json are first formulated as a dict that can be read to\n cobra model using this function.\n\n Parameters\n ----------\n obj : dict\n A dictionary with elements, 'genes', 'compartments', 'id',\n 'metabolites', 'notes' and 'reactions'; where 'metabolites', 'genes'\n and 'reactions' are in turn lists with dictionaries holding all\n attributes to form the corresponding object.\n\n Returns\n -------\n cobra.core.Model\n The generated model.\n\n See Also\n --------\n cobra.io.model_to_dict"}
{"_id": "q_5115", "text": "extract the compartment from the id string"}
{"_id": "q_5116", "text": "translate an array x into a MATLAB cell array"}
{"_id": "q_5117", "text": "Load a cobra model stored as a .mat file\n\n Parameters\n ----------\n infile_path: str\n path to the file to read\n variable_name: str, optional\n The variable name of the model in the .mat file. If this is not\n specified, then the first MATLAB variable which looks like a COBRA\n model will be used\n inf: value\n The value to use for infinite bounds. Some solvers do not handle\n infinite values so for using those, set this to a high numeric value.\n\n Returns\n -------\n cobra.core.Model.Model:\n The resulting cobra model"}
{"_id": "q_5118", "text": "Save the cobra model as a .mat file.\n\n This .mat file can be used directly in the MATLAB version of COBRA.\n\n Parameters\n ----------\n model : cobra.core.Model.Model object\n The model to save\n file_name : str or file-like object\n The file to save to\n varname : string\n The name of the variable within the workspace"}
{"_id": "q_5119", "text": "Search for a context manager"}
{"_id": "q_5120", "text": "A decorator to simplify the context management of simple object\n attributes. Gets the value of the attribute prior to setting it, and stores\n a function to set the value to the old value in the HistoryManager."}
{"_id": "q_5121", "text": "Get or set the constraints on the model exchanges.\n\n `model.medium` returns a dictionary of the bounds for each of the\n boundary reactions, in the form of `{rxn_id: bound}`, where `bound`\n specifies the absolute value of the bound in direction of metabolite\n creation (i.e., lower_bound for `met <--`, upper_bound for `met -->`)\n\n Parameters\n ----------\n medium: dictionary-like\n The medium to initialize. medium should be a dictionary defining\n `{rxn_id: bound}` pairs."}
{"_id": "q_5122", "text": "Add a boundary reaction for a given metabolite.\n\n There are three different types of pre-defined boundary reactions:\n exchange, demand, and sink reactions.\n An exchange reaction is a reversible, unbalanced reaction that adds\n to or removes an extracellular metabolite from the extracellular\n compartment.\n A demand reaction is an irreversible reaction that consumes an\n intracellular metabolite.\n A sink is similar to an exchange but specifically for intracellular\n metabolites.\n\n If you set the reaction `type` to something else, you must specify the\n desired identifier of the created reaction along with its upper and\n lower bound. The name will be given by the metabolite name and the\n given `type`.\n\n Parameters\n ----------\n metabolite : cobra.Metabolite\n Any given metabolite. The compartment is not checked but you are\n encouraged to stick to the definition of exchanges and sinks.\n type : str, {\"exchange\", \"demand\", \"sink\"}\n Using one of the pre-defined reaction types is easiest. If you\n want to create your own kind of boundary reaction choose\n any other string, e.g., 'my-boundary'.\n reaction_id : str, optional\n The ID of the resulting reaction. This takes precedence over the\n auto-generated identifiers but beware that it might make boundary\n reactions harder to identify afterwards when using `model.boundary`\n or specifically `model.exchanges` etc.\n lb : float, optional\n The lower bound of the resulting reaction.\n ub : float, optional\n The upper bound of the resulting reaction.\n sbo_term : str, optional\n A correct SBO term is set for the available types. If a custom\n type is chosen, a suitable SBO term should also be set.\n\n Returns\n -------\n cobra.Reaction\n The created boundary reaction.\n\n Examples\n --------\n >>> import cobra.test\n >>> model = cobra.test.create_test_model(\"textbook\")\n >>> demand = model.add_boundary(model.metabolites.atp_c, type=\"demand\")\n >>> demand.id\n 'DM_atp_c'\n >>> demand.name\n 'ATP demand'\n >>> demand.bounds\n (0, 1000.0)\n >>> demand.build_reaction_string()\n 'atp_c --> '"}
{"_id": "q_5123", "text": "Add reactions to the model.\n\n Reactions with identifiers identical to a reaction already in the\n model are ignored.\n\n The change is reverted upon exit when using the model as a context.\n\n Parameters\n ----------\n reaction_list : list\n A list of `cobra.Reaction` objects"}
{"_id": "q_5124", "text": "Remove reactions from the model.\n\n The change is reverted upon exit when using the model as a context.\n\n Parameters\n ----------\n reactions : list\n A list with reactions (`cobra.Reaction`), or their id's, to remove\n\n remove_orphans : bool\n Remove orphaned genes and metabolites from the model as well"}
{"_id": "q_5125", "text": "Add groups to the model.\n\n Groups with identifiers identical to a group already in the model are\n ignored.\n\n If any group contains members that are not in the model, these members\n are added to the model as well. Only metabolites, reactions, and genes\n can have groups.\n\n Parameters\n ----------\n group_list : list\n A list of `cobra.Group` objects to add to the model."}
{"_id": "q_5126", "text": "Populate attached solver with constraints and variables that\n model the provided reactions."}
{"_id": "q_5127", "text": "Optimize model without creating a solution object.\n\n Creating a full solution object implies fetching shadow prices and\n flux values for all reactions and metabolites from the solver\n object. This necessarily takes some time and in cases where only one\n or two values are of interest, it is recommended to instead use this\n function which does not create a solution object returning only the\n value of the objective. Note however that the `optimize()` function\n uses efficient means to fetch values so if you need fluxes/shadow\n prices for more than say 4 reactions/metabolites, then the total\n speed increase of `slim_optimize` versus `optimize` is expected to\n be small or even negative depending on how you fetch the values\n after optimization.\n\n Parameters\n ----------\n error_value : float, None\n The value to return if optimization failed due to e.g.\n infeasibility. If None, raise `OptimizationError` if the\n optimization fails.\n message : string\n Error message to use if the model optimization did not succeed.\n\n Returns\n -------\n float\n The objective value."}
{"_id": "q_5128", "text": "Optimize the model using flux balance analysis.\n\n Parameters\n ----------\n objective_sense : {None, 'maximize', 'minimize'}, optional\n Whether fluxes should be maximized or minimized. In case of None,\n the previous direction is used.\n raise_error : bool\n If true, raise an OptimizationError if solver status is not\n optimal.\n\n Notes\n -----\n Only the most commonly used parameters are presented here. Additional\n parameters for cobra.solvers may be available and specified with the\n appropriate keyword argument."}
{"_id": "q_5129", "text": "Update all indexes and pointers in a model\n\n Parameters\n ----------\n rebuild_index : bool\n rebuild the indices kept in reactions, metabolites and genes\n rebuild_relationships : bool\n reset all associations between genes, metabolites, model and\n then re-add them."}
{"_id": "q_5130", "text": "Merge two models to create a model with the reactions from both\n models.\n\n Custom constraints and variables from the right model are also copied\n to the left model; note, however, that constraints and variables are\n assumed to be the same if they have the same name.\n\n right : cobra.Model\n The model to add reactions from\n prefix_existing : string\n Prefix the reaction identifiers in the right model that already exist\n in the left model with this string.\n inplace : bool\n Add reactions from right directly to left model object.\n Otherwise, create a new model leaving the left model untouched.\n When done within the model as context, changes to the models are\n reverted upon exit.\n objective : string\n One of 'left', 'right' or 'sum' for setting the objective of the\n resulting model to that of the corresponding model or the sum of\n both."}
{"_id": "q_5131", "text": "makes all ids SBML compliant"}
{"_id": "q_5132", "text": "renames genes in a model from the rename_dict"}
{"_id": "q_5133", "text": "Return the model as a JSON document.\n\n ``kwargs`` are passed on to ``json.dumps``.\n\n Parameters\n ----------\n model : cobra.Model\n The cobra model to represent.\n sort : bool, optional\n Whether to sort the metabolites, reactions, and genes or maintain the\n order defined in the model.\n\n Returns\n -------\n str\n String representation of the cobra model as a JSON document.\n\n See Also\n --------\n save_json_model : Write directly to a file.\n json.dumps : Base function."}
{"_id": "q_5134", "text": "Load a cobra model from a file in JSON format.\n\n Parameters\n ----------\n filename : str or file-like\n File path or descriptor that contains the JSON document describing the\n cobra model.\n\n Returns\n -------\n cobra.Model\n The cobra model as represented in the JSON document.\n\n See Also\n --------\n from_json : Load from a string."}
{"_id": "q_5135", "text": "Add a mixed-integer version of a minimal medium to the model.\n\n Changes the optimization objective to finding the medium with the least\n components::\n\n minimize size(R) where R part of import_reactions\n\n Arguments\n ---------\n model : cobra.model\n The model to modify."}
{"_id": "q_5136", "text": "Convert a solution to medium.\n\n Arguments\n ---------\n exchanges : list of cobra.reaction\n The exchange reactions to consider.\n tolerance : positive double\n The absolute tolerance for fluxes. Fluxes with an absolute value\n smaller than this number will be ignored.\n exports : bool\n Whether to return export fluxes as well.\n\n Returns\n -------\n pandas.Series\n The \"medium\", meaning all active import fluxes in the solution."}
{"_id": "q_5137", "text": "Find the minimal growth medium for the model.\n\n Finds the minimal growth medium for the model which allows for\n model as well as individual growth. Here, a minimal medium can either\n be the medium requiring the smallest total import flux or the medium\n requiring the least components (ergo ingredients), which will be much\n slower due to being a mixed integer problem (MIP).\n\n Arguments\n ---------\n model : cobra.model\n The model to modify.\n min_objective_value : positive float or array-like object\n The minimum growth rate (objective) that has to be achieved.\n exports : boolean\n Whether to include export fluxes in the returned medium. Defaults to\n False which will only return import fluxes.\n minimize_components : boolean or positive int\n Whether to minimize the number of components instead of the total\n import flux. Might be more intuitive if set to True but may also be\n slow to calculate for large communities. If set to a number `n` will\n return up to `n` alternative solutions all with the same number of\n components.\n open_exchanges : boolean or number\n Whether to ignore currently set bounds and make all exchange reactions\n in the model possible. If set to a number all exchange reactions will\n be opened with (-number, number) as bounds.\n\n Returns\n -------\n pandas.Series, pandas.DataFrame or None\n A series giving the import flux for each required import\n reaction and (optionally) the associated export fluxes. All exchange\n fluxes are oriented into the import reaction e.g. positive fluxes\n denote imports and negative fluxes exports. If `minimize_components`\n is a number larger than 1, may return a DataFrame where each column is\n a minimal medium. Returns None if the minimization is infeasible\n (for instance if min_growth > maximum growth rate).\n\n Notes\n -----\n Due to numerical issues the `minimize_components` option will usually only\n minimize the number of \"large\" import fluxes. Specifically, the detection\n limit is given by ``integrality_tolerance * max_bound`` where ``max_bound``\n is the largest bound on an import reaction. Thus, if you are interested\n in small import fluxes as well you may have to adjust the integrality\n tolerance at first with\n `model.solver.configuration.tolerances.integrality = 1e-7` for instance.\n However, this will be *very* slow for large models especially with GLPK."}
{"_id": "q_5138", "text": "Initialize a global model object for multiprocessing."}
{"_id": "q_5139", "text": "Determine the minimum and maximum possible flux value for each reaction.\n\n Parameters\n ----------\n model : cobra.Model\n The model for which to run the analysis. It will *not* be modified.\n reaction_list : list of cobra.Reaction or str, optional\n The reactions for which to obtain min/max fluxes. If None will use\n all reactions in the model (default).\n loopless : boolean, optional\n Whether to return only loopless solutions. This is significantly\n slower. Please also refer to the notes.\n fraction_of_optimum : float, optional\n Must be <= 1.0. Requires that the objective value is at least the\n fraction times maximum objective value. A value of 0.85 for instance\n means that the objective has to be at least 85% of its\n maximum.\n pfba_factor : float, optional\n Add an additional constraint to the model that requires the total sum\n of absolute fluxes must not be larger than this value times the\n smallest possible sum of absolute fluxes, i.e., by setting the value\n to 1.1 the total sum of absolute fluxes must not be more than\n 10% larger than the pFBA solution. Since the pFBA solution is the\n one that optimally minimizes the total flux sum, the ``pfba_factor``\n should, if set, be larger than one. Setting this value may lead to\n more realistic predictions of the effective flux bounds.\n processes : int, optional\n The number of parallel processes to run. If not explicitly passed,\n will be set from the global configuration singleton.\n\n Returns\n -------\n pandas.DataFrame\n A data frame with reaction identifiers as the index and two columns:\n - maximum: indicating the highest possible flux\n - minimum: indicating the lowest possible flux\n\n Notes\n -----\n This implements the fast version as described in [1]_. Please note that\n the flux distribution containing all minimal/maximal fluxes does not have\n to be a feasible solution for the model. Fluxes are minimized/maximized\n individually and a single minimal flux might require all others to be\n suboptimal.\n\n Using the loopless option will lead to a significant increase in\n computation time (about a factor of 100 for large models). However, the\n algorithm used here (see [2]_) is still more than 1000x faster than the\n \"naive\" version using ``add_loopless(model)``. Also note that if you have\n included constraints that force a loop (for instance by setting all fluxes\n in a loop to be non-zero) this loop will be included in the solution.\n\n References\n ----------\n .. [1] Computationally efficient flux variability analysis.\n Gudmundsson S, Thiele I.\n BMC Bioinformatics. 2010 Sep 29;11:489.\n doi: 10.1186/1471-2105-11-489, PMID: 20920235\n\n .. [2] CycleFreeFlux: efficient removal of thermodynamically infeasible\n loops from flux distributions.\n Desouki AA, Jarre F, Gelius-Dietrich G, Lercher MJ.\n Bioinformatics. 2015 Jul 1;31(13):2159-65.\n doi: 10.1093/bioinformatics/btv096."}
{"_id": "q_5140", "text": "Find reactions that cannot carry any flux.\n\n The question whether or not a reaction is blocked is highly dependent\n on the current exchange reaction settings for a COBRA model. Hence an\n argument is provided to open all exchange reactions.\n\n Notes\n -----\n Sink and demand reactions are left untouched. Please modify them manually.\n\n Parameters\n ----------\n model : cobra.Model\n The model to analyze.\n reaction_list : list, optional\n List of reactions to consider, the default includes all model\n reactions.\n zero_cutoff : float, optional\n Flux value which is considered to effectively be zero\n (default model.tolerance).\n open_exchanges : bool, optional\n Whether or not to open all exchange reactions to very high flux ranges.\n processes : int, optional\n The number of parallel processes to run. Can speed up the computations\n if the number of reactions is large. If not explicitly\n passed, it will be set from the global configuration singleton.\n\n Returns\n -------\n list\n List with the identifiers of blocked reactions."}
{"_id": "q_5141", "text": "Return a set of essential reactions.\n\n A reaction is considered essential if restricting its flux to zero\n causes the objective, e.g., the growth rate, to also be zero, below the\n threshold, or infeasible.\n\n\n Parameters\n ----------\n model : cobra.Model\n The model to find the essential reactions for.\n threshold : float, optional\n Minimal objective flux to be considered viable. By default this is\n 1% of the maximal objective.\n processes : int, optional\n The number of parallel processes to run. Can speed up the computations\n if the number of knockouts to perform is large. If not explicitly\n passed, it will be set from the global configuration singleton.\n\n Returns\n -------\n set\n Set of essential reactions"}
{"_id": "q_5142", "text": "adds SBO terms for demands and exchanges\n\n This works for models which follow the standard convention for\n constructing and naming these reactions.\n\n The reaction should only contain the single metabolite being exchanged,\n and the id should be EX_metid or DM_metid"}
{"_id": "q_5143", "text": "Knock out each gene pair from the combination of two given lists.\n\n We say 'pair' here but the order does not matter.\n\n Parameters\n ----------\n model : cobra.Model\n The metabolic model to perform deletions in.\n gene_list1 : iterable, optional\n First iterable of ``cobra.Gene``s to be deleted. If not passed,\n all the genes from the model are used.\n gene_list2 : iterable, optional\n Second iterable of ``cobra.Gene``s to be deleted. If not passed,\n all the genes from the model are used.\n method: {\"fba\", \"moma\", \"linear moma\", \"room\", \"linear room\"}, optional\n Method used to predict the growth rate.\n solution : cobra.Solution, optional\n A previous solution to use as a reference for (linear) MOMA or ROOM.\n processes : int, optional\n The number of parallel processes to run. Can speed up the computations\n if the number of knockouts to perform is large. If not passed,\n will be set to the number of CPUs found.\n kwargs :\n Keyword arguments are passed on to underlying simulation functions\n such as ``add_room``.\n\n Returns\n -------\n pandas.DataFrame\n A representation of all combinations of gene deletions. The\n columns are 'growth' and 'status', where\n\n index : frozenset([str])\n The gene identifiers that were knocked out.\n growth : float\n The growth rate of the adjusted model.\n status : str\n The solution's status."}
{"_id": "q_5144", "text": "Generate the id of reverse_variable from the reaction's id."}
{"_id": "q_5145", "text": "The flux value in the most recent solution.\n\n Flux is the primal value of the corresponding variable in the model.\n\n Warnings\n --------\n * Accessing reaction fluxes through a `Solution` object is the safer,\n preferred, and only guaranteed to be correct way. You can see how to\n do so easily in the examples.\n * Reaction flux is retrieved from the currently defined\n `self._model.solver`. The solver status is checked but there are no\n guarantees that the current solver state is the one you are looking\n for.\n * If you modify the underlying model after an optimization, you will\n retrieve the old optimization values.\n\n Raises\n ------\n RuntimeError\n If the underlying model was never optimized beforehand or the\n reaction is not part of a model.\n OptimizationError\n If the solver status is anything other than 'optimal'.\n AssertionError\n If the flux value is not within the bounds.\n\n Examples\n --------\n >>> import cobra.test\n >>> model = cobra.test.create_test_model(\"textbook\")\n >>> solution = model.optimize()\n >>> model.reactions.PFK.flux\n 7.477381962160283\n >>> solution.fluxes.PFK\n 7.4773819621602833"}
{"_id": "q_5146", "text": "Display gene_reaction_rule with names instead.\n\n Do NOT use this string for computation. It is intended to give a\n representation of the rule using more familiar gene names instead of\n the often cryptic ids."}
{"_id": "q_5147", "text": "All required enzymes for reaction are functional.\n\n Returns\n -------\n bool\n True if the gene-protein-reaction (GPR) rule is fulfilled for\n this reaction, or if reaction is not associated to a model,\n otherwise False."}
{"_id": "q_5148", "text": "Make sure all metabolites and genes that are associated with\n this reaction are aware of it."}
{"_id": "q_5149", "text": "Copy a reaction\n\n The referenced metabolites and genes are also copied."}
{"_id": "q_5150", "text": "Add metabolites and stoichiometric coefficients to the reaction.\n If the final coefficient for a metabolite is 0 then it is removed\n from the reaction.\n\n The change is reverted upon exit when using the model as a context.\n\n Parameters\n ----------\n metabolites_to_add : dict\n Dictionary with metabolite objects or metabolite identifiers as\n keys and coefficients as values. If keys are strings (name of a\n metabolite) the reaction must already be part of a model and a\n metabolite with the given name must exist in the model.\n\n combine : bool\n Describes the behavior if a metabolite already exists in the reaction.\n True causes the coefficients to be added.\n False causes the coefficient to be replaced.\n\n reversibly : bool\n Whether to add the change to the context to make the change\n reversible or not (primarily intended for internal use)."}
{"_id": "q_5151", "text": "Generate a human readable reaction string"}
{"_id": "q_5152", "text": "Compute mass and charge balance for the reaction\n\n returns a dict of {element: amount} for unbalanced elements.\n \"charge\" is treated as an element in this dict\n This should be empty for balanced reactions."}
{"_id": "q_5153", "text": "Dissociates a cobra.Gene object from a cobra.Reaction.\n\n Parameters\n ----------\n cobra_gene : cobra.core.Gene.Gene"}
{"_id": "q_5154", "text": "Builds reaction from reaction equation reaction_str using parser\n\n Takes a string and using the specifications supplied in the optional\n arguments infers a set of metabolites, metabolite compartments and\n stoichiometries for the reaction. It also infers the reversibility\n of the reaction from the reaction arrow.\n\n Changes to the associated model are reverted upon exit when using\n the model as a context.\n\n Parameters\n ----------\n reaction_str : string\n a string containing a reaction formula (equation)\n verbose: bool\n setting verbosity of function\n fwd_arrow : re.compile\n for forward irreversible reaction arrows\n rev_arrow : re.compile\n for backward irreversible reaction arrows\n reversible_arrow : re.compile\n for reversible reaction arrows\n term_split : string\n dividing individual metabolite entries"}
{"_id": "q_5155", "text": "Reads SBML model from given filename.\n\n If the given filename ends with the suffix \".gz\" (for example,\n \"myfile.xml.gz\"), the file is assumed to be compressed in gzip\n format and will be automatically decompressed upon reading. Similarly,\n if the given filename ends with \".zip\" or \".bz2\", the file is\n assumed to be compressed in zip or bzip2 format (respectively). Files\n whose names lack these suffixes will be read uncompressed. Note that\n if the file is in zip format but the archive contains more than one\n file, only the first file in the archive will be read and the rest\n ignored.\n\n To read a gzip/zip file, libSBML needs to be configured and linked\n with the zlib library at compile time. It also needs to be linked\n with the bzip2 library to read files in bzip2 format. (Both of these\n are the default configurations for libSBML.)\n\n This function supports SBML with FBC-v1 and FBC-v2. FBC-v1 models\n are converted to FBC-v2 models before reading.\n\n The parser tries to fall back to information in notes dictionaries\n if information is not available in the FBC packages, e.g.,\n CHARGE, FORMULA on species, or GENE_ASSOCIATION, SUBSYSTEM on reactions.\n\n Parameters\n ----------\n filename : path to SBML file, or SBML string, or SBML file handle\n SBML which is read into cobra model\n number: data type of stoichiometry: {float, int}\n In which data type should the stoichiometry be parsed.\n f_replace : dict of replacement functions for id replacement\n Dictionary of replacement functions for gene, species, and reaction.\n By default the following id changes are performed on import:\n clip G_ from genes, clip M_ from species, clip R_ from reactions\n If no replacements should be performed, set f_replace={}, None\n set_missing_bounds : boolean flag to set missing bounds\n Missing bounds are set to default bounds in configuration.\n\n Returns\n -------\n cobra.core.Model\n\n Notes\n -----\n Provided file handles cannot be opened in binary mode, i.e., use\n with open(path, \"r\") as f:\n read_sbml_model(f)\n File handles to compressed files are not supported yet."}
{"_id": "q_5156", "text": "Get SBMLDocument from given filename.\n\n Parameters\n ----------\n filename : path to SBML, or SBML string, or filehandle\n\n Returns\n -------\n libsbml.SBMLDocument"}
{"_id": "q_5157", "text": "Writes cobra model to filename.\n\n The created model is SBML level 3 version 1 (L3V1) with\n fbc package v2 (fbc-v2).\n\n If the given filename ends with the suffix \".gz\" (for example,\n \"myfile.xml.gz\"), libSBML assumes the caller wants the file to be\n written compressed in gzip format. Similarly, if the given filename\n ends with \".zip\" or \".bz2\", libSBML assumes the caller wants the\n file to be compressed in zip or bzip2 format (respectively). Files\n whose names lack these suffixes will be written uncompressed. Special\n considerations for the zip format: If the given filename ends with\n \".zip\", the file placed in the zip archive will have the suffix\n \".xml\" or \".sbml\". For example, the file in the zip archive will\n be named \"test.xml\" if the given filename is \"test.xml.zip\" or\n \"test.zip\". Similarly, the filename in the archive will be\n \"test.sbml\" if the given filename is \"test.sbml.zip\".\n\n Parameters\n ----------\n cobra_model : cobra.core.Model\n Model instance which is written to SBML\n filename : string\n path to which the model is written\n use_fbc_package : boolean {True, False}\n should the fbc package be used\n f_replace: dict of replacement functions for id replacement"}
{"_id": "q_5158", "text": "Creates bound in model for given reaction.\n\n Adds the parameters for the bounds to the SBML model.\n\n Parameters\n ----------\n model : libsbml.Model\n SBML model instance\n reaction : cobra.core.Reaction\n Cobra reaction instance from which the bounds are read.\n bound_type : {LOWER_BOUND, UPPER_BOUND}\n Type of bound\n f_replace : dict of id replacement functions\n units : flux units\n\n Returns\n -------\n Id of bound parameter."}
{"_id": "q_5159", "text": "Create parameter in SBML model."}
{"_id": "q_5160", "text": "Checks the libsbml return value and logs error messages.\n\n If 'value' is None, logs an error message constructed using\n 'message' and then exits with status code 1. If 'value' is an integer,\n it assumes it is a libSBML return status code. If the code value is\n LIBSBML_OPERATION_SUCCESS, returns without further action; if it is not,\n prints an error message constructed using 'message' along with text from\n libSBML explaining the meaning of the code, and exits with status code 1."}
{"_id": "q_5161", "text": "Creates dictionary of COBRA notes.\n\n Parameters\n ----------\n sbase : libsbml.SBase\n\n Returns\n -------\n dict of notes"}
{"_id": "q_5162", "text": "Set SBase notes based on dictionary.\n\n Parameters\n ----------\n sbase : libsbml.SBase\n SBML object to set notes on\n notes : notes object\n notes information from cobra object"}
{"_id": "q_5163", "text": "Parses cobra annotations from a given SBase object.\n\n Annotations are dictionaries with the providers as keys.\n\n Parameters\n ----------\n sbase : libsbml.SBase\n SBase from which the SBML annotations are read\n\n Returns\n -------\n dict (annotation dictionary)\n\n FIXME: annotation format must be updated (this is a big collection of\n fixes) - see: https://github.com/opencobra/cobrapy/issues/684)"}
{"_id": "q_5164", "text": "Set SBase annotations based on cobra annotations.\n\n Parameters\n ----------\n sbase : libsbml.SBase\n SBML object to annotate\n annotation : cobra annotation structure\n cobra object with annotation information\n\n FIXME: annotation format must be updated\n (https://github.com/opencobra/cobrapy/issues/684)"}
{"_id": "q_5165", "text": "String representation of SBMLError.\n\n Parameters\n ----------\n error : libsbml.SBMLError\n k : index of error\n\n Returns\n -------\n string representation of error"}
{"_id": "q_5166", "text": "Calculate the objective value conditioned on all combinations of\n fluxes for a set of chosen reactions\n\n The production envelope can be used to analyze a model's ability to\n produce a given compound conditional on the fluxes for another set of\n reactions, such as the uptake rates. The model is alternately optimized\n with respect to minimizing and maximizing the objective and the\n obtained fluxes are recorded. The ranges over which production is\n computed are set to the effective bounds, i.e., the minimum / maximum\n fluxes that can be obtained given current reaction bounds.\n\n Parameters\n ----------\n model : cobra.Model\n The model to compute the production envelope for.\n reactions : list or string\n A list of reactions, reaction identifiers or a single reaction.\n objective : string, dict, model.solver.interface.Objective, optional\n The objective (reaction) to use for the production envelope. Use the\n model's current objective if left missing.\n carbon_sources : list or string, optional\n One or more reactions or reaction identifiers that are the source of\n carbon for computing carbon (mol carbon in output over mol carbon in\n input) and mass yield (gram product over gram output). Only objectives\n with a carbon-containing input and output metabolite are supported.\n Will identify active carbon sources in the medium if none are specified.\n points : int, optional\n The number of points to calculate production for.\n threshold : float, optional\n A cut-off under which flux values will be considered to be zero\n (default model.tolerance).\n\n Returns\n -------\n pandas.DataFrame\n A data frame with one row per evaluated point and\n\n - reaction id : one column per input reaction indicating the flux at\n each given point,\n - carbon_source: identifiers of carbon exchange reactions\n\n One column each for the maximum and minimum of the following types:\n\n - flux: the objective flux\n - carbon_yield: if carbon source is defined and the product is a\n single metabolite (mol carbon product per mol carbon feeding source)\n - mass_yield: if carbon source is defined and the product is a\n single metabolite (gram product per 1 g of feeding source)\n\n Examples\n --------\n >>> import cobra.test\n >>> from cobra.flux_analysis import production_envelope\n >>> model = cobra.test.create_test_model(\"textbook\")\n >>> production_envelope(model, [\"EX_glc__D_e\", \"EX_o2_e\"])"}
{"_id": "q_5167", "text": "Compute total output per input unit.\n\n Units are typically mol carbon atoms or gram of source and product.\n\n Parameters\n ----------\n input_fluxes : list\n A list of input reaction fluxes in the same order as the\n ``input_components``.\n input_elements : list\n A list of reaction components which are in turn list of numbers.\n output_flux : float\n The output flux value.\n output_elements : list\n A list of stoichiometrically weighted output reaction components.\n\n Returns\n -------\n float\n The ratio between output (mol carbon atoms or grams of product) and\n input (mol carbon atoms or grams of source compounds)."}
{"_id": "q_5168", "text": "Split metabolites into the atoms times their stoichiometric coefficients.\n\n Parameters\n ----------\n reaction : Reaction\n The metabolic reaction whose components are desired.\n\n Returns\n -------\n list\n Each of the reaction's metabolites' desired carbon elements (if any)\n times that metabolite's stoichiometric coefficient."}
{"_id": "q_5169", "text": "Return the metabolite weight times its stoichiometric coefficient."}
{"_id": "q_5170", "text": "Find all active carbon source reactions.\n\n Parameters\n ----------\n model : Model\n A genome-scale metabolic model.\n\n Returns\n -------\n list\n The medium reactions with carbon input flux."}
{"_id": "q_5171", "text": "Assesses production capacity.\n\n Assesses the capacity of the model to produce the precursors for the\n reaction and absorb the production of the reaction while the reaction is\n operating at, or above, the specified cutoff.\n\n Parameters\n ----------\n model : cobra.Model\n The cobra model to assess production capacity for\n\n reaction : reaction identifier or cobra.Reaction\n The reaction to assess\n\n flux_coefficient_cutoff : float\n The minimum flux that reaction must carry to be considered active.\n\n solver : basestring\n Solver name. If None, the default solver will be used.\n\n Returns\n -------\n bool or dict\n True if the model can produce the precursors and absorb the products\n for the reaction operating at, or above, flux_coefficient_cutoff.\n Otherwise, a dictionary of {'precursor': Status, 'product': Status}.\n Where Status is the results from assess_precursors and\n assess_products, respectively."}
{"_id": "q_5172", "text": "Assesses the ability of the model to provide sufficient precursors for\n a reaction operating at, or beyond, the specified cutoff.\n\n Deprecated: use assess_component instead\n\n Parameters\n ----------\n model : cobra.Model\n The cobra model to assess production capacity for\n\n reaction : reaction identifier or cobra.Reaction\n The reaction to assess\n\n flux_coefficient_cutoff : float\n The minimum flux that reaction must carry to be considered active.\n\n solver : basestring\n Solver name. If None, the default solver will be used.\n\n Returns\n -------\n bool or dict\n True if the precursors can be simultaneously produced at the\n specified cutoff. False, if the model has the capacity to produce\n each individual precursor at the specified threshold but not all\n precursors at the required level simultaneously. Otherwise a\n dictionary of the required and the produced fluxes for each reactant\n that is not produced in sufficient quantities."}
{"_id": "q_5173", "text": "Modify a model so all feasible flux distributions are loopless.\n\n In most cases you probably want to use the much faster `loopless_solution`.\n May be used in cases where you want to add complex constraints and\n objectives (for instance quadratic objectives) to the model afterwards\n or use an approximation of Gibbs free energy directions in your model.\n Adds variables and constraints to a model which will disallow flux\n distributions with loops. The used formulation is described in [1]_.\n This function *will* modify your model.\n\n Parameters\n ----------\n model : cobra.Model\n The model to which to add the constraints.\n zero_cutoff : positive float, optional\n Cutoff used for null space. Coefficients with an absolute value smaller\n than `zero_cutoff` are considered to be zero (default model.tolerance).\n\n Returns\n -------\n Nothing\n\n References\n ----------\n .. [1] Elimination of thermodynamically infeasible loops in steady-state\n metabolic models. Schellenberger J, Lewis NE, Palsson BO. Biophys J.\n 2011 Feb 2;100(3):544-53. doi: 10.1016/j.bpj.2010.12.3707. Erratum\n in: Biophys J. 2011 Mar 2;100(5):1381."}
{"_id": "q_5174", "text": "Add constraints for CycleFreeFlux."}
{"_id": "q_5175", "text": "Convert an existing solution to a loopless one.\n\n Removes as many loops as possible (see Notes).\n Uses the method from CycleFreeFlux [1]_ and is much faster than\n `add_loopless` and should therefore be the preferred option to get loopless\n flux distributions.\n\n Parameters\n ----------\n model : cobra.Model\n The model to which to add the constraints.\n fluxes : dict\n A dictionary {rxn_id: flux} that assigns a flux to each reaction. If\n not None will use the provided flux values to obtain a close loopless\n solution.\n\n Returns\n -------\n cobra.Solution\n A solution object containing the fluxes with the least amount of\n loops possible or None if the optimization failed (usually happening\n if the flux distribution in `fluxes` is infeasible).\n\n Notes\n -----\n The returned flux solution has the following properties:\n\n - it contains the minimal number of loops possible and no loops at all if\n all flux bounds include zero\n - it has an objective value close to the original one and the same\n objective value if the objective expression can not form a cycle\n (which is usually true since it consumes metabolites)\n - it has the same exact exchange fluxes as the previous solution\n - all fluxes have the same sign (flow in the same direction) as the\n previous solution\n\n References\n ----------\n .. [1] CycleFreeFlux: efficient removal of thermodynamically infeasible\n loops from flux distributions. Desouki AA, Jarre F, Gelius-Dietrich\n G, Lercher MJ. Bioinformatics. 2015 Jul 1;31(13):2159-65. doi:\n 10.1093/bioinformatics/btv096."}
{"_id": "q_5176", "text": "Plugin to get a loopless FVA solution from single FVA iteration.\n\n Assumes the following about `model` and `reaction`:\n 1. the model objective is set to be `reaction`\n 2. the model has been optimized and contains the minimum/maximum flux for\n `reaction`\n 3. the model contains an auxiliary variable called \"fva_old_objective\"\n denoting the previous objective\n\n Parameters\n ----------\n model : cobra.Model\n The model to be used.\n reaction : cobra.Reaction\n The reaction currently minimized/maximized.\n solution : boolean, optional\n Whether to return the entire solution or only the minimum/maximum for\n `reaction`.\n zero_cutoff : positive float, optional\n Cutoff used for loop removal. Fluxes with an absolute value smaller\n than `zero_cutoff` are considered to be zero (default model.tolerance).\n\n Returns\n -------\n single float or dict\n Returns the minimized/maximized flux through `reaction` if\n all_fluxes == False (default). Otherwise returns a loopless flux\n solution containing the minimum/maximum flux for `reaction`."}
{"_id": "q_5177", "text": "Return a stoichiometric array representation of the given model.\n\n The the columns represent the reactions and rows represent\n metabolites. S[i,j] therefore contains the quantity of metabolite `i`\n produced (negative for consumed) by reaction `j`.\n\n Parameters\n ----------\n model : cobra.Model\n The cobra model to construct the matrix for.\n array_type : string\n The type of array to construct. if 'dense', return a standard\n numpy.array, 'dok', or 'lil' will construct a sparse array using\n scipy of the corresponding type and 'DataFrame' will give a\n pandas `DataFrame` with metabolite indices and reaction columns\n dtype : data-type\n The desired data-type for the array. If not given, defaults to float.\n\n Returns\n -------\n matrix of class `dtype`\n The stoichiometric matrix for the given model."}
{"_id": "q_5178", "text": "r\"\"\"\n Add constraints and objective for ROOM.\n\n This function adds variables and constraints for applying regulatory\n on/off minimization (ROOM) to the model.\n\n Parameters\n ----------\n model : cobra.Model\n The model to add ROOM constraints and objective to.\n solution : cobra.Solution, optional\n A previous solution to use as a reference. If no solution is given,\n one will be computed using pFBA.\n linear : bool, optional\n Whether to use the linear ROOM formulation or not (default False).\n delta: float, optional\n The relative tolerance range which is additive in nature\n (default 0.03).\n epsilon: float, optional\n The absolute range of tolerance which is multiplicative\n (default 0.001).\n\n Notes\n -----\n The formulation used here is the same as stated in the original paper [1]_.\n The mathematical expression is given below:\n\n minimize \\sum_{i=1}^m y^i\n s.t. Sv = 0\n v_min <= v <= v_max\n v_j = 0\n j \u2208 A\n for 1 <= i <= m\n v_i - y_i(v_{max,i} - w_i^u) <= w_i^u (1)\n v_i - y_i(v_{min,i} - w_i^l) <= w_i^l (2)\n y_i \u2208 {0,1} (3)\n w_i^u = w_i + \\delta|w_i| + \\epsilon\n w_i^l = w_i - \\delta|w_i| - \\epsilon\n\n So, for the linear version of the ROOM , constraint (3) is relaxed to\n 0 <= y_i <= 1.\n\n See Also\n --------\n pfba : parsimonious FBA\n\n References\n ----------\n .. [1] Tomer Shlomi, Omer Berkman and Eytan Ruppin, \"Regulatory on/off\n minimization of metabolic flux changes after genetic perturbations\",\n PNAS 2005 102 (21) 7695-7700; doi:10.1073/pnas.0406346102"}
{"_id": "q_5179", "text": "Sample valid flux distributions from a cobra model.\n\n The function samples valid flux distributions from a cobra model.\n Currently we support two methods:\n\n 1. 'optgp' (default) which uses the OptGPSampler that supports parallel\n sampling [1]_. Requires large numbers of samples to be performant\n (n < 1000). For smaller samples 'achr' might be better suited.\n\n or\n\n 2. 'achr' which uses artificial centering hit-and-run. This is a single\n process method with good convergence [2]_.\n\n Parameters\n ----------\n model : cobra.Model\n The model from which to sample flux distributions.\n n : int\n The number of samples to obtain. When using 'optgp' this must be a\n multiple of `processes`, otherwise a larger number of samples will be\n returned.\n method : str, optional\n The sampling algorithm to use.\n thinning : int, optional\n The thinning factor of the generated sampling chain. A thinning of 10\n means samples are returned every 10 steps. Defaults to 100 which in\n benchmarks gives approximately uncorrelated samples. If set to one\n will return all iterates.\n processes : int, optional\n Only used for 'optgp'. The number of processes used to generate\n samples.\n seed : int > 0, optional\n The random number seed to be used. Initialized to current time stamp\n if None.\n\n Returns\n -------\n pandas.DataFrame\n The generated flux samples. Each row corresponds to a sample of the\n fluxes and the columns are the reactions.\n\n Notes\n -----\n The samplers have a correction method to ensure equality feasibility for\n long-running chains, however this will only work for homogeneous models,\n meaning models with no non-zero fixed variables or constraints (\n right-hand side of the equalities are zero).\n\n References\n ----------\n .. [1] Megchelenbrink W, Huynen M, Marchiori E (2014)\n optGpSampler: An Improved Tool for Uniformly Sampling the Solution-Space\n of Genome-Scale Metabolic Networks.\n PLoS ONE 9(2): e86587.\n .. 
[2] Direction Choice for Accelerated Convergence in Hit-and-Run Sampling\n David E. Kaufman Robert L. Smith\n Operations Research 199846:1 , 84-95"}
{"_id": "q_5180", "text": "Optimizely template tag.\n\n Renders Javascript code to set-up A/B testing. You must supply\n your Optimizely account number in the ``OPTIMIZELY_ACCOUNT_NUMBER``\n setting."}
{"_id": "q_5181", "text": "Clicky tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your Clicky Site ID (as a string) in the ``CLICKY_SITE_ID``\n setting."}
{"_id": "q_5182", "text": "Bottom Chartbeat template tag.\n\n Render the bottom Javascript code for Chartbeat. You must supply\n your Chartbeat User ID (as a string) in the ``CHARTBEAT_USER_ID``\n setting."}
{"_id": "q_5183", "text": "Spring Metrics tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your Spring Metrics Tracking ID in the\n ``SPRING_METRICS_TRACKING_ID`` setting."}
{"_id": "q_5184", "text": "SnapEngage set-up template tag.\n\n Renders Javascript code to set-up SnapEngage chat. You must supply\n your widget ID in the ``SNAPENGAGE_WIDGET_ID`` setting."}
{"_id": "q_5185", "text": "Coerce strings to hashable bytes."}
{"_id": "q_5186", "text": "Return a SHA-256 HMAC `user_hash` as expected by Intercom, if configured.\n\n Return None if the `INTERCOM_HMAC_SECRET_KEY` setting is not configured."}
{"_id": "q_5187", "text": "Intercom.io template tag.\n\n Renders Javascript code to intercom.io testing. You must supply\n your APP ID account number in the ``INTERCOM_APP_ID``\n setting."}
{"_id": "q_5188", "text": "UserVoice tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your UserVoice Widget Key in the ``USERVOICE_WIDGET_KEY``\n setting or the ``uservoice_widget_key`` template context variable."}
{"_id": "q_5189", "text": "Piwik tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your Piwik domain (plus optional URI path), and tracked site ID\n in the ``PIWIK_DOMAIN_PATH`` and the ``PIWIK_SITE_ID`` setting.\n\n Custom variables can be passed in the ``piwik_vars`` context\n variable. It is an iterable of custom variables as tuples like:\n ``(index, name, value[, scope])`` where scope may be ``'page'``\n (default) or ``'visit'``. Index should be an integer and the\n other parameters should be strings."}
{"_id": "q_5190", "text": "Return a constant from ``django.conf.settings``. The `setting`\n argument is the constant name, the `value_re` argument is a regular\n expression used to validate the setting value and the `invalid_msg`\n argument is used as exception message if the value is not valid."}
{"_id": "q_5191", "text": "Return whether the visitor is coming from an internal IP address,\n based on information from the template context.\n\n The prefix is used to allow different analytics services to have\n different notions of internal addresses."}
{"_id": "q_5192", "text": "Mixpanel tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your Mixpanel token in the ``MIXPANEL_API_TOKEN`` setting."}
{"_id": "q_5193", "text": "Olark set-up template tag.\n\n Renders Javascript code to set-up Olark chat. You must supply\n your site ID in the ``OLARK_SITE_ID`` setting."}
{"_id": "q_5194", "text": "Clickmap tracker template tag.\n\n Renders Javascript code to track page visits. You must supply\n your clickmap tracker ID (as a string) in the ``CLICKMAP_TRACKER_ID``\n setting."}
{"_id": "q_5195", "text": "Gaug.es template tag.\n\n Renders Javascript code to gaug.es testing. You must supply\n your Site ID account number in the ``GAUGES_SITE_ID``\n setting."}
{"_id": "q_5196", "text": "HubSpot tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your portal ID (as a string) in the ``HUBSPOT_PORTAL_ID`` setting."}
{"_id": "q_5197", "text": "Manage the printing and in-place updating of a line of characters\n\n .. note::\n If the string is longer than a line, then in-place updating may not\n work (it will print a new line at each refresh)."}
{"_id": "q_5198", "text": "Open a subprocess and stream its output without hard-blocking.\n\n :param cmd: the command to execute within the subprocess\n :type cmd: str\n\n :param callback: function that intakes the subprocess' stdout line by line.\n It is called for each line received from the subprocess' stdout stream.\n :type callback: Callable[[Context], bool]\n\n :param timeout: the timeout time of the subprocess\n :type timeout: float\n\n :raises TimeoutError: if the subprocess' execution time exceeds\n the timeout time\n\n :return: the return code of the executed subprocess\n :rtype: int"}
{"_id": "q_5199", "text": "Compute an exit code for mutmut mutation testing\n\n The following exit codes are available for mutmut:\n * 0 if all mutants were killed (OK_KILLED)\n * 1 if a fatal error occurred\n * 2 if one or more mutants survived (BAD_SURVIVED)\n * 4 if one or more mutants timed out (BAD_TIMEOUT)\n * 8 if one or more mutants caused tests to take twice as long (OK_SUSPICIOUS)\n\n Exit codes 1 to 8 will be bit-ORed so that it is possible to know what\n different mutant statuses occurred during mutation testing.\n\n :param exception:\n :type exception: Exception\n :param config:\n :type config: Config\n\n :return: integer noting the exit code of the mutation tests.\n :rtype: int"}
{"_id": "q_5200", "text": "Called when the specified characteristic has changed its value."}
{"_id": "q_5201", "text": "Called when the specified descriptor has changed its value."}
{"_id": "q_5202", "text": "Start scanning for BLE devices."}
{"_id": "q_5203", "text": "Stop scanning for BLE devices."}
{"_id": "q_5204", "text": "Power on Bluetooth."}
{"_id": "q_5205", "text": "Power off Bluetooth."}
{"_id": "q_5206", "text": "Find the first available device that supports this service and return\n it, or None if no device is found. Will wait for up to timeout_sec\n seconds to find the device."}
{"_id": "q_5207", "text": "Wait until the specified device has discovered the expected services\n and characteristics for this service. Should be called once before other\n calls are made on the service. Returns true if the service has been\n discovered in the specified timeout, or false if not discovered."}
{"_id": "q_5208", "text": "Return the first child service found that has the specified\n UUID. Will return None if no service that matches is found."}
{"_id": "q_5209", "text": "Return a list of GattService objects that have been discovered for\n this device."}
{"_id": "q_5210", "text": "Return a list of UUIDs for services that are advertised by this\n device."}
{"_id": "q_5211", "text": "Return the first child descriptor found that has the specified\n UUID. Will return None if no descriptor that matches is found."}
{"_id": "q_5212", "text": "Read the value of this characteristic."}
{"_id": "q_5213", "text": "Read the value of this descriptor."}
{"_id": "q_5214", "text": "Called when the BLE adapter found a device while scanning, or has\n new advertisement data for a device."}
{"_id": "q_5215", "text": "Called when a device is connected."}
{"_id": "q_5216", "text": "Called when descriptor value was read or updated."}
{"_id": "q_5217", "text": "Called when a new RSSI value for the peripheral is available."}
{"_id": "q_5218", "text": "Clear any internally cached BLE device data. Necessary in some cases\n to prevent issues with stale device data getting cached by the OS."}
{"_id": "q_5219", "text": "Disconnect any connected devices that have the specified list of\n service UUIDs. The default is an empty list which means all devices\n are disconnected."}
{"_id": "q_5220", "text": "Print tree of all bluez objects, useful for debugging."}
{"_id": "q_5221", "text": "Return the first device that advertises the specified service UUIDs or\n has the specified name. Will wait up to timeout_sec seconds for the device\n to be found, and if the timeout is zero then it will not wait at all and\n immediately return a result. When no device is found a value of None is\n returned."}
{"_id": "q_5222", "text": "Retrieve a list of metadata objects associated with the specified\n list of CoreBluetooth objects. If an object cannot be found then an\n exception is thrown."}
{"_id": "q_5223", "text": "Add the specified CoreBluetooth item with the associated metadata if\n it doesn't already exist. Returns the newly created or preexisting\n metadata item."}
{"_id": "q_5224", "text": "Convert Objective-C CBUUID type to native Python UUID type."}
{"_id": "q_5225", "text": "Return an instance of the BLE provider for the current platform."}
{"_id": "q_5226", "text": "Convert the byte array to a BigInteger"}
{"_id": "q_5227", "text": "Return the default set of request headers, which\n can later be expanded, based on the request type"}
{"_id": "q_5228", "text": "Search the play store for an app.\n\n nb_result (int): is the maximum number of result to be returned\n\n offset (int): is used to take result starting from an index."}
{"_id": "q_5229", "text": "Get app details from a package name.\n\n packageName is the app unique ID (usually starting with 'com.')."}
{"_id": "q_5230", "text": "Get several apps details from a list of package names.\n\n This is much more efficient than calling N times details() since it\n requires only one request. If an item is not found it returns an empty object\n instead of throwing a RequestError('Item not found') like the details() function\n\n Args:\n packageNames (list): a list of app IDs (usually starting with 'com.').\n\n Returns:\n a list of dictionaries containing docv2 data, or None\n if the app doesn't exist"}
{"_id": "q_5231", "text": "List all possible subcategories for a specific category. If\n also a subcategory is provided, list apps from this category.\n\n Args:\n cat (str): category id\n ctr (str): subcategory id\n nb_results (int): if a subcategory is specified, limit number\n of results to this number\n offset (int): if a subcategory is specified, start counting from this\n result\n Returns:\n A list of categories. If subcategory is specified, a list of apps in this\n category."}
{"_id": "q_5232", "text": "Browse reviews for an application\n\n Args:\n packageName (str): app unique ID.\n filterByDevice (bool): filter results for current device\n sort (int): sorting criteria (values are unknown)\n nb_results (int): max number of reviews to return\n offset (int): return reviews starting from an offset value\n\n Returns:\n dict object containing all the protobuf data returned from\n the api"}
{"_id": "q_5233", "text": "Download an already purchased app.\n\n Args:\n packageName (str): app unique ID (usually starting with 'com.')\n versionCode (int): version to download\n offerType (int): different type of downloads (mostly unused for apks)\n downloadToken (str): download token returned by 'purchase' API\n progress_bar (bool): wether or not to print a progress bar to stdout\n\n Returns:\n Dictionary containing apk data and a list of expansion files. As stated\n in android documentation, there can be at most 2 expansion files, one with\n main content, and one for patching the main content. Their names should\n follow this format:\n\n [main|patch].<expansion-version>.<package-name>.obb\n\n Data to build this name string is provided in the dict object. For more\n info check https://developer.android.com/google/play/expansion-files.html"}
{"_id": "q_5234", "text": "Decorator function that injects a requests.Session instance into\n the decorated function's actual parameters if not given."}
{"_id": "q_5235", "text": "Generates a secure authentication token.\n\n Our token format follows the JSON Web Token (JWT) standard:\n header.claims.signature\n\n Where:\n 1) 'header' is a stringified, base64-encoded JSON object containing version and algorithm information.\n 2) 'claims' is a stringified, base64-encoded JSON object containing a set of claims:\n Library-generated claims:\n 'iat' -> The issued at time in seconds since the epoch as a number\n 'd' -> The arbitrary JSON object supplied by the user.\n User-supplied claims (these are all optional):\n 'exp' (optional) -> The expiration time of this token, as a number of seconds since the epoch.\n 'nbf' (optional) -> The 'not before' time before which the token should be rejected (seconds since the epoch)\n 'admin' (optional) -> If set to true, this client will bypass all security rules (use this to authenticate servers)\n 'debug' (optional) -> 'set to true to make this client receive debug information about security rule execution.\n 'simulate' (optional, internal-only for now) -> Set to true to neuter all API operations (listens / puts\n will run security rules but not actually write or return data).\n 3) A signature that proves the validity of this token (see: http://tools.ietf.org/html/draft-ietf-jose-json-web-signature-07)\n\n For base64-encoding we use URL-safe base64 encoding. This ensures that the entire token is URL-safe\n and could, for instance, be placed as a query argument without any encoding (and this is what the JWT spec requires).\n\n Args:\n data - a json serializable object of data to be included in the token\n options - An optional dictionary of additional claims for the token. 
Possible keys include:\n a) 'expires' -- A timestamp (as a number of seconds since the epoch) denoting a time after which\n this token should no longer be valid.\n b) 'notBefore' -- A timestamp (as a number of seconds since the epoch) denoting a time before\n which this token should be rejected by the server.\n c) 'admin' -- Set to true to bypass all security rules (use this for your trusted servers).\n d) 'debug' -- Set to true to enable debug mode (so you can see the results of Rules API operations)\n e) 'simulate' -- (internal-only for now) Set to true to neuter all API operations (listens / puts\n will run security rules but not actually write or return data)\n Returns:\n A signed Firebase Authentication Token\n Raises:\n ValueError: if an invalid key is specified in options"}
{"_id": "q_5236", "text": "Method that simply adjusts authentication credentials for the\n request.\n `params` is the querystring of the request.\n `headers` is the header of the request.\n\n If auth instance is not provided to this class, this method simply\n returns without doing anything."}
{"_id": "q_5237", "text": "Synchronous GET request."}
{"_id": "q_5238", "text": "Asynchronous GET request with the process pool."}
{"_id": "q_5239", "text": "Returns zero if there are no permissions for a bit of the perm. of a file. Otherwise it returns a positive value\n\n :param os.stat_result s: os.stat(file) object\n :param str perm: R (Read) or W (Write) or X (eXecute)\n :param str pos: USR (USeR) or GRP (GRouP) or OTH (OTHer)\n :return: mask value\n :rtype: int"}
{"_id": "q_5240", "text": "File is only writable by root\n\n :param str path: Path to file\n :return: True if only root can write\n :rtype: bool"}
{"_id": "q_5241", "text": "Command to check configuration file. Raises InvalidConfig on error\n\n :param str file: path to config file\n :param printfn: print function for success message\n :return: None"}
{"_id": "q_5242", "text": "Parse and validate the config file. The read data is accessible as a dictionary in this instance\n\n :return: None"}
{"_id": "q_5243", "text": "Excecute command on thread\n\n :param cmd: Command to execute\n :param cwd: current working directory\n :return: None"}
{"_id": "q_5244", "text": "Excecute command on remote machine using SSH\n\n :param cmd: Command to execute\n :param ssh: Server to connect. Port is optional\n :param cwd: current working directory\n :return: None"}
{"_id": "q_5245", "text": "Get HTTP Headers to send. By default default_headers\n\n :return: HTTP Headers\n :rtype: dict"}
{"_id": "q_5246", "text": "Return \"data\" value on self.data\n\n :return: data to send\n :rtype: str"}
{"_id": "q_5247", "text": "Return source mac address for this Scapy Packet\n\n :param scapy.packet.Packet pkt: Scapy Packet\n :return: Mac address. Include (Amazon Device) for these devices\n :rtype: str"}
{"_id": "q_5248", "text": "Scandevice callback. Register src mac to avoid src repetition.\n Print device on screen.\n\n :param scapy.packet.Packet pkt: Scapy Packet\n :return: None"}
{"_id": "q_5249", "text": "Print help and scan devices on screen.\n\n :return: None"}
{"_id": "q_5250", "text": "Send success or error message to configured confirmation\n\n :param str message: Body message to send\n :param bool success: Device executed successfully to personalize message\n :return: None"}
{"_id": "q_5251", "text": "Start daemon mode\n\n :param bool root_allowed: Only used for ExecuteCmd\n :return: loop"}
{"_id": "q_5252", "text": "Filter queryset based on keywords.\n Support for multiple-selected parent values."}
{"_id": "q_5253", "text": "Return True if according to should_index the object should be indexed."}
{"_id": "q_5254", "text": "Returns the settings of the index."}
{"_id": "q_5255", "text": "Registers the given model with Algolia engine.\n\n If the given model is already registered with Algolia engine, a\n RegistrationError will be raised."}
{"_id": "q_5256", "text": "Returns the adapter associated with the given model."}
{"_id": "q_5257", "text": "Signal handler for when a registered model has been saved."}
{"_id": "q_5258", "text": "Encode a position given in float arguments latitude, longitude to\n a geohash which will have the character count precision."}
{"_id": "q_5259", "text": "Pad a string to the target length in characters, or return the original\n string if it's longer than the target length."}
{"_id": "q_5260", "text": "Pad short rows to the length of the longest row to help render \"jagged\"\n CSV files"}
{"_id": "q_5261", "text": "Pad each cell to the size of the largest cell in its column."}
{"_id": "q_5262", "text": "Add dividers and padding to a row of cells and return a string."}
{"_id": "q_5263", "text": "Calculate base id and version from a resource id.\n\n :params resource_id: Resource id.\n :params return_version: (optional) True if You need version, returns (resource_id, version)."}
{"_id": "q_5264", "text": "Make a bid.\n\n :params trade_id: Trade id.\n :params bid: Amount of credits You want to spend.\n :params fast: True for fastest bidding (skips trade status & credits check)."}
{"_id": "q_5265", "text": "Return items in your club, excluding consumables.\n\n :param ctype: [development / ? / ?] Card type.\n :param level: (optional) [?/?/gold] Card level.\n :param category: (optional) [fitness/?/?] Card category.\n :param assetId: (optional) Asset id.\n :param defId: (optional) Definition id.\n :param min_price: (optional) Minimal price.\n :param max_price: (optional) Maximum price.\n :param min_buy: (optional) Minimal buy now price.\n :param max_buy: (optional) Maximum buy now price.\n :param league: (optional) League id.\n :param club: (optional) Club id.\n :param position: (optional) Position.\n :param nationality: (optional) Nation id.\n :param rare: (optional) [boolean] True for searching special cards.\n :param playStyle: (optional) Play style.\n :param start: (optional) Start page sent to server so it supposed to be 12/15, 24/30 etc. (default platform page_size*n)\n :param page_size: (optional) Page size (items per page)"}
{"_id": "q_5266", "text": "Return all consumables from club."}
{"_id": "q_5267", "text": "Return items in tradepile."}
{"_id": "q_5268", "text": "Start auction. Returns trade_id.\n\n :params item_id: Item id.\n :params bid: Stard bid.\n :params buy_now: Buy now price.\n :params duration: Auction duration in seconds (Default: 3600)."}
{"_id": "q_5269", "text": "Quick sell.\n\n :params item_id: Item id."}
{"_id": "q_5270", "text": "Send to watchlist.\n\n :params trade_id: Trade id."}
{"_id": "q_5271", "text": "Send card FROM CLUB to first free slot in sbs squad."}
{"_id": "q_5272", "text": "Apply consumable on player.\n\n :params item_id: Item id of player.\n :params resource_id: Resource id of consumable."}
{"_id": "q_5273", "text": "Return active messages."}
{"_id": "q_5274", "text": "Runs its worker method.\n\n This method will be terminated once its parent's is_running\n property turns False."}
{"_id": "q_5275", "text": "Returns a NumPy array that represents the 2D pixel location,\n which is defined by PFNC, of the original image data.\n\n You may use the returned NumPy array for a calculation to map the\n original image to another format.\n\n :return: A NumPy array that represents the 2D pixel location."}
{"_id": "q_5276", "text": "Starts image acquisition.\n\n :return: None."}
{"_id": "q_5277", "text": "Stops image acquisition.\n\n :return: None."}
{"_id": "q_5278", "text": "Adds a CTI file to work with to the CTI file list.\n\n :param file_path: Set a file path to the target CTI file.\n\n :return: None."}
{"_id": "q_5279", "text": "Removes the specified CTI file from the CTI file list.\n\n :param file_path: Set a file path to the target CTI file.\n\n :return: None."}
{"_id": "q_5280", "text": "Releases all external resources including the controlling device."}
{"_id": "q_5281", "text": "Run the unit test suite with each support library and Python version."}
{"_id": "q_5282", "text": "Transform README.md into a usable long description.\n\n Replaces relative references to svg images to absolute https references."}
{"_id": "q_5283", "text": "Return a PrecalculatedTextMeasurer given a JSON stream.\n\n See precalculate_text.py for details on the required format."}
{"_id": "q_5284", "text": "Returns a reasonable default PrecalculatedTextMeasurer."}
{"_id": "q_5285", "text": "Creates a github-style badge as an SVG image.\n\n >>> badge(left_text='coverage', right_text='23%', right_color='red')\n '<svg...</svg>'\n >>> badge(left_text='build', right_text='green', right_color='green',\n ... whole_link=\"http://www.example.com/\")\n '<svg...</svg>'\n\n Args:\n left_text: The text that should appear on the left-hand-side of the\n badge e.g. \"coverage\".\n right_text: The text that should appear on the right-hand-side of the\n badge e.g. \"23%\".\n left_link: The URL that should be redirected to when the left-hand text\n is selected.\n right_link: The URL that should be redirected to when the right-hand\n text is selected.\n whole_link: The link that should be redirected to when the badge is\n selected. If set then left_link and right_right may not be set.\n logo: A url representing a logo that will be displayed inside the\n badge. Can be a data URL e.g. \"data:image/svg+xml;utf8,<svg...\"\n left_color: The color of the part of the badge containing the left-hand\n text. Can be an valid CSS color\n (see https://developer.mozilla.org/en-US/docs/Web/CSS/color) or a\n color name defined here:\n https://github.com/badges/shields/blob/master/lib/colorscheme.json\n right_color: The color of the part of the badge containing the\n right-hand text. Can be an valid CSS color\n (see https://developer.mozilla.org/en-US/docs/Web/CSS/color) or a\n color name defined here:\n https://github.com/badges/shields/blob/master/lib/colorscheme.json\n measurer: A text_measurer.TextMeasurer that can be used to measure the\n width of left_text and right_text.\n embed_logo: If True then embed the logo image directly in the badge.\n This can prevent an HTTP request and some browsers will not render\n external image referenced. When True, `logo` must be a HTTP/HTTPS\n URI or a filesystem path. Also, the `badge` call may raise an\n exception if the logo cannot be loaded, is not an image, etc."}
{"_id": "q_5286", "text": "Generates the subset of 'characters' that can be encoded by 'encodings'.\n\n Args:\n characters: The characters to check for encodeability e.g. 'abcd'.\n encodings: The encodings to check against e.g. ['cp1252', 'iso-8859-5'].\n\n Returns:\n The subset of 'characters' that can be encoded using one of the provided\n encodings."}
{"_id": "q_5287", "text": "Return a mapping between each given character and its length.\n\n Args:\n measurer: The TextMeasurer used to measure the width of the text in\n pixels.\n characters: The characters to measure e.g. \"ml\".\n\n Returns:\n A mapping from the given characters to their length in pixels, as\n determined by 'measurer' e.g. {'m': 5.2, 'l', 1.2}."}
{"_id": "q_5288", "text": "Write the data required by PrecalculatedTextMeasurer to a stream."}
{"_id": "q_5289", "text": "Called internally during the parsing of the Zabransky database, to\n add coefficients as they are read one per line"}
{"_id": "q_5290", "text": "Determines the index at which the coefficients for the current\n temperature are stored in `coeff_sets`."}
{"_id": "q_5291", "text": "r'''Method to calculate heat capacity of a liquid at temperature `T`\n with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate heat capacity, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n Cp : float\n Heat capacity of the liquid at T, [J/mol/K]"}
{"_id": "q_5292", "text": "r'''Method to calculate the integral of a property with respect to\n temperature, using a specified method. Implements the \n analytical integrals of all available methods except for tabular data,\n the case of multiple coefficient sets needed to encompass the temperature\n range of any of the ZABRANSKY methods, and the CSP methods using the\n vapor phase properties.\n\n Parameters\n ----------\n T1 : float\n Lower limit of integration, [K]\n T2 : float\n Upper limit of integration, [K]\n method : str\n Method for which to find the integral\n\n Returns\n -------\n integral : float\n Calculated integral of the property over the given range, \n [`units*K`]"}
{"_id": "q_5293", "text": "r'''Method to calculate heat capacity of a liquid mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n Cplm : float\n Molar heat capacity of the liquid mixture at the given conditions,\n [J/mol]"}
{"_id": "q_5294", "text": "r'''Method to calculate heat capacity of a solid mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n Cpsm : float\n Molar heat capacity of the solid mixture at the given conditions, [J/mol]"}
{"_id": "q_5295", "text": "r'''Calculates the objective function of the Rachford-Rice flash equation.\n This function should be called by a solver seeking a solution to a flash\n calculation. The unknown variable is `V_over_F`, for which a solution\n must be between 0 and 1.\n\n .. math::\n \\sum_i \\frac{z_i(K_i-1)}{1 + \\frac{V}{F}(K_i-1)} = 0\n\n Parameters\n ----------\n V_over_F : float\n Vapor fraction guess [-]\n zs : list[float]\n Overall mole fractions of all species, [-]\n Ks : list[float]\n Equilibrium K-values, [-]\n\n Returns\n -------\n error : float\n Deviation between the objective function at the correct V_over_F\n and the attempted V_over_F, [-]\n\n Notes\n -----\n The derivation is as follows:\n\n .. math::\n F z_i = L x_i + V y_i\n\n x_i = \\frac{z_i}{1 + \\frac{V}{F}(K_i-1)}\n\n \\sum_i y_i = \\sum_i K_i x_i = 1\n\n \\sum_i(y_i - x_i)=0\n\n \\sum_i \\frac{z_i(K_i-1)}{1 + \\frac{V}{F}(K_i-1)} = 0\n\n Examples\n --------\n >>> Rachford_Rice_flash_error(0.5, zs=[0.5, 0.3, 0.2],\n ... Ks=[1.685, 0.742, 0.532])\n 0.04406445591174976\n\n References\n ----------\n .. [1] Rachford, H. H. Jr, and J. D. Rice. \"Procedure for Use of Electronic\n Digital Computers in Calculating Flash Vaporization Hydrocarbon\n Equilibrium.\" Journal of Petroleum Technology 4, no. 10 (October 1,\n 1952): 19-3. doi:10.2118/952327-G."}
{"_id": "q_5296", "text": "r'''Calculates the activity coefficients of each species in a mixture\n using the Wilson method, given their mole fractions, and\n dimensionless interaction parameters. Those are normally correlated with\n temperature, and need to be calculated separately.\n\n .. math::\n \\ln \\gamma_i = 1 - \\ln \\left(\\sum_j^N \\Lambda_{ij} x_j\\right)\n -\\sum_j^N \\frac{\\Lambda_{ji}x_j}{\\displaystyle\\sum_k^N \\Lambda_{jk}x_k}\n\n Parameters\n ----------\n xs : list[float]\n Liquid mole fractions of each species, [-]\n params : list[list[float]]\n Dimensionless interaction parameters of each compound with each other,\n [-]\n\n Returns\n -------\n gammas : list[float]\n Activity coefficient for each species in the liquid mixture, [-]\n\n Notes\n -----\n This model needs N^2 parameters.\n\n The original model correlated the interaction parameters using the standard\n pure-component molar volumes of each species at 25\u00b0C, in the following form:\n\n .. math::\n \\Lambda_{ij} = \\frac{V_j}{V_i} \\exp\\left(\\frac{-\\lambda_{i,j}}{RT}\\right)\n\n However, that form has less flexibility and offered no advantage over\n using only regressed parameters.\n\n Most correlations for the interaction parameters include some of the terms\n shown in the following form:\n\n .. math::\n \\ln \\Lambda_{ij} =a_{ij}+\\frac{b_{ij}}{T}+c_{ij}\\ln T + d_{ij}T\n + \\frac{e_{ij}}{T^2} + h_{ij}{T^2}\n\n The Wilson model is not applicable to liquid-liquid systems.\n\n Examples\n --------\n Ethanol-water example, at 343.15 K and 1 MPa:\n\n >>> Wilson([0.252, 0.748], [[1, 0.154], [0.888, 1]])\n [1.8814926087178843, 1.1655774931125487]\n\n References\n ----------\n .. [1] Wilson, Grant M. \"Vapor-Liquid Equilibrium. XI. A New Expression for\n the Excess Free Energy of Mixing.\" Journal of the American Chemical\n Society 86, no. 2 (January 1, 1964): 127-130. doi:10.1021/ja01056a002.\n .. [2] Gmehling, Jurgen, Barbel Kolbe, Michael Kleiber, and Jurgen Rarey.\n Chemical Thermodynamics for Process Simulation. 1st edition. Weinheim:\n Wiley-VCH, 2012."}
{"_id": "q_5297", "text": "r'''Determines the phase of a one-species chemical system according to\n basic rules, using whatever information is available. Considers only the\n phases liquid, solid, and gas; does not consider two-phase\n scenarios, as occur between phase boundaries.\n\n * If the melting temperature is known and the temperature is under or equal\n to it, consider it a solid.\n * If the critical temperature is known and the temperature is greater or\n equal to it, consider it a gas.\n * If the vapor pressure at `T` is known and the pressure is under or equal\n to it, consider it a gas. If the pressure is greater than the vapor\n pressure, consider it a liquid.\n * If the melting temperature, critical temperature, and vapor pressure are\n not known, attempt to use the boiling point to provide phase information.\n If the pressure is between 90 kPa and 110 kPa (approximately normal),\n consider it a liquid if it is under the boiling temperature and a gas if\n above the boiling temperature.\n * If the pressure is above 110 kPa and the boiling temperature is known,\n consider it a liquid if the temperature is under the boiling temperature.\n * Return None otherwise.\n\n Parameters\n ----------\n T : float\n Temperature, [K]\n P : float\n Pressure, [Pa]\n Tm : float, optional\n Normal melting temperature, [K]\n Tb : float, optional\n Normal boiling point, [K]\n Tc : float, optional\n Critical temperature, [K]\n Psat : float, optional\n Vapor pressure of the fluid at `T`, [Pa]\n\n Returns\n -------\n phase : str\n Either 's', 'l', 'g', or None if the phase cannot be determined\n\n Notes\n -----\n No special attention is paid to any phase transition. For the case where\n the melting point is not provided, the possibility of the fluid being solid\n is simply ignored.\n\n Examples\n --------\n >>> identify_phase(T=280, P=101325, Tm=273.15, Psat=991)\n 'l'"}
{"_id": "q_5298", "text": "r'''Charge of a chemical, computed with RDKit from a chemical's SMILES.\n If RDKit is not available, holds None.\n\n Examples\n --------\n >>> Chemical('sodium ion').charge\n 1"}
{"_id": "q_5299", "text": "r'''RDKit object of the chemical, without hydrogen. If RDKit is not\n available, holds None.\n\n For examples of what can be done with RDKit, see\n `their website <http://www.rdkit.org/docs/GettingStartedInPython.html>`_."}
{"_id": "q_5300", "text": "r'''RDKit object of the chemical, with hydrogen. If RDKit is not\n available, holds None.\n\n For examples of what can be done with RDKit, see\n `their website <http://www.rdkit.org/docs/GettingStartedInPython.html>`_."}
{"_id": "q_5301", "text": "r'''Dictionary of legal status indicators for the chemical.\n\n Examples\n --------\n >>> pprint(Chemical('benzene').legal_status)\n {'DSL': 'LISTED',\n 'EINECS': 'LISTED',\n 'NLP': 'UNLISTED',\n 'SPIN': 'LISTED',\n 'TSCA': 'LISTED'}"}
{"_id": "q_5302", "text": "r'''Dictionary of economic status indicators for the chemical.\n\n Examples\n --------\n >>> pprint(Chemical('benzene').economic_status)\n [\"US public: {'Manufactured': 6165232.1, 'Imported': 463146.474, 'Exported': 271908.252}\",\n u'1,000,000 - 10,000,000 tonnes per annum',\n u'Intermediate Use Only',\n 'OECD HPV Chemicals']"}
{"_id": "q_5303", "text": "r'''This function handles the retrieval of a chemical's Global Warming\n Potential, relative to CO2. Lookup is based on CASRNs. Will automatically\n select a data source to use if no Method is provided; returns None if the\n data is not available.\n\n Returns the GWP for the 100yr outlook by default.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n GWP : float\n Global warming potential, [(impact/mass chemical)/(impact/mass CO2)]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain GWP with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'IPCC (2007) 100yr',\n 'IPCC (2007) 100yr-SAR', 'IPCC (2007) 20yr', and 'IPCC (2007) 500yr'. \n All valid values are also held in the list GWP_methods.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the GWP for the desired chemical, and will return methods\n instead of the GWP\n\n Notes\n -----\n All data is from [1]_, the official source. Several chemicals available\n in [1]_ are not included here as they do not have a CAS.\n Methods are 'IPCC (2007) 100yr', 'IPCC (2007) 100yr-SAR',\n 'IPCC (2007) 20yr', and 'IPCC (2007) 500yr'.\n\n Examples\n --------\n Methane, 100-yr outlook\n\n >>> GWP(CASRN='74-82-8')\n 25.0\n\n References\n ----------\n .. [1] IPCC. \"2.10.2 Direct Global Warming Potentials - AR4 WGI Chapter 2:\n Changes in Atmospheric Constituents and in Radiative Forcing.\" 2007.\n https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-10-2.html."}
{"_id": "q_5304", "text": "r'''Method to calculate vapor pressure of a fluid at temperature `T`\n with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate vapor pressure, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n Psat : float\n Vapor pressure at T, [Pa]"}
{"_id": "q_5305", "text": "Counts the number of real volumes in `Vs`, and determines what to do.\n If there is only one real volume, the method \n `set_properties_from_solution` is called with it. If there are\n two real volumes, `set_properties_from_solution` is called once with \n each volume. The phase is returned by `set_properties_from_solution`, \n and the volume is set to either `V_l` or `V_g` as appropriate. \n\n Parameters\n ----------\n Vs : list[float]\n Three possible molar volumes, [m^3/mol]"}
{"_id": "q_5306", "text": "Generic method to calculate `T` from a specified `P` and `V`.\n Provides SciPy's `newton` solver, and iterates to solve the general\n equation for `P`, recalculating `a_alpha` as a function of temperature\n using `a_alpha_and_derivatives` each iteration.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Whether to use a SymPy cse-derived expression (3x faster) or \n individual formulas - not applicable where a numerical solver is\n used.\n\n Returns\n -------\n T : float\n Temperature, [K]"}
{"_id": "q_5307", "text": "r'''Method to calculate `a_alpha` and its first and second\n derivatives for this EOS. Returns `a_alpha`, `da_alpha_dT`, and \n `d2a_alpha_dT2`. See `GCEOS.a_alpha_and_derivatives` for more \n documentation. Uses the set values of `Tc`, `kappa`, and `a`. \n \n For use in `solve_T`, returns only `a_alpha` if full is False.\n\n .. math::\n a\\alpha = a \\left(\\kappa \\left(- \\frac{T^{0.5}}{Tc^{0.5}} \n + 1\\right) + 1\\right)^{2}\n \n \\frac{d a\\alpha}{dT} = - \\frac{1.0 a \\kappa}{T^{0.5} Tc^{0.5}}\n \\left(\\kappa \\left(- \\frac{T^{0.5}}{Tc^{0.5}} + 1\\right) + 1\\right)\n\n \\frac{d^2 a\\alpha}{dT^2} = 0.5 a \\kappa \\left(- \\frac{1}{T^{1.5} \n Tc^{0.5}} \\left(\\kappa \\left(\\frac{T^{0.5}}{Tc^{0.5}} - 1\\right)\n - 1\\right) + \\frac{\\kappa}{T^{1.0} Tc^{1.0}}\\right)"}
{"_id": "q_5308", "text": "r'''Method to calculate `T` from a specified `P` and `V` for the PRSV\n EOS. Uses `Tc`, `a`, `b`, `kappa0` and `kappa` as well, obtained from \n the class's namespace.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Whether to use a SymPy cse-derived expression (somewhat faster) or \n individual formulas.\n\n Returns\n -------\n T : float\n Temperature, [K]\n \n Notes\n -----\n Not guaranteed to produce a solution. There are actually two solutions,\n one much higher than normally desired; it is possible the solver could\n converge on this."}
{"_id": "q_5309", "text": "r'''Method to calculate `T` from a specified `P` and `V` for the VDW\n EOS. Uses `a`, and `b`, obtained from the class's namespace.\n\n .. math::\n T = \\frac{1}{R V^{2}} \\left(P V^{2} \\left(V - b\\right)\n + V a - a b\\right)\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n\n Returns\n -------\n T : float\n Temperature, [K]"}
{"_id": "q_5310", "text": "r'''Method to calculate `T` from a specified `P` and `V` for the RK\n EOS. Uses `a` and `b`, obtained from the class's namespace.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Whether to use a SymPy cse-derived expression (3x faster) or \n individual formulas\n\n Returns\n -------\n T : float\n Temperature, [K]\n\n Notes\n -----\n The exact solution can be derived as follows; it is excluded for \n brevity.\n \n >>> from sympy import *\n >>> P, T, V, R = symbols('P, T, V, R')\n >>> Tc, Pc = symbols('Tc, Pc')\n >>> a, b = symbols('a, b')\n\n >>> RK = Eq(P, R*T/(V-b) - a/sqrt(T)/(V*V + b*V))\n >>> # solve(RK, T)"}
{"_id": "q_5311", "text": "r'''Method to calculate `T` from a specified `P` and `V` for the API \n SRK EOS. Uses `a`, `b`, and `Tc` obtained from the class's namespace.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Whether to use a SymPy cse-derived expression (3x faster) or \n individual formulas\n\n Returns\n -------\n T : float\n Temperature, [K]\n\n Notes\n -----\n If S2 is set to 0, the solution is the same as in the SRK EOS, and that\n is used. Otherwise, newton's method must be used to solve for `T`. \n There are 8 roots of T in that case, six of them real. No guarantee can\n be made regarding which root will be obtained."}
{"_id": "q_5312", "text": "r'''Method to calculate `a_alpha` and its first and second\n derivatives for this EOS. Returns `a_alpha`, `da_alpha_dT`, and \n `d2a_alpha_dT2`. See `GCEOS.a_alpha_and_derivatives` for more \n documentation. Uses the set values of `Tc`, `omega`, and `a`.\n \n Because of its similarity for the TWUPR EOS, this has been moved to an \n external `TWU_a_alpha_common` function. See it for further \n documentation."}
{"_id": "q_5313", "text": "r'''This function handles the retrieval of a chemical's boiling\n point. Lookup is based on CASRNs. Will automatically select a data\n source to use if no Method is provided; returns None if the data is not\n available.\n\n Preferred sources are 'CRC Physical Constants, organic' for organic\n chemicals, and 'CRC Physical Constants, inorganic' for inorganic\n chemicals. Function has data for approximately 13000 chemicals.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Tb : float\n Boiling temperature, [K]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Tb with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Tb_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Tb for the desired chemical, and will return methods instead of Tb\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of four methods are available for this function. They are:\n\n * 'CRC_ORG', a compilation of data on organics\n as published in [1]_.\n * 'CRC_INORG', a compilation of data on\n inorganics as published in [1]_.\n * 'YAWS', a large compilation of data from a\n variety of sources; no data points are sourced in the work of [2]_.\n * 'PSAT_DEFINITION', calculation of boiling point from a\n vapor pressure calculation. This is normally off by a fraction of a\n degree even in the best cases. Listed in IgnoreMethods by default\n for performance reasons.\n\n Examples\n --------\n >>> Tb('7732-18-5')\n 373.124\n\n References\n ----------\n .. [1] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.\n .. [2] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."}
{"_id": "q_5314", "text": "r'''This function handles the retrieval of a chemical's melting\n point. Lookup is based on CASRNs. Will automatically select a data\n source to use if no Method is provided; returns None if the data is not\n available.\n\n Preferred sources are 'Open Notebook Melting Points', with backup sources\n 'CRC Physical Constants, organic' for organic chemicals, and\n 'CRC Physical Constants, inorganic' for inorganic chemicals. Function has\n data for approximately 14000 chemicals.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Tm : float\n Melting temperature, [K]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Tm with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Tm_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Tm for the desired chemical, and will return methods instead of Tm\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods\n\n Notes\n -----\n A total of three sources are available for this function. They are:\n\n * 'OPEN_NTBKM', a compilation of data on organics\n as published in [1]_ as Open Notebook Melting Points; Averaged \n (median) values were used when\n multiple points were available. For more information on this\n invaluable and excellent collection, see\n http://onswebservices.wikispaces.com/meltingpoint.\n * 'CRC_ORG', a compilation of data on organics\n as published in [2]_.\n * 'CRC_INORG', a compilation of data on\n inorganics as published in [2]_.\n\n Examples\n --------\n >>> Tm(CASRN='7732-18-5')\n 273.15\n\n References\n ----------\n .. [1] Bradley, Jean-Claude, Antony Williams, and Andrew Lang.\n \"Jean-Claude Bradley Open Melting Point Dataset\", May 20, 2014.\n https://figshare.com/articles/Jean_Claude_Bradley_Open_Melting_Point_Datset/1031637.\n .. [2] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014."}
{"_id": "q_5315", "text": "r'''Calculates enthalpy of vaporization at arbitrary temperatures using the\n Clapeyron equation.\n\n The enthalpy of vaporization is given by:\n\n .. math::\n \\Delta H_{vap} = RT \\Delta Z \\frac{\\ln (P_c/Psat)}{(1-T_{r})}\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Pc : float\n Critical pressure of fluid [Pa]\n dZ : float\n Change in compressibility factor between liquid and gas, []\n Psat : float\n Saturation pressure of fluid [Pa], optional\n\n Returns\n -------\n Hvap : float\n Enthalpy of vaporization, [J/mol]\n\n Notes\n -----\n No original source is available for this equation.\n [1]_ claims this equation overpredicts enthalpy by several percent.\n Under Tr = 0.8, dZ = 1 is a reasonable assumption.\n This equation is most accurate at the normal boiling point.\n\n Internal units are bar.\n\n WARNING: I believe it possible that the adjustment for pressure may be incorrect\n\n Examples\n --------\n Problem from Perry's examples.\n\n >>> Clapeyron(T=294.0, Tc=466.0, Pc=5.55E6)\n 26512.354585061985\n\n References\n ----------\n .. [1] Poling, Bruce E. The Properties of Gases and Liquids. 5th edition.\n New York: McGraw-Hill Professional, 2000."}
{"_id": "q_5316", "text": "This function handles the calculation of a chemical's enthalpy of fusion.\n Generally this is used by the chemical class, as all parameters are passed.\n Calling the function directly works okay.\n\n Enthalpy of fusion is a weak function of pressure, and its effects are\n neglected.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface."}
{"_id": "q_5317", "text": "This function handles the calculation of a chemical's enthalpy of sublimation.\n Generally this is used by the chemical class, as all parameters are passed.\n\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface."}
{"_id": "q_5318", "text": "This function handles the retrieval of a mixture's liquidus point.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface.\n\n >>> Tliquidus(Tms=[250.0, 350.0], xs=[0.5, 0.5])\n 350.0\n >>> Tliquidus(Tms=[250, 350], xs=[0.5, 0.5], Method='Simple')\n 300.0\n >>> Tliquidus(Tms=[250, 350], xs=[0.5, 0.5], AvailableMethods=True)\n ['Maximum', 'Simple', 'None']"}
{"_id": "q_5319", "text": "r'''This function handles the calculation of a chemical's solubility\n parameter. Calculation is a function of temperature, but is not always\n presented as such. No lookup values are available; either `Hvapm`, `Vml`,\n and `T` are provided or the calculation cannot be performed.\n\n .. math::\n \\delta = \\sqrt{\\frac{\\Delta H_{vap} - RT}{V_m}}\n\n Parameters\n ----------\n T : float\n Temperature of the fluid [K]\n Hvapm : float\n Heat of vaporization [J/mol]\n Vml : float\n Specific volume of the liquid [m^3/mol]\n CASRN : str, optional\n CASRN of the fluid, not currently used [-]\n\n Returns\n -------\n delta : float\n Solubility parameter, [Pa^0.5]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain the solubility parameter\n with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n solubility_parameter_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the solubility parameter for the desired chemical, and will return\n methods instead of the solubility parameter\n\n Notes\n -----\n Undefined past the critical point. For convenience, if Hvap is not defined,\n an error is not raised; None is returned instead. Also for convenience,\n if Hvapm is less than RT, None is returned to avoid taking the root of a\n negative number.\n\n This parameter is often given in units of cal/ml, which is 2045.48 times\n smaller than the value returned here.\n\n Examples\n --------\n Pentane at STP\n\n >>> solubility_parameter(T=298.2, Hvapm=26403.3, Vml=0.000116055)\n 14357.681538173534\n\n References\n ----------\n .. [1] Barton, Allan F. M. CRC Handbook of Solubility Parameters and Other\n Cohesion Parameters, Second Edition. CRC Press, 1991."}
{"_id": "q_5320", "text": "r'''Returns the maximum solubility of a solute in a solvent.\n\n .. math::\n \\ln x_i^L \\gamma_i^L = \\frac{\\Delta H_{m,i}}{RT}\\left(\n 1 - \\frac{T}{T_{m,i}}\\right) - \\frac{\\Delta C_{p,i}(T_{m,i}-T)}{RT}\n + \\frac{\\Delta C_{p,i}}{R}\\ln\\frac{T_m}{T}\n\n \\Delta C_{p,i} = C_{p,i}^L - C_{p,i}^S\n\n Parameters\n ----------\n T : float\n Temperature of the system [K]\n Tm : float\n Melting temperature of the solute [K]\n Hm : float\n Heat of melting at the melting temperature of the solute [J/mol]\n Cpl : float, optional\n Molar heat capacity of the solute as a liquid [J/mol/K]\n Cps : float, optional\n Molar heat capacity of the solute as a solid [J/mol/K]\n gamma : float, optional\n Activity coefficient of the solute as a liquid [-]\n\n Returns\n -------\n x : float\n Mole fraction of solute at maximum solubility [-]\n\n Notes\n -----\n gamma is of the solute in liquid phase\n\n Examples\n --------\n From [1]_, matching example\n\n >>> solubility_eutectic(T=260., Tm=278.68, Hm=9952., Cpl=0, Cps=0, gamma=3.0176)\n 0.24340068761677464\n\n References\n ----------\n .. [1] Gmehling, Jurgen. Chemical Thermodynamics: For Process Simulation.\n Weinheim, Germany: Wiley-VCH, 2012."}
{"_id": "q_5321", "text": "r'''Returns the freezing point depression caused by a solute in a solvent.\n Can use either the mole fraction of the solute or its molality and the\n molecular weight of the solvent. Assumes ideal system behavior.\n\n .. math::\n \\Delta T_m = \\frac{R T_m^2 x}{\\Delta H_m}\n\n \\Delta T_m = \\frac{R T_m^2 (MW) M}{1000 \\Delta H_m}\n\n Parameters\n ----------\n Tm : float\n Melting temperature of the solute [K]\n Hm : float\n Heat of melting at the melting temperature of the solute [J/mol]\n x : float, optional\n Mole fraction of the solute [-]\n M : float, optional\n Molality [mol/kg]\n MW: float, optional\n Molecular weight of the solvent [g/mol]\n\n Returns\n -------\n dTm : float\n Freezing point depression [K]\n\n Notes\n -----\n MW is the molecular weight of the solvent. M is the molality of the solute.\n\n Examples\n --------\n From [1]_, matching example.\n\n >>> Tm_depression_eutectic(353.35, 19110, .02)\n 1.0864594900639515\n\n References\n ----------\n .. [1] Gmehling, Jurgen. Chemical Thermodynamics: For Process Simulation.\n Weinheim, Germany: Wiley-VCH, 2012."}
{"_id": "q_5322", "text": "r'''Calculates saturation liquid volume, using Rackett CSP method and\n critical properties.\n\n The molar volume of a liquid is given by:\n\n .. math::\n V_s = \\frac{RT_c}{P_c}{Z_c}^{[1+(1-{T/T_c})^{2/7} ]}\n\n Units are all currently in m^3/mol - this can be changed to kg/m^3\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Pc : float\n Critical pressure of fluid [Pa]\n Zc : float\n Critical compressibility of fluid, [-]\n\n Returns\n -------\n Vs : float\n Saturation liquid volume, [m^3/mol]\n\n Notes\n -----\n Units are dependent on gas constant R, imported from scipy\n According to Reid et. al, underpredicts volume for compounds with Zc < 0.22\n\n Examples\n --------\n Propane, example from the API Handbook\n\n >>> Vm_to_rho(Rackett(272.03889, 369.83, 4248000.0, 0.2763), 44.09562)\n 531.3223212651092\n\n References\n ----------\n .. [1] Rackett, Harold G. \"Equation of State for Saturated Liquids.\"\n Journal of Chemical & Engineering Data 15, no. 4 (1970): 514-517.\n doi:10.1021/je60047a012"}
{"_id": "q_5323", "text": "r'''Calculates saturation liquid volume, using Yamada and Gunn CSP method\n and a chemical's critical properties and acentric factor.\n\n The molar volume of a liquid is given by:\n\n .. math::\n V_s = \\frac{RT_c}{P_c}{(0.29056-0.08775\\omega)}^{[1+(1-{T/T_c})^{2/7}]}\n\n Units are in m^3/mol.\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Pc : float\n Critical pressure of fluid [Pa]\n omega : float\n Acentric factor for fluid, [-]\n\n Returns\n -------\n Vs : float\n saturation liquid volume, [m^3/mol]\n\n Notes\n -----\n This equation is an improvement on the Rackett equation.\n This is often presented as the Rackett equation.\n The acentric factor is used here, instead of the critical compressibility\n A variant using a reference fluid also exists\n\n Examples\n --------\n >>> Yamada_Gunn(300, 647.14, 22048320.0, 0.245)\n 2.1882836429895796e-05\n\n References\n ----------\n .. [1] Gunn, R. D., and Tomoyoshi Yamada. \"A Corresponding States\n Correlation of Saturated Liquid Volumes.\" AIChE Journal 17, no. 6\n (1971): 1341-45. doi:10.1002/aic.690170613\n .. [2] Yamada, Tomoyoshi, and Robert D. Gunn. \"Saturated Liquid Molar\n Volumes. Rackett Equation.\" Journal of Chemical & Engineering Data 18,\n no. 2 (1973): 234-36. doi:10.1021/je60057a006"}
{"_id": "q_5324", "text": "r'''Calculates saturation liquid density, using the Townsend and Hales\n CSP method as modified from the original Riedel equation. Uses\n chemical critical volume and temperature, as well as acentric factor\n\n The density of a liquid is given by:\n\n .. math::\n Vs = V_c/\\left(1+0.85(1-T_r)+(1.692+0.986\\omega)(1-T_r)^{1/3}\\right)\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Vc : float\n Critical volume of fluid [m^3/mol]\n omega : float\n Acentric factor for fluid, [-]\n\n Returns\n -------\n Vs : float\n Saturation liquid volume, [m^3/mol]\n\n Notes\n -----\n The requirement for critical volume and acentric factor requires all data.\n\n Examples\n --------\n >>> Townsend_Hales(300, 647.14, 55.95E-6, 0.3449)\n 1.8007361992619923e-05\n\n References\n ----------\n .. [1] Hales, J. L, and R Townsend. \"Liquid Densities from 293 to 490 K of\n Nine Aromatic Hydrocarbons.\" The Journal of Chemical Thermodynamics\n 4, no. 5 (1972): 763-72. doi:10.1016/0021-9614(72)90050-X"}
{"_id": "q_5325", "text": "r'''Calculate saturation liquid density using the COSTALD CSP method.\n\n A popular and accurate estimation method. If possible, fit parameters are\n used; alternatively critical properties work well.\n\n The density of a liquid is given by:\n\n .. math::\n V_s=V^*V^{(0)}[1-\\omega_{SRK}V^{(\\delta)}]\n\n V^{(0)}=1-1.52816(1-T_r)^{1/3}+1.43907(1-T_r)^{2/3}\n - 0.81446(1-T_r)+0.190454(1-T_r)^{4/3}\n\n V^{(\\delta)}=\\frac{-0.296123+0.386914T_r-0.0427258T_r^2-0.0480645T_r^3}\n {T_r-1.00001}\n\n Units are that of critical or fit constant volume.\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Vc : float\n Critical volume of fluid [m^3/mol].\n This parameter is alternatively a fit parameter\n omega : float\n (ideally SRK) Acentric factor for fluid, [-]\n This parameter is alternatively a fit parameter.\n\n Returns\n -------\n Vs : float\n Saturation liquid volume\n\n Notes\n -----\n 196 constants are fit to this function in [1]_.\n Range: 0.25 < Tr < 0.95, often said to be to 1.0\n\n This function has been checked with the API handbook example problem.\n\n Examples\n --------\n Propane, from an example in the API Handbook\n\n >>> Vm_to_rho(COSTALD(272.03889, 369.83333, 0.20008161E-3, 0.1532), 44.097)\n 530.3009967969841\n\n\n References\n ----------\n .. [1] Hankinson, Risdon W., and George H. Thomson. \"A New Correlation for\n Saturated Densities of Liquids and Their Mixtures.\" AIChE Journal\n 25, no. 4 (1979): 653-663. doi:10.1002/aic.690250412"}
{"_id": "q_5326", "text": "r'''Calculate mixture liquid density using the Amgat mixing rule.\n Highly inaccurate, but easy to use. Assumes ideal liquids with\n no excess volume. Average molecular weight should be used with it to obtain\n density.\n\n .. math::\n V_{mix} = \\sum_i x_i V_i\n\n or in terms of density:\n\n .. math::\n\n \\rho_{mix} = \\sum\\frac{x_i}{\\rho_i}\n\n Parameters\n ----------\n xs : array\n Mole fractions of each component, []\n Vms : array\n Molar volumes of each fluid at conditions [m^3/mol]\n\n Returns\n -------\n Vm : float\n Mixture liquid volume [m^3/mol]\n\n Notes\n -----\n Units are that of the given volumes.\n It has been suggested to use this equation with weight fractions,\n but the results have been less accurate.\n\n Examples\n --------\n >>> Amgat([0.5, 0.5], [4.057e-05, 5.861e-05])\n 4.9590000000000005e-05"}
{"_id": "q_5327", "text": "r'''Calculate mixture liquid density using the COSTALD CSP method.\n\n A popular and accurate estimation method. If possible, fit parameters are\n used; alternatively critical properties work well.\n\n The mixing rules giving parameters for the pure component COSTALD\n equation are:\n\n .. math::\n T_{cm} = \\frac{\\sum_i\\sum_j x_i x_j (V_{ij}T_{cij})}{V_m}\n\n V_m = 0.25\\left[ \\sum x_i V_i + 3(\\sum x_i V_i^{2/3})(\\sum_i x_i V_i^{1/3})\\right]\n\n V_{ij}T_{cij} = (V_iT_{ci}V_{j}T_{cj})^{0.5}\n\n \\omega = \\sum_i z_i \\omega_i\n\n Parameters\n ----------\n xs: list\n Mole fractions of each component\n T : float\n Temperature of fluid [K]\n Tcs : list\n Critical temperature of fluids [K]\n Vcs : list\n Critical volumes of fluids [m^3/mol].\n This parameter is alternatively a fit parameter\n omegas : list\n (ideally SRK) Acentric factor of all fluids, [-]\n This parameter is alternatively a fit parameter.\n\n Returns\n -------\n Vs : float\n Saturation liquid mixture volume\n\n Notes\n -----\n Range: 0.25 < Tr < 0.95, often said to be to 1.0\n No example has been found.\n Units are that of critical or fit constant volume.\n\n Examples\n --------\n >>> COSTALD_mixture([0.4576, 0.5424], 298., [512.58, 647.29],[0.000117, 5.6e-05], [0.559,0.344] )\n 2.706588773271354e-05\n\n References\n ----------\n .. [1] Hankinson, Risdon W., and George H. Thomson. \"A New Correlation for\n Saturated Densities of Liquids and Their Mixtures.\" AIChE Journal\n 25, no. 4 (1979): 653-663. doi:10.1002/aic.690250412"}
{"_id": "q_5328", "text": "r'''Method to calculate molar volume of a liquid mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n Vm : float\n Molar volume of the liquid mixture at the given conditions, \n [m^3/mol]"}
{"_id": "q_5329", "text": "r'''Method to calculate pressure-dependent gas molar volume at\n temperature `T` and pressure `P` with a given method.\n\n This method has no exception handling; see `TP_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate molar volume, [K]\n P : float\n Pressure at which to calculate molar volume, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n Vm : float\n Molar volume of the gas at T and P, [m^3/mol]"}
{"_id": "q_5330", "text": "r'''Method to calculate the molar volume of a solid at tempearture `T`\n with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate molar volume, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n Vms : float\n Molar volume of the solid at T, [m^3/mol]"}
{"_id": "q_5331", "text": "r'''Looks up the legal status of a chemical according to either a specifc\n method or with all methods.\n\n Returns either the status as a string for a specified method, or the\n status of the chemical in all available data sources, in the format\n {source: status}.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n status : str or dict\n Legal status information [-]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain legal status with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n legal_status_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the legal status for the desired chemical, and will return methods\n instead of the status\n CASi : int, optional\n CASRN as an integer, used internally [-]\n\n Notes\n -----\n\n Supported methods are:\n\n * **DSL**: Canada Domestic Substance List, [1]_. As extracted on Feb 11, 2015\n from a html list. This list is updated continuously, so this version\n will always be somewhat old. Strictly speaking, there are multiple\n lists but they are all bundled together here. A chemical may be\n 'Listed', or be on the 'Non-Domestic Substances List (NDSL)',\n or be on the list of substances with 'Significant New Activity (SNAc)',\n or be on the DSL but with a 'Ministerial Condition pertaining to this\n substance', or have been removed from the DSL, or have had a\n Ministerial prohibition for the substance.\n * **TSCA**: USA EPA Toxic Substances Control Act Chemical Inventory, [2]_.\n This list is as extracted on 2016-01. It is believed this list is\n updated on a periodic basis (> 6 month). A chemical may simply be\n 'Listed', or may have certain flags attached to it. All these flags\n are described in the dict TSCA_flags.\n * **EINECS**: European INventory of Existing Commercial chemical\n Substances, [3]_. As extracted from a spreadsheet dynamically\n generated at [1]_. This list was obtained March 2015; a more recent\n revision already exists.\n * **NLP**: No Longer Polymers, a list of chemicals with special\n regulatory exemptions in EINECS. Also described at [3]_.\n * **SPIN**: Substances Prepared in Nordic Countries. Also a boolean\n data type. Retrieved 2015-03 from [4]_.\n\n Other methods which could be added are:\n\n * Australia: AICS Australian Inventory of Chemical Substances\n * China: Inventory of Existing Chemical Substances Produced or Imported\n in China (IECSC)\n * Europe: REACH List of Registered Substances\n * India: List of Hazardous Chemicals\n * Japan: ENCS: Inventory of existing and new chemical substances\n * Korea: Existing Chemicals Inventory (KECI)\n * Mexico: INSQ National Inventory of Chemical Substances in Mexico\n * New Zealand: Inventory of Chemicals (NZIoC)\n * Philippines: PICCS Philippines Inventory of Chemicals and Chemical\n Substances\n\n Examples\n --------\n >>> pprint(legal_status('64-17-5'))\n {'DSL': 'LISTED',\n 'EINECS': 'LISTED',\n 'NLP': 'UNLISTED',\n 'SPIN': 'LISTED',\n 'TSCA': 'LISTED'}\n\n References\n ----------\n .. [1] Government of Canada.. \"Substances Lists\" Feb 11, 2015.\n https://www.ec.gc.ca/subsnouvelles-newsubs/default.asp?n=47F768FE-1.\n .. [2] US EPA. \"TSCA Chemical Substance Inventory.\" Accessed April 2016.\n https://www.epa.gov/tsca-inventory.\n .. [3] ECHA. \"EC Inventory\". Accessed March 2015.\n http://echa.europa.eu/information-on-chemicals/ec-inventory.\n .. [4] SPIN. \"SPIN Substances in Products In Nordic Countries.\" Accessed\n March 2015. http://195.215.202.233/DotNetNuke/default.aspx."}
{"_id": "q_5332", "text": "Look up the economic status of a chemical.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface.\n\n >>> pprint(economic_status(CASRN='98-00-0'))\n [\"US public: {'Manufactured': 0.0, 'Imported': 10272.711, 'Exported': 184.127}\",\n u'10,000 - 100,000 tonnes per annum',\n 'OECD HPV Chemicals']\n\n >>> economic_status(CASRN='13775-50-3') # SODIUM SESQUISULPHATE\n []\n >>> economic_status(CASRN='98-00-0', Method='OECD high production volume chemicals')\n 'OECD HPV Chemicals'\n >>> economic_status(CASRN='98-01-1', Method='European Chemicals Agency Total Tonnage Bands')\n [u'10,000 - 100,000 tonnes per annum']"}
{"_id": "q_5333", "text": "Method to compute all available properties with the Joback method;\n returns their results as a dict. For the tempearture dependent values\n Cpig and mul, both the coefficients and objects to perform calculations\n are returned."}
{"_id": "q_5334", "text": "r'''This function handles the retrieval of a chemical's conductivity.\n Lookup is based on CASRNs. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Function has data for approximately 100 chemicals.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n kappa : float\n Electrical conductivity of the fluid, [S/m]\n T : float, only returned if full_info == True\n Temperature at which conductivity measurement was made\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain RI with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n conductivity_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n conductivity for the desired chemical, and will return methods instead\n of conductivity\n full_info : bool, optional\n If True, function will return the temperature at which the conductivity\n reading was made\n\n Notes\n -----\n Only one source is available in this function. It is:\n\n * 'LANGE_COND' which is from Lange's Handbook, Table 8.34 Electrical \n Conductivity of Various Pure Liquids', a compillation of data in [1]_.\n\n Examples\n --------\n >>> conductivity('7732-18-5')\n (4e-06, 291.15)\n\n References\n ----------\n .. [1] Speight, James. Lange's Handbook of Chemistry. 16 edition.\n McGraw-Hill Professional, 2005."}
{"_id": "q_5335", "text": "Helper method for balance_ions for the proportional family of methods. \n See balance_ions for a description of the methods; parameters are fairly\n obvious."}
{"_id": "q_5336", "text": "r'''Method to calculate permittivity of a liquid at temperature `T`\n with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate relative permittivity, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n epsilon : float\n Relative permittivity of the liquid at T, [-]"}
{"_id": "q_5337", "text": "Data is stored in the format\n InChI key\\tbool bool bool \\tsubgroup count ...\\tsubgroup count \\tsubgroup count...\n where the bools refer to whether or not the original UNIFAC, modified\n UNIFAC, and PSRK group assignments were completed correctly.\n The subgroups and their count have an indefinite length."}
{"_id": "q_5338", "text": "r'''This function handles the retrieval of a chemical's dipole moment.\n Lookup is based on CASRNs. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Prefered source is 'CCCBDB'. Considerable variation in reported data has\n found.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n dipole : float\n Dipole moment, [debye]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain dipole moment with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'CCCBDB', 'MULLER', or\n 'POLING'. All valid values are also held in the list `dipole_methods`.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the dipole moment for the desired chemical, and will return methods\n instead of the dipole moment\n\n Notes\n -----\n A total of three sources are available for this function. They are:\n\n * 'CCCBDB', a series of critically evaluated data for compounds in\n [1]_, intended for use in predictive modeling.\n * 'MULLER', a collection of data in a\n group-contribution scheme in [2]_.\n * 'POLING', in the appendix in [3].\n \n This function returns dipole moment in units of Debye. This is actually\n a non-SI unit; to convert to SI, multiply by 3.33564095198e-30 and its\n units will be in ampere*second^2 or equivalently and more commonly given,\n coulomb*second. The constant is the result of 1E-21/c, where c is the\n speed of light.\n \n Examples\n --------\n >>> dipole_moment(CASRN='64-17-5')\n 1.44\n\n References\n ----------\n .. [1] NIST Computational Chemistry Comparison and Benchmark Database\n NIST Standard Reference Database Number 101 Release 17b, September 2015,\n Editor: Russell D. Johnson III http://cccbdb.nist.gov/\n .. [2] Muller, Karsten, Liudmila Mokrushina, and Wolfgang Arlt. \"Second-\n Order Group Contribution Method for the Determination of the Dipole\n Moment.\" Journal of Chemical & Engineering Data 57, no. 4 (April 12,\n 2012): 1231-36. doi:10.1021/je2013395.\n .. [3] Poling, Bruce E. The Properties of Gases and Liquids. 5th edition.\n New York: McGraw-Hill Professional, 2000."}
{"_id": "q_5339", "text": "r'''This function handles the retrieval of a chemical's critical\n pressure. Lookup is based on CASRNs. Will automatically select a data\n source to use if no Method is provided; returns None if the data is not\n available.\n\n Prefered sources are 'IUPAC' for organic chemicals, and 'MATTHEWS' for \n inorganic chemicals. Function has data for approximately 1000 chemicals.\n\n Examples\n --------\n >>> Pc(CASRN='64-17-5')\n 6137000.0\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Pc : float\n Critical pressure, [Pa]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Pc with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'IUPAC', 'MATTHEWS', \n 'CRC', 'PSRK', 'PD', 'YAWS', and 'SURF'. All valid values are also held \n in the list `Pc_methods`.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Pc for the desired chemical, and will return methods instead of Pc\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of seven sources are available for this function. They are:\n\n * 'IUPAC', a series of critically evaluated\n experimental datum for organic compounds in [1]_, [2]_, [3]_, [4]_,\n [5]_, [6]_, [7]_, [8]_, [9]_, [10]_, [11]_, and [12]_.\n * 'MATTHEWS', a series of critically\n evaluated data for inorganic compounds in [13]_.\n * 'CRC', a compillation of critically\n evaluated data by the TRC as published in [14]_.\n * 'PSRK', a compillation of experimental and\n estimated data published in [15]_.\n * 'PD', an older compillation of\n data published in [16]_\n * 'YAWS', a large compillation of data from a\n variety of sources; no data points are sourced in the work of [17]_.\n * SURF', an estimation method using a\n simple quadratic method for estimating Pc from Tc and Vc. This is\n ignored and not returned as a method by default.\n\n References\n ----------\n .. [1] Ambrose, Douglas, and Colin L. Young. \"Vapor-Liquid Critical\n Properties of Elements and Compounds. 1. An Introductory Survey.\"\n Journal of Chemical & Engineering Data 41, no. 1 (January 1, 1996):\n 154-154. doi:10.1021/je950378q.\n .. [2] Ambrose, Douglas, and Constantine Tsonopoulos. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 2. Normal Alkanes.\"\n Journal of Chemical & Engineering Data 40, no. 3 (May 1, 1995): 531-46.\n doi:10.1021/je00019a001.\n .. [3] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 3. Aromatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 40, no. 3\n (May 1, 1995): 547-58. doi:10.1021/je00019a002.\n .. [4] Gude, Michael, and Amyn S. Teja. \"Vapor-Liquid Critical Properties\n of Elements and Compounds. 4. Aliphatic Alkanols.\" Journal of Chemical\n & Engineering Data 40, no. 5 (September 1, 1995): 1025-36.\n doi:10.1021/je00021a001.\n .. [5] Daubert, Thomas E. \"Vapor-Liquid Critical Properties of Elements\n and Compounds. 5. Branched Alkanes and Cycloalkanes.\" Journal of\n Chemical & Engineering Data 41, no. 3 (January 1, 1996): 365-72.\n doi:10.1021/je9501548.\n .. [6] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 6. Unsaturated Aliphatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 41, no. 4\n (January 1, 1996): 645-56. doi:10.1021/je9501999.\n .. [7] Kudchadker, Arvind P., Douglas Ambrose, and Constantine Tsonopoulos.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 7. Oxygen\n Compounds Other Than Alkanols and Cycloalkanols.\" Journal of Chemical &\n Engineering Data 46, no. 3 (May 1, 2001): 457-79. doi:10.1021/je0001680.\n .. [8] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 8. Organic Sulfur,\n Silicon, and Tin Compounds (C + H + S, Si, and Sn).\" Journal of Chemical\n & Engineering Data 46, no. 3 (May 1, 2001): 480-85.\n doi:10.1021/je000210r.\n .. [9] Marsh, Kenneth N., Colin L. Young, David W. Morton, Douglas Ambrose,\n and Constantine Tsonopoulos. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 9. Organic Compounds Containing Nitrogen.\"\n Journal of Chemical & Engineering Data 51, no. 2 (March 1, 2006):\n 305-14. doi:10.1021/je050221q.\n .. [10] Marsh, Kenneth N., Alan Abramson, Douglas Ambrose, David W. Morton,\n Eugene Nikitin, Constantine Tsonopoulos, and Colin L. Young.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 10. Organic\n Compounds Containing Halogens.\" Journal of Chemical & Engineering Data\n 52, no. 5 (September 1, 2007): 1509-38. doi:10.1021/je700336g.\n .. [11] Ambrose, Douglas, Constantine Tsonopoulos, and Eugene D. Nikitin.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 11. Organic\n Compounds Containing B + O; Halogens + N, + O, + O + S, + S, + Si;\n N + O; and O + S, + Si.\" Journal of Chemical & Engineering Data 54,\n no. 3 (March 12, 2009): 669-89. doi:10.1021/je800580z.\n .. [12] Ambrose, Douglas, Constantine Tsonopoulos, Eugene D. Nikitin, David\n W. Morton, and Kenneth N. Marsh. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 12. Review of Recent Data for Hydrocarbons and\n Non-Hydrocarbons.\" Journal of Chemical & Engineering Data, October 5,\n 2015, 151005081500002. doi:10.1021/acs.jced.5b00571.\n .. [13] Mathews, Joseph F. \"Critical Constants of Inorganic Substances.\"\n Chemical Reviews 72, no. 1 (February 1, 1972): 71-100.\n doi:10.1021/cr60275a004.\n .. [14] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.\n .. [15] Horstmann, Sven, Anna Jab\u0142oniec, J\u00f6rg Krafczyk, Kai Fischer, and\n J\u00fcrgen Gmehling. \"PSRK Group Contribution Equation of State:\n Comprehensive Revision and Extension IV, Including Critical Constants\n and \u0391-Function Parameters for 1000 Components.\" Fluid Phase Equilibria\n 227, no. 2 (January 25, 2005): 157-64. doi:10.1016/j.fluid.2004.11.002.\n .. [16] Passut, Charles A., and Ronald P. Danner. \"Acentric Factor. A\n Valuable Correlating Parameter for the Properties of Hydrocarbons.\"\n Industrial & Engineering Chemistry Process Design and Development 12,\n no. 3 (July 1, 1973): 365\u201368. doi:10.1021/i260047a026.\n .. [17] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."}
{"_id": "q_5340", "text": "r'''This function handles the retrieval of a chemical's critical\n volume. Lookup is based on CASRNs. Will automatically select a data\n source to use if no Method is provided; returns None if the data is not\n available.\n\n Prefered sources are 'IUPAC' for organic chemicals, and 'MATTHEWS' for \n inorganic chemicals. Function has data for approximately 1000 chemicals.\n\n Examples\n --------\n >>> Vc(CASRN='64-17-5')\n 0.000168\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Vc : float\n Critical volume, [m^3/mol]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Vc with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'IUPAC', 'MATTHEWS', \n 'CRC', 'PSRK', 'YAWS', and 'SURF'. All valid values are also held \n in the list `Vc_methods`.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Vc for the desired chemical, and will return methods instead of Vc\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of six sources are available for this function. They are:\n\n * 'IUPAC', a series of critically evaluated\n experimental datum for organic compounds in [1]_, [2]_, [3]_, [4]_,\n [5]_, [6]_, [7]_, [8]_, [9]_, [10]_, [11]_, and [12]_.\n * 'MATTHEWS', a series of critically\n evaluated data for inorganic compounds in [13]_.\n * 'CRC', a compillation of critically\n evaluated data by the TRC as published in [14]_.\n * 'PSRK', a compillation of experimental and\n estimated data published in [15]_.\n * 'YAWS', a large compillation of data from a\n variety of sources; no data points are sourced in the work of [16]_.\n * 'SURF', an estimation method using a\n simple quadratic method for estimating Pc from Tc and Vc. This is\n ignored and not returned as a method by default\n\n References\n ----------\n .. [1] Ambrose, Douglas, and Colin L. Young. \"Vapor-Liquid Critical\n Properties of Elements and Compounds. 1. An Introductory Survey.\"\n Journal of Chemical & Engineering Data 41, no. 1 (January 1, 1996):\n 154-154. doi:10.1021/je950378q.\n .. [2] Ambrose, Douglas, and Constantine Tsonopoulos. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 2. Normal Alkanes.\"\n Journal of Chemical & Engineering Data 40, no. 3 (May 1, 1995): 531-46.\n doi:10.1021/je00019a001.\n .. [3] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 3. Aromatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 40, no. 3\n (May 1, 1995): 547-58. doi:10.1021/je00019a002.\n .. [4] Gude, Michael, and Amyn S. Teja. \"Vapor-Liquid Critical Properties\n of Elements and Compounds. 4. Aliphatic Alkanols.\" Journal of Chemical\n & Engineering Data 40, no. 5 (September 1, 1995): 1025-36.\n doi:10.1021/je00021a001.\n .. [5] Daubert, Thomas E. \"Vapor-Liquid Critical Properties of Elements\n and Compounds. 5. Branched Alkanes and Cycloalkanes.\" Journal of\n Chemical & Engineering Data 41, no. 3 (January 1, 1996): 365-72.\n doi:10.1021/je9501548.\n .. [6] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 6. Unsaturated Aliphatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 41, no. 4\n (January 1, 1996): 645-56. doi:10.1021/je9501999.\n .. [7] Kudchadker, Arvind P., Douglas Ambrose, and Constantine Tsonopoulos.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 7. Oxygen\n Compounds Other Than Alkanols and Cycloalkanols.\" Journal of Chemical &\n Engineering Data 46, no. 3 (May 1, 2001): 457-79. doi:10.1021/je0001680.\n .. [8] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 8. Organic Sulfur,\n Silicon, and Tin Compounds (C + H + S, Si, and Sn).\" Journal of Chemical\n & Engineering Data 46, no. 3 (May 1, 2001): 480-85.\n doi:10.1021/je000210r.\n .. [9] Marsh, Kenneth N., Colin L. Young, David W. Morton, Douglas Ambrose,\n and Constantine Tsonopoulos. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 9. Organic Compounds Containing Nitrogen.\"\n Journal of Chemical & Engineering Data 51, no. 2 (March 1, 2006):\n 305-14. doi:10.1021/je050221q.\n .. [10] Marsh, Kenneth N., Alan Abramson, Douglas Ambrose, David W. Morton,\n Eugene Nikitin, Constantine Tsonopoulos, and Colin L. Young.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 10. Organic\n Compounds Containing Halogens.\" Journal of Chemical & Engineering Data\n 52, no. 5 (September 1, 2007): 1509-38. doi:10.1021/je700336g.\n .. [11] Ambrose, Douglas, Constantine Tsonopoulos, and Eugene D. Nikitin.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 11. Organic\n Compounds Containing B + O; Halogens + N, + O, + O + S, + S, + Si;\n N + O; and O + S, + Si.\" Journal of Chemical & Engineering Data 54,\n no. 3 (March 12, 2009): 669-89. doi:10.1021/je800580z.\n .. [12] Ambrose, Douglas, Constantine Tsonopoulos, Eugene D. Nikitin, David\n W. Morton, and Kenneth N. Marsh. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 12. Review of Recent Data for Hydrocarbons and\n Non-Hydrocarbons.\" Journal of Chemical & Engineering Data, October 5,\n 2015, 151005081500002. doi:10.1021/acs.jced.5b00571.\n .. [13] Mathews, Joseph F. \"Critical Constants of Inorganic Substances.\"\n Chemical Reviews 72, no. 1 (February 1, 1972): 71-100.\n doi:10.1021/cr60275a004.\n .. [14] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.\n .. [15] Horstmann, Sven, Anna Jab\u0142oniec, J\u00f6rg Krafczyk, Kai Fischer, and\n J\u00fcrgen Gmehling. \"PSRK Group Contribution Equation of State:\n Comprehensive Revision and Extension IV, Including Critical Constants\n and \u0391-Function Parameters for 1000 Components.\" Fluid Phase Equilibria\n 227, no. 2 (January 25, 2005): 157-64. doi:10.1016/j.fluid.2004.11.002.\n .. [16] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."}
{"_id": "q_5341", "text": "r'''This function handles the retrieval of a chemical's critical\n compressibility. Lookup is based on CASRNs. Will automatically select a\n data source to use if no Method is provided; returns None if the data is\n not available.\n\n Prefered sources are 'IUPAC' for organic chemicals, and 'MATTHEWS' for \n inorganic chemicals. Function has data for approximately 1000 chemicals.\n\n Examples\n --------\n >>> Zc(CASRN='64-17-5')\n 0.24100000000000002\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Zc : float\n Critical compressibility, [-]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Vc with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'IUPAC', 'MATTHEWS', \n 'CRC', 'PSRK', 'YAWS', and 'COMBINED'. All valid values are also held \n in `Zc_methods`.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Zc for the desired chemical, and will return methods instead of Zc\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of five sources are available for this function. They are:\n\n * 'IUPAC', a series of critically evaluated\n experimental datum for organic compounds in [1]_, [2]_, [3]_, [4]_,\n [5]_, [6]_, [7]_, [8]_, [9]_, [10]_, [11]_, and [12]_.\n * 'MATTHEWS', a series of critically\n evaluated data for inorganic compounds in [13]_.\n * 'CRC', a compillation of critically\n evaluated data by the TRC as published in [14]_.\n * 'PSRK', a compillation of experimental and\n estimated data published in [15]_.\n * 'YAWS', a large compillation of data from a\n variety of sources; no data points are sourced in the work of [16]_.\n\n References\n ----------\n .. [1] Ambrose, Douglas, and Colin L. Young. \"Vapor-Liquid Critical\n Properties of Elements and Compounds. 1. An Introductory Survey.\"\n Journal of Chemical & Engineering Data 41, no. 1 (January 1, 1996):\n 154-154. doi:10.1021/je950378q.\n .. [2] Ambrose, Douglas, and Constantine Tsonopoulos. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 2. Normal Alkanes.\"\n Journal of Chemical & Engineering Data 40, no. 3 (May 1, 1995): 531-46.\n doi:10.1021/je00019a001.\n .. [3] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 3. Aromatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 40, no. 3\n (May 1, 1995): 547-58. doi:10.1021/je00019a002.\n .. [4] Gude, Michael, and Amyn S. Teja. \"Vapor-Liquid Critical Properties\n of Elements and Compounds. 4. Aliphatic Alkanols.\" Journal of Chemical\n & Engineering Data 40, no. 5 (September 1, 1995): 1025-36.\n doi:10.1021/je00021a001.\n .. [5] Daubert, Thomas E. \"Vapor-Liquid Critical Properties of Elements\n and Compounds. 5. Branched Alkanes and Cycloalkanes.\" Journal of\n Chemical & Engineering Data 41, no. 3 (January 1, 1996): 365-72.\n doi:10.1021/je9501548.\n .. [6] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 6. Unsaturated Aliphatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 41, no. 4\n (January 1, 1996): 645-56. doi:10.1021/je9501999.\n .. [7] Kudchadker, Arvind P., Douglas Ambrose, and Constantine Tsonopoulos.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 7. Oxygen\n Compounds Other Than Alkanols and Cycloalkanols.\" Journal of Chemical &\n Engineering Data 46, no. 3 (May 1, 2001): 457-79. doi:10.1021/je0001680.\n .. [8] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 8. Organic Sulfur,\n Silicon, and Tin Compounds (C + H + S, Si, and Sn).\" Journal of Chemical\n & Engineering Data 46, no. 3 (May 1, 2001): 480-85.\n doi:10.1021/je000210r.\n .. [9] Marsh, Kenneth N., Colin L. Young, David W. Morton, Douglas Ambrose,\n and Constantine Tsonopoulos. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 9. Organic Compounds Containing Nitrogen.\"\n Journal of Chemical & Engineering Data 51, no. 2 (March 1, 2006):\n 305-14. doi:10.1021/je050221q.\n .. [10] Marsh, Kenneth N., Alan Abramson, Douglas Ambrose, David W. Morton,\n Eugene Nikitin, Constantine Tsonopoulos, and Colin L. Young.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 10. Organic\n Compounds Containing Halogens.\" Journal of Chemical & Engineering Data\n 52, no. 5 (September 1, 2007): 1509-38. doi:10.1021/je700336g.\n .. [11] Ambrose, Douglas, Constantine Tsonopoulos, and Eugene D. Nikitin.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 11. Organic\n Compounds Containing B + O; Halogens + N, + O, + O + S, + S, + Si;\n N + O; and O + S, + Si.\" Journal of Chemical & Engineering Data 54,\n no. 3 (March 12, 2009): 669-89. doi:10.1021/je800580z.\n .. [12] Ambrose, Douglas, Constantine Tsonopoulos, Eugene D. Nikitin, David\n W. Morton, and Kenneth N. Marsh. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 12. Review of Recent Data for Hydrocarbons and\n Non-Hydrocarbons.\" Journal of Chemical & Engineering Data, October 5,\n 2015, 151005081500002. doi:10.1021/acs.jced.5b00571.\n .. [13] Mathews, Joseph F. \"Critical Constants of Inorganic Substances.\"\n Chemical Reviews 72, no. 1 (February 1, 1972): 71-100.\n doi:10.1021/cr60275a004.\n .. [14] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.\n .. [15] Horstmann, Sven, Anna Jab\u0142oniec, J\u00f6rg Krafczyk, Kai Fischer, and\n J\u00fcrgen Gmehling. \"PSRK Group Contribution Equation of State:\n Comprehensive Revision and Extension IV, Including Critical Constants\n and \u0391-Function Parameters for 1000 Components.\" Fluid Phase Equilibria\n 227, no. 2 (January 25, 2005): 157-64. doi:10.1016/j.fluid.2004.11.002.\n .. [16] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."}
{"_id": "q_5342", "text": "r'''Function for calculating a critical property of a substance from its\n other two critical properties. Calls functions Ihmels, Meissner, and\n Grigoras, each of which use a general 'Critical surface' type of equation.\n Limited accuracy is expected due to very limited theoretical backing.\n\n Parameters\n ----------\n Tc : float\n Critical temperature of fluid (optional) [K]\n Pc : float\n Critical pressure of fluid (optional) [Pa]\n Vc : float\n Critical volume of fluid (optional) [m^3/mol]\n AvailableMethods : bool\n Request available methods for given parameters\n Method : string\n Request calculation uses the requested method\n\n Returns\n -------\n Tc, Pc or Vc : float\n Critical property of fluid [K], [Pa], or [m^3/mol]\n\n Notes\n -----\n\n Examples\n --------\n Decamethyltetrasiloxane [141-62-8]\n\n >>> critical_surface(Tc=599.4, Pc=1.19E6, Method='IHMELS')\n 0.0010927333333333334"}
{"_id": "q_5343", "text": "r'''Function for calculating a critical property of a substance from its\n other two critical properties, but retrieving the actual other critical\n values for convenient calculation.\n Calls functions Ihmels, Meissner, and\n Grigoras, each of which use a general 'Critical surface' type of equation.\n Limited accuracy is expected due to very limited theoretical backing.\n\n Parameters\n ----------\n CASRN : string\n The CAS number of the desired chemical\n T : bool\n Estimate critical temperature\n P : bool\n Estimate critical pressure\n V : bool\n Estimate critical volume\n\n Returns\n -------\n Tc, Pc or Vc : float\n Critical property of fluid [K], [Pa], or [m^3/mol]\n\n Notes\n -----\n Avoids recursion only by eliminating the None and critical surface options\n for calculating each critical property. So long as it never calls itself.\n Note that when used by Tc, Pc or Vc, this function results in said function\n calling the other functions (to determine methods) and (with method specified)\n\n Examples\n --------\n >>> # Decamethyltetrasiloxane [141-62-8]\n >>> third_property('141-62-8', V=True)\n 0.0010920041152263375\n\n >>> # Succinic acid 110-15-6\n >>> third_property('110-15-6', P=True)\n 6095016.233766234"}
{"_id": "q_5344", "text": "Checks if a CAS number is valid. Returns False if the parser cannot \n parse the given string.\n\n Parameters\n ----------\n CASRN : string\n A three-piece, dash-separated set of numbers\n\n Returns\n -------\n result : bool\n True if the CASRN was valid; False is also returned if parsing fails.\n\n Notes\n -----\n Check method is according to the Chemical Abstracts Service. However, no lookup\n to their service is performed; therefore, this function cannot detect\n false positives.\n\n Function also does not support additional separators, apart from '-'.\n \n CAS numbers up to the series 1 XXX XXX-XX-X are now being issued.\n \n A long can hold CAS numbers up to 2 147 483-64-7\n\n Examples\n --------\n >>> checkCAS('7732-18-5')\n True\n >>> checkCAS('77332-18-5')\n False"}
{"_id": "q_5345", "text": "Charge of the species as an integer. Computed as a property as most\n species do not have a charge and so storing it would be a waste of \n memory."}
{"_id": "q_5346", "text": "Loads a file with newline-separated integers representing which \n chemicals should be kept in memory; ones not included are ignored."}
{"_id": "q_5347", "text": "r'''This function handles the retrieval or calculation of a chemical's\n Stockmayer parameter. Values are available from one source with lookup\n based on CASRNs, or can be estimated from 7 CSP methods.\n Will automatically select a data source to use if no Method is provided;\n returns None if the data is not available.\n\n Preferred sources are 'Magalh\u00e3es, Lito, Da Silva, and Silva (2013)' for\n common chemicals which had values listed in that source, and the CSP method\n `Tee, Gotoh, and Stewart CSP with Tc, omega (1966)` for chemicals which\n don't.\n\n Examples\n --------\n >>> Stockmayer(CASRN='64-17-5')\n 1291.41\n\n Parameters\n ----------\n Tm : float, optional\n Melting temperature of fluid [K]\n Tb : float, optional\n Boiling temperature of fluid [K]\n Tc : float, optional\n Critical temperature, [K]\n Zc : float, optional\n Critical compressibility, [-]\n omega : float, optional\n Acentric factor of compound, [-]\n CASRN : string, optional\n CASRN [-]\n\n Returns\n -------\n epsilon_k : float\n Lennard-Jones depth of potential-energy minimum over k, [K]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain epsilon with the given\n inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Stockmayer_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n epsilon for the desired chemical, and will return methods instead of\n epsilon\n\n Notes\n -----\n These values are somewhat rough, as they attempt to pigeonhole a chemical\n into L-J behavior.\n\n The tabulated data is from [2]_, for 322 chemicals.\n\n References\n ----------\n .. [1] Bird, R. Byron, Warren E. Stewart, and Edwin N. Lightfoot.\n Transport Phenomena, Revised 2nd Edition. New York:\n John Wiley & Sons, Inc., 2006\n .. [2] Magalh\u00e3es, Ana L., Patr\u00edcia F. Lito, Francisco A. Da Silva, and\n Carlos M. Silva. \"Simple and Accurate Correlations for Diffusion\n Coefficients of Solutes in Liquids and Supercritical Fluids over Wide\n Ranges of Temperature and Density.\" The Journal of Supercritical Fluids\n 76 (April 2013): 94-114. doi:10.1016/j.supflu.2013.02.002."}
{"_id": "q_5348", "text": "r'''This function handles the retrieval or calculation of a chemical's\n L-J molecular diameter. Values are available from one source with lookup\n based on CASRNs, or can be estimated from 9 CSP methods.\n Will automatically select a data source to use if no Method is provided;\n returns None if the data is not available.\n\n Preferred sources are 'Magalh\u00e3es, Lito, Da Silva, and Silva (2013)' for\n common chemicals which had values listed in that source, and the CSP method\n `Tee, Gotoh, and Stewart CSP with Tc, Pc, omega (1966)` for chemicals which\n don't.\n\n Examples\n --------\n >>> molecular_diameter(CASRN='64-17-5')\n 4.23738\n\n Parameters\n ----------\n Tc : float, optional\n Critical temperature, [K]\n Pc : float, optional\n Critical pressure, [Pa]\n Vc : float, optional\n Critical volume, [m^3/mol]\n Zc : float, optional\n Critical compressibility, [-]\n omega : float, optional\n Acentric factor of compound, [-]\n Vm : float, optional\n Molar volume of liquid at the melting point of the fluid [m^3/mol]\n Vb : float, optional\n Molar volume of liquid at the boiling point of the fluid [m^3/mol]\n CASRN : string, optional\n CASRN [-]\n\n Returns\n -------\n sigma : float\n Lennard-Jones molecular diameter, [Angstrom]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain sigma with the given\n inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n molecular_diameter_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n sigma for the desired chemical, and will return methods instead of\n sigma\n\n Notes\n -----\n These values are somewhat rough, as they attempt to pigeonhole a chemical\n into L-J behavior.\n\n The tabulated data is from [2]_, for 322 chemicals.\n\n References\n ----------\n .. [1] Bird, R. Byron, Warren E. Stewart, and Edwin N. Lightfoot.\n Transport Phenomena, Revised 2nd Edition. New York:\n John Wiley & Sons, Inc., 2006\n .. [2] Magalh\u00e3es, Ana L., Patr\u00edcia F. Lito, Francisco A. Da Silva, and\n Carlos M. Silva. \"Simple and Accurate Correlations for Diffusion\n Coefficients of Solutes in Liquids and Supercritical Fluids over Wide\n Ranges of Temperature and Density.\" The Journal of Supercritical Fluids\n 76 (April 2013): 94-114. doi:10.1016/j.supflu.2013.02.002."}
{"_id": "q_5349", "text": "r'''This function handles the retrieval of a chemical's acentric factor,\n `omega`, or its calculation from correlations or directly through the\n definition of acentric factor if possible. Requires a known boiling point,\n critical temperature and pressure for use of the correlations. Requires\n accurate vapor pressure data for direct calculation.\n\n Will automatically select a method to use if no Method is provided;\n returns None if the data is not available and cannot be calculated.\n\n .. math::\n \\omega \\equiv -\\log_{10}\\left[\\lim_{T/T_c=0.7}(P^{sat}/P_c)\\right]-1.0\n\n Examples\n --------\n >>> omega(CASRN='64-17-5')\n 0.635\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n omega : float\n Acentric factor of compound\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain omega with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'PSRK', 'PD', 'YAWS', \n 'LK', and 'DEFINITION'. All valid values are also held in the list\n omega_methods.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n omega for the desired chemical, and will return methods instead of\n omega\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of five sources are available for this function. They are:\n\n * 'PSRK', a compilation of experimental and estimated data published \n in the Appendix of [2]_, the fourth revision of the PSRK model.\n * 'PD', an older compilation of\n data published in (Passut & Danner, 1973) [3]_.\n * 'YAWS', a large compilation of data from a\n variety of sources; no data points are sourced in the work of [4]_.\n * 'LK', an estimation method for hydrocarbons.\n * 'DEFINITION', based on the definition of omega as\n presented in [1]_, using vapor pressure data.\n\n References\n ----------\n .. [1] Pitzer, K. S., D. Z. Lippmann, R. F. Curl, C. M. Huggins, and\n D. E. Petersen: The Volumetric and Thermodynamic Properties of Fluids.\n II. Compressibility Factor, Vapor Pressure and Entropy of Vaporization.\n J. Am. Chem. Soc., 77: 3433 (1955).\n .. [2] Horstmann, Sven, Anna Jab\u0142oniec, J\u00f6rg Krafczyk, Kai Fischer, and\n J\u00fcrgen Gmehling. \"PSRK Group Contribution Equation of State:\n Comprehensive Revision and Extension IV, Including Critical Constants\n and \u0391-Function Parameters for 1000 Components.\" Fluid Phase Equilibria\n 227, no. 2 (January 25, 2005): 157-64. doi:10.1016/j.fluid.2004.11.002.\n .. [3] Passut, Charles A., and Ronald P. Danner. \"Acentric Factor. A\n Valuable Correlating Parameter for the Properties of Hydrocarbons.\"\n Industrial & Engineering Chemistry Process Design and Development 12,\n no. 3 (July 1, 1973): 365-68. doi:10.1021/i260047a026.\n .. [4] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."}
{"_id": "q_5350", "text": "r'''This function handles the calculation of a chemical's Stiel Polar\n factor, directly through the definition of Stiel-polar factor if possible.\n Requires Tc, Pc, acentric factor, and a vapor pressure datum at Tr=0.6.\n\n Will automatically select a method to use if no Method is provided;\n returns None if the data is not available and cannot be calculated.\n\n .. math::\n x = \\log P_r|_{T_r=0.6} + 1.70 \\omega + 1.552\n\n Parameters\n ----------\n Tc : float\n Critical temperature of fluid [K]\n Pc : float\n Critical pressure of fluid [Pa]\n omega : float\n Acentric factor of the fluid [-]\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n factor : float\n Stiel polar factor of compound\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Stiel polar factor with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Only 'DEFINITION' is accepted so far.\n All valid values are also held in the list Stiel_polar_methods.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Stiel-polar factor for the desired chemical, and will return methods\n instead of stiel-polar factor\n\n Notes\n -----\n Only one source is available for this function. It is:\n\n * 'DEFINITION', based on the definition of\n Stiel Polar Factor presented in [1]_, using vapor pressure data.\n\n A few points have also been published in [2]_, which may be used for\n comparison. Currently this is only used for a surface tension correlation.\n\n Examples\n --------\n >>> StielPolar(647.3, 22048321.0, 0.344, CASRN='7732-18-5')\n 0.024581140348734376\n\n References\n ----------\n .. [1] Halm, Roland L., and Leonard I. Stiel. \"A Fourth Parameter for the\n Vapor Pressure and Entropy of Vaporization of Polar Fluids.\" AIChE\n Journal 13, no. 2 (1967): 351-355. doi:10.1002/aic.690130228.\n .. [2] D, Kukoljac Milo\u0161, and Grozdani\u0107 Du\u0161an K. \"New Values of the\n Polarity Factor.\" Journal of the Serbian Chemical Society 65, no. 12\n (January 1, 2000). http://www.shd.org.rs/JSCS/Vol65/No12-Pdf/JSCS12-07.pdf"}
{"_id": "q_5351", "text": "r'''Round a number to the nearest whole number. If the number is exactly\n between two numbers, round to the even whole number. Used by\n `viscosity_index`.\n\n Parameters\n ----------\n i : float\n Number, [-]\n\n Returns\n -------\n i : int\n Rounded number, [-]\n\n Notes\n -----\n Should never run with inputs from a practical function, as floating-point\n numbers are almost never exactly between two whole numbers.\n\n Examples\n --------\n _round_whole_even(116.5)\n 116"}
{"_id": "q_5352", "text": "r'''Method to calculate low-pressure liquid viscosity at temperature\n `T` with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate viscosity, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n mu : float\n Viscosity of the liquid at T and a low pressure, [Pa*s]"}
{"_id": "q_5353", "text": "r'''Method to calculate pressure-dependent liquid viscosity at\n temperature `T` and pressure `P` with a given method.\n\n This method has no exception handling; see `TP_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate viscosity, [K]\n P : float\n Pressure at which to calculate viscosity, [Pa]\n method : str\n Name of the method to use\n\n Returns\n -------\n mu : float\n Viscosity of the liquid at T and P, [Pa*s]"}
{"_id": "q_5354", "text": "r'''Method to calculate viscosity of a liquid mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n mu : float\n Viscosity of the liquid mixture, [Pa*s]"}
{"_id": "q_5355", "text": "r'''Method to calculate viscosity of a gas mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n mu : float\n Viscosity of gas mixture, [Pa*s]"}
{"_id": "q_5356", "text": "This function handles the retrieval of Time-Weighted Average limits on worker\n exposure to dangerous chemicals.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface.\n\n >>> TWA('98-00-0')\n (10.0, 'ppm')\n >>> TWA('1303-00-0')\n (5.0742430905659505e-05, 'ppm')\n >>> TWA('7782-42-5', AvailableMethods=True)\n ['Ontario Limits', 'None']"}
{"_id": "q_5357", "text": "This function handles the retrieval of Ceiling limits on worker\n exposure to dangerous chemicals.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface.\n\n >>> Ceiling('75-07-0')\n (25.0, 'ppm')\n >>> Ceiling('1395-21-7')\n (6e-05, 'mg/m^3')\n >>> Ceiling('7572-29-4', AvailableMethods=True)\n ['Ontario Limits', 'None']"}
{"_id": "q_5358", "text": "r'''Looks up if a chemical is listed as a carcinogen or not according to\n either a specific method or with all methods.\n\n Returns either the status as a string for a specified method, or the\n status of the chemical in all available data sources, in the format\n {source: status}.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n status : str or dict\n Carcinogen status information [-]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain carcinogen status with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Carcinogen_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n if a chemical is listed as carcinogenic, and will return methods\n instead of the status\n\n Notes\n -----\n Supported methods are:\n\n * **IARC**: International Agency for Research on Cancer, [1]_. As\n extracted with a last update of February 22, 2016. Has listing\n information of 843 chemicals with CAS numbers. Chemicals without\n CAS numbers not included here. If two listings for the same CAS\n were available, that closest to the CAS number was used. If two\n listings were available published at different times, the latest\n value was used. All else equal, the most pessimistic value was used.\n * **NTP**: National Toxicology Program, [2]_. Has data on 226\n chemicals.\n\n Examples\n --------\n >>> Carcinogen('61-82-5')\n {'National Toxicology Program 13th Report on Carcinogens': 'Reasonably Anticipated', 'International Agency for Research on Cancer': 'Not classifiable as to its carcinogenicity to humans (3)'}\n\n References\n ----------\n .. [1] International Agency for Research on Cancer. Agents Classified by\n the IARC Monographs, Volumes 1-115. Lyon, France: IARC; 2016 Available\n from: http://monographs.iarc.fr/ENG/Classification/\n .. [2] NTP (National Toxicology Program). 2014. Report on Carcinogens,\n Thirteenth Edition. Research Triangle Park, NC: U.S. Department of\n Health and Human Services, Public Health Service.\n http://ntp.niehs.nih.gov/pubhealth/roc/roc13/"}
{"_id": "q_5359", "text": "r'''This function handles the retrieval or calculation of a chemical's\n autoignition temperature. Lookup is based on CASRNs. No predictive methods\n are currently implemented. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Preferred source is 'IEC 60079-20-1 (2010)' [1]_, with the secondary source\n 'NFPA 497 (2008)' [2]_ having very similar data.\n\n Examples\n --------\n >>> Tautoignition(CASRN='71-43-2')\n 771.15\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Tautoignition : float\n Autoignition point of the chemical, [K]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Tautoignition with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Tautoignition_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Tautoignition for the desired chemical, and will return methods\n instead of Tautoignition\n\n Notes\n -----\n\n References\n ----------\n .. [1] IEC. \u201cIEC 60079-20-1:2010 Explosive atmospheres - Part 20-1:\n Material characteristics for gas and vapour classification - Test\n methods and data.\u201d https://webstore.iec.ch/publication/635. See also\n https://law.resource.org/pub/in/bis/S05/is.iec.60079.20.1.2010.pdf\n .. [2] National Fire Protection Association. NFPA 497: Recommended\n Practice for the Classification of Flammable Liquids, Gases, or Vapors\n and of Hazardous. NFPA, 2008."}
{"_id": "q_5360", "text": "r'''This function handles the retrieval or calculation of a chemical's\n Lower Flammability Limit. Lookup is based on CASRNs. Two predictive methods\n are currently implemented. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Preferred source is 'IEC 60079-20-1 (2010)' [1]_, with the secondary source\n 'NFPA 497 (2008)' [2]_ having very similar data. If the heat of combustion\n is provided, the estimation method `Suzuki_LFL` can be used. If the atoms\n of the molecule are available, the method `Crowl_Louvar_LFL` can be used.\n\n Examples\n --------\n >>> LFL(CASRN='71-43-2')\n 0.012\n\n Parameters\n ----------\n Hc : float, optional\n Heat of combustion of gas [J/mol]\n atoms : dict, optional\n Dictionary of atoms and atom counts\n CASRN : string, optional\n CASRN [-]\n\n Returns\n -------\n LFL : float\n Lower flammability limit of the gas in an atmosphere at STP, [mole fraction]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain LFL with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n LFL_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the Lower Flammability Limit for the desired chemical, and will return\n methods instead of Lower Flammability Limit.\n\n Notes\n -----\n\n References\n ----------\n .. [1] IEC. \u201cIEC 60079-20-1:2010 Explosive atmospheres - Part 20-1:\n Material characteristics for gas and vapour classification - Test\n methods and data.\u201d https://webstore.iec.ch/publication/635. See also\n https://law.resource.org/pub/in/bis/S05/is.iec.60079.20.1.2010.pdf\n .. [2] National Fire Protection Association. NFPA 497: Recommended\n Practice for the Classification of Flammable Liquids, Gases, or Vapors\n and of Hazardous. NFPA, 2008."}
{"_id": "q_5361", "text": "r'''This function handles the retrieval or calculation of a chemical's\n Upper Flammability Limit. Lookup is based on CASRNs. Two predictive methods\n are currently implemented. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Preferred source is 'IEC 60079-20-1 (2010)' [1]_, with the secondary source\n 'NFPA 497 (2008)' [2]_ having very similar data. If the heat of combustion\n is provided, the estimation method `Suzuki_UFL` can be used. If the atoms\n of the molecule are available, the method `Crowl_Louvar_UFL` can be used.\n\n Examples\n --------\n >>> UFL(CASRN='71-43-2')\n 0.086\n\n Parameters\n ----------\n Hc : float, optional\n Heat of combustion of gas [J/mol]\n atoms : dict, optional\n Dictionary of atoms and atom counts\n CASRN : string, optional\n CASRN [-]\n\n Returns\n -------\n UFL : float\n Upper flammability limit of the gas in an atmosphere at STP, [mole fraction]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain UFL with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n UFL_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the Upper Flammability Limit for the desired chemical, and will return\n methods instead of Upper Flammability Limit.\n\n Notes\n -----\n\n References\n ----------\n .. [1] IEC. \u201cIEC 60079-20-1:2010 Explosive atmospheres - Part 20-1:\n Material characteristics for gas and vapour classification - Test\n methods and data.\u201d https://webstore.iec.ch/publication/635. See also\n https://law.resource.org/pub/in/bis/S05/is.iec.60079.20.1.2010.pdf\n .. [2] National Fire Protection Association. NFPA 497: Recommended\n Practice for the Classification of Flammable Liquids, Gases, or Vapors\n and of Hazardous. NFPA, 2008."}
{"_id": "q_5362", "text": "r'''Interface for drawing a 2D image of all the molecules in the\n mixture. Requires an HTML5 browser, and the libraries RDKit and\n IPython. An exception is raised if either of these libraries is\n absent.\n\n Parameters\n ----------\n Hs : bool\n Whether or not to show hydrogen\n\n Examples\n --------\n Mixture(['natural gas']).draw_2d()"}
{"_id": "q_5363", "text": "r'''Calculate a real fluid's Joule Thomson coefficient. The required \n derivative should be calculated with an equation of state, and `Cp` is the\n real fluid value. This can either be calculated with `dV_dT` directly, \n or with `beta` if it is already known.\n\n .. math::\n \\mu_{JT} = \\left(\\frac{\\partial T}{\\partial P}\\right)_H = \\frac{1}{C_p}\n \\left[T \\left(\\frac{\\partial V}{\\partial T}\\right)_P - V\\right]\n = \\frac{V}{C_p}\\left(\\beta T-1\\right)\n \n Parameters\n ----------\n T : float\n Temperature of fluid, [K]\n V : float\n Molar volume of fluid, [m^3/mol]\n Cp : float\n Real fluid heat capacity at constant pressure, [J/mol/K]\n dV_dT : float, optional\n Derivative of `V` with respect to `T`, [m^3/mol/K]\n beta : float, optional\n Isobaric coefficient of thermal expansion, [1/K]\n\n Returns\n -------\n mu_JT : float\n Joule-Thomson coefficient [K/Pa]\n \n Examples\n --------\n Example from [2]_:\n \n >>> Joule_Thomson(T=390, V=0.00229754, Cp=153.235, dV_dT=1.226396e-05)\n 1.621956080529905e-05\n\n References\n ----------\n .. [1] Walas, Stanley M. Phase Equilibria in Chemical Engineering. \n Butterworth-Heinemann, 1985.\n .. [2] Pratt, R. M. \"Thermodynamic Properties Involving Derivatives: Using \n the Peng-Robinson Equation of State.\" Chemical Engineering Education 35,\n no. 2 (March 1, 2001): 112-115."}
{"_id": "q_5364", "text": "r'''Converts a list of mole fractions to mass fractions. Requires molecular\n weights for all species.\n\n .. math::\n w_i = \\frac{z_i MW_i}{MW_{avg}}\n\n MW_{avg} = \\sum_i z_i MW_i\n\n Parameters\n ----------\n zs : iterable\n Mole fractions [-]\n MWs : iterable\n Molecular weights [g/mol]\n\n Returns\n -------\n ws : iterable\n Mass fractions [-]\n\n Notes\n -----\n Does not check that the sums add to one. Does not check that inputs are of\n the same length.\n\n Examples\n --------\n >>> zs_to_ws([0.5, 0.5], [10, 20])\n [0.3333333333333333, 0.6666666666666666]"}
{"_id": "q_5365", "text": "r'''Converts a list of mole fractions to volume fractions. Requires molar\n volumes for all species.\n\n .. math::\n \\text{Vf}_i = \\frac{z_i V_{m,i}}{\\sum_i z_i V_{m,i}}\n\n Parameters\n ----------\n zs : iterable\n Mole fractions [-]\n VMs : iterable\n Molar volumes of species [m^3/mol]\n\n Returns\n -------\n Vfs : list\n Molar volume fractions [-]\n\n Notes\n -----\n Does not check that the sums add to one. Does not check that inputs are of\n the same length.\n\n Molar volumes are specified in terms of pure components only. Function\n works with any phase.\n\n Examples\n --------\n Acetone and benzene example\n\n >>> zs_to_Vfs([0.637, 0.363], [8.0234e-05, 9.543e-05])\n [0.5960229712956298, 0.4039770287043703]"}
{"_id": "q_5366", "text": "r'''Checks inputs for suitability of use by a mixing rule which requires\n all inputs to be of the same length and non-None. A number of variations\n were attempted for this function; this was found to be the quickest.\n\n Parameters\n ----------\n all_inputs : array-like of array-like\n list of all the lists of inputs, [-]\n length : int, optional\n Length of the desired inputs, [-]\n\n Returns\n -------\n False/True : bool\n Returns True only if all inputs are the same length (or length `length`)\n and none of the inputs contain None [-]\n\n Notes\n -----\n Does not check for nan values.\n\n Examples\n --------\n >>> none_and_length_check(([1, 1], [1, 1], [1, 30], [10,0]), length=2)\n True"}
{"_id": "q_5367", "text": "r'''Simple function calculates a property based on weighted averages of\n logarithmic properties.\n\n .. math::\n y = \\exp\\left[\\sum_i \\text{frac}_i \\cdot \\log(\\text{prop}_i)\\right]\n\n Parameters\n ----------\n fracs : array-like\n Fractions of a mixture\n props : array-like\n Properties\n\n Returns\n -------\n prop : value\n Calculated property\n\n Notes\n -----\n Does not work on negative values.\n Returns None if any fractions or properties are missing or are not of the\n same length.\n\n Examples\n --------\n >>> mixing_logarithmic([0.1, 0.9], [0.01, 0.02])\n 0.01866065983073615"}
{"_id": "q_5368", "text": "r'''Determines which phase's property should be set as a default, given\n the phase a chemical is, and the property values of various phases. For the\n case of liquid-gas phase, returns None. If the property is not available\n for the current phase, or if the current phase is not known, returns None.\n\n Parameters\n ----------\n phase : str\n One of {'s', 'l', 'g', 'two-phase'}\n s : float\n Solid-phase property\n l : float\n Liquid-phase property\n g : float\n Gas-phase property\n V_over_F : float\n Vapor phase fraction\n\n Returns\n -------\n prop : float\n The selected/calculated property for the relevant phase\n\n Notes\n -----\n Could calculate mole-fraction weighted properties for the two phase regime.\n Could also implement equilibria with solid phases.\n\n Examples\n --------\n >>> phase_select_property(phase='g', l=1560.14, g=3312.)\n 3312.0"}
{"_id": "q_5369", "text": "r'''Method to obtain a sorted list of methods which are valid at `T`\n according to `test_method_validity`. Considers either only user methods\n if forced is True, or all methods. User methods are first tested\n according to their listed order, and unless forced is True, then all\n methods are tested and sorted by their order in `ranked_methods`.\n\n Parameters\n ----------\n T : float\n Temperature at which to test methods, [K]\n\n Returns\n -------\n sorted_valid_methods : list\n Sorted lists of methods valid at T according to\n `test_method_validity`"}
{"_id": "q_5370", "text": "r'''Method to solve for the temperature at which a property is at a\n specified value. `T_dependent_property` is used to calculate the value\n of the property as a function of temperature; if `reset_method` is True,\n the best method is used at each temperature as the solver seeks a\n solution. This slows the solution moderately.\n\n Checks the given property value with `test_property_validity` first\n and raises an exception if it is not valid. Requires that Tmin and\n Tmax have been set to know what range to search within.\n\n Search is performed with the brenth solver from SciPy.\n\n Parameters\n ----------\n goal : float\n Property value desired, [`units`]\n reset_method : bool\n Whether or not to reset the method as the solver searches\n\n Returns\n -------\n T : float\n Temperature at which the property is the specified value [K]"}
{"_id": "q_5371", "text": "r'''Method to obtain a derivative of a property with respect to \n temperature, of a given order. Methods found valid by \n `select_valid_methods` are attempted until a method succeeds. If no \n methods are valid and succeed, None is returned.\n\n Calls `calculate_derivative` internally to perform the actual\n calculation.\n \n .. math::\n \\text{derivative} = \\frac{d (\\text{property})}{d T}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n derivative : float\n Calculated derivative property, [`units/K^order`]"}
{"_id": "q_5372", "text": "r'''Method to calculate the integral of a property with respect to\n temperature, using a specified method. Uses SciPy's `quad` function\n to perform the integral, with no options.\n \n This method can be overwritten by subclasses who may prefer to add\n analytical methods for some or all methods as this is much faster.\n\n If the calculation does not succeed, returns the actual error\n encountered.\n\n Parameters\n ----------\n T1 : float\n Lower limit of integration, [K]\n T2 : float\n Upper limit of integration, [K]\n method : str\n Method for which to find the integral\n\n Returns\n -------\n integral : float\n Calculated integral of the property over the given range, \n [`units*K`]"}
{"_id": "q_5373", "text": "r'''Method to calculate the integral of a property with respect to\n temperature, using a specified method. Methods found valid by \n `select_valid_methods` are attempted until a method succeeds. If no \n methods are valid and succeed, None is returned.\n \n Calls `calculate_integral` internally to perform the actual\n calculation.\n\n .. math::\n \\text{integral} = \\int_{T_1}^{T_2} \\text{property} \\; dT\n\n Parameters\n ----------\n T1 : float\n Lower limit of integration, [K]\n T2 : float\n Upper limit of integration, [K]\n method : str\n Method for which to find the integral\n\n Returns\n -------\n integral : float\n Calculated integral of the property over the given range, \n [`units*K`]"}
{"_id": "q_5374", "text": "r'''Method to calculate the integral of a property over temperature\n with respect to temperature, using a specified method. Uses SciPy's \n `quad` function to perform the integral, with no options.\n \n This method can be overwritten by subclasses who may prefer to add\n analytical methods for some or all methods as this is much faster.\n\n If the calculation does not succeed, returns the actual error\n encountered.\n\n Parameters\n ----------\n T1 : float\n Lower limit of integration, [K]\n T2 : float\n Upper limit of integration, [K]\n method : str\n Method for which to find the integral\n\n Returns\n -------\n integral : float\n Calculated integral of the property over the given range, \n [`units`]"}
{"_id": "q_5375", "text": "r'''Method to load all data, and set all_methods based on the available\n data and properties. Demo function for testing only; must be\n implemented according to the methods available for each individual\n method."}
{"_id": "q_5376", "text": "r'''Method to calculate a property with a specified method, with no\n validity checking or error handling. Demo function for testing only;\n must be implemented according to the methods available for each\n individual method. Include the interpolation call here.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n method : str\n Method name to use\n\n Returns\n -------\n prop : float\n Calculated property, [`units`]"}
{"_id": "q_5377", "text": "r'''Method to obtain a sorted list of methods which are valid at `T`\n according to `test_method_validity`. Considers either only user methods\n if forced is True, or all methods. User methods are first tested\n according to their listed order, and unless forced is True, then all\n methods are tested and sorted by their order in `ranked_methods`.\n\n Parameters\n ----------\n T : float\n Temperature at which to test methods, [K]\n P : float\n Pressure at which to test methods, [Pa]\n\n Returns\n -------\n sorted_valid_methods_P : list\n Sorted lists of methods valid at T and P according to\n `test_method_validity`"}
{"_id": "q_5378", "text": "r'''Method to calculate the property with sanity checking and without\n specifying a specific method. `select_valid_methods_P` is used to obtain\n a sorted list of methods to try. Methods are then tried in order until\n one succeeds. The methods are allowed to fail, and their results are\n checked with `test_property_validity`. On success, the used method\n is stored in the variable `method_P`.\n\n If `method_P` is set, this method is first checked for validity with\n `test_method_validity_P` for the specified temperature, and if it is\n valid, it is then used to calculate the property. The result is checked\n for validity, and returned if it is valid. If either of the checks fail,\n the function retrieves a full list of valid methods with\n `select_valid_methods_P` and attempts them as described above.\n\n If no methods are found which succeed, returns None.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n\n Returns\n -------\n prop : float\n Calculated property, [`units`]"}
{"_id": "q_5379", "text": "r'''Method to calculate a derivative of a temperature and pressure\n dependent property with respect to temperature at constant pressure,\n of a given order. Methods found valid by `select_valid_methods_P` are \n attempted until a method succeeds. If no methods are valid and succeed,\n None is returned.\n\n Calls `calculate_derivative_T` internally to perform the actual\n calculation.\n \n .. math::\n \\text{derivative} = \\frac{d (\\text{property})}{d T}|_{P}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n P : float\n Pressure at which to calculate the derivative, [Pa]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n d_prop_d_T_at_P : float\n Calculated derivative property, [`units/K^order`]"}
{"_id": "q_5380", "text": "r'''Method to calculate a derivative of a temperature and pressure\n dependent property with respect to pressure at constant temperature,\n of a given order. Methods found valid by `select_valid_methods_P` are \n attempted until a method succeeds. If no methods are valid and succeed,\n None is returned.\n\n Calls `calculate_derivative_P` internally to perform the actual\n calculation.\n \n .. math::\n \\text{derivative} = \\frac{d (\\text{property})}{d P}|_{T}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n P : float\n Pressure at which to calculate the derivative, [Pa]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n d_prop_d_P_at_T : float\n Calculated derivative property, [`units/Pa^order`]"}
{"_id": "q_5381", "text": "r'''Method to calculate a derivative of a mixture property with respect\n to temperature at constant pressure and composition,\n of a given order. Methods found valid by `select_valid_methods` are \n attempted until a method succeeds. If no methods are valid and succeed,\n None is returned.\n\n Calls `calculate_derivative_T` internally to perform the actual\n calculation.\n \n .. math::\n \\text{derivative} = \\frac{d (\\text{property})}{d T}|_{P, z}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n P : float\n Pressure at which to calculate the derivative, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n d_prop_d_T_at_P : float\n Calculated derivative property, [`units/K^order`]"}
{"_id": "q_5382", "text": "r'''Method to calculate a derivative of a mixture property with respect\n to pressure at constant temperature and composition,\n of a given order. Methods found valid by `select_valid_methods` are \n attempted until a method succeeds. If no methods are valid and succeed,\n None is returned.\n\n Calls `calculate_derivative_P` internally to perform the actual\n calculation.\n \n .. math::\n \\text{derivative} = \\frac{d (\\text{property})}{d P}|_{T, z}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n P : float\n Pressure at which to calculate the derivative, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n d_prop_d_P_at_T : float\n Calculated derivative property, [`units/Pa^order`]"}
{"_id": "q_5383", "text": "r'''Generic method to calculate `T` from a specified `P` and `V`.\n Provides SciPy's `newton` solver, and iterates to solve the general\n equation for `P`, recalculating `a_alpha` as a function of temperature\n using `a_alpha_and_derivatives` each iteration.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Unimplemented, although it may be possible to derive explicit \n expressions as done for many pure-component EOS\n\n Returns\n -------\n T : float\n Temperature, [K]"}
{"_id": "q_5384", "text": "r'''Sets `a`, `kappa`, and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."}
{"_id": "q_5385", "text": "r'''Sets `a`, `m`, and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."}
{"_id": "q_5386", "text": "r'''Sets `a`, `kappa0`, `kappa1`, and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."}
{"_id": "q_5387", "text": "r'''Sets `a`, `kappa`, `kappa0`, `kappa1`, `kappa2`, `kappa3` and `Tc`\n for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."}
{"_id": "q_5388", "text": "r'''Sets `a`, `omega`, and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."}
{"_id": "q_5389", "text": "r'''Sets `a`, `S1`, `S2` and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."}
{"_id": "q_5390", "text": "r'''Estimates the thermal conductivity of paraffin liquid hydrocarbons.\n Fits their data well, and is useful as only MW is required.\n X is the Molecular weight, and Y the temperature.\n\n .. math::\n K = a + bY + cY^2 + dY^3\n\n a = A_1 + B_1 X + C_1 X^2 + D_1 X^3\n\n b = A_2 + B_2 X + C_2 X^2 + D_2 X^3\n\n c = A_3 + B_3 X + C_3 X^2 + D_3 X^3\n\n d = A_4 + B_4 X + C_4 X^2 + D_4 X^3\n\n Parameters\n ----------\n T : float\n Temperature of the fluid [K]\n M : float\n Molecular weight of the fluid [g/mol]\n\n Returns\n -------\n kl : float\n Estimated liquid thermal conductivity [W/m/K]\n\n Notes\n -----\n The accuracy of this equation has not been reviewed.\n\n Examples\n --------\n Data point from [1]_.\n\n >>> Bahadori_liquid(273.15, 170)\n 0.14274278108272603\n\n References\n ----------\n .. [1] Bahadori, Alireza, and Saeid Mokhatab. \"Estimating Thermal\n Conductivity of Hydrocarbons.\" Chemical Engineering 115, no. 13\n (December 2008): 52-54"}
{"_id": "q_5391", "text": "r'''Estimates the thermal conductivity of hydrocarbon gases at low P.\n Fits their data well, and is useful as only MW is required.\n Y is the Molecular weight, and X the temperature.\n\n .. math::\n K = a + bY + cY^2 + dY^3\n\n a = A_1 + B_1 X + C_1 X^2 + D_1 X^3\n\n b = A_2 + B_2 X + C_2 X^2 + D_2 X^3\n\n c = A_3 + B_3 X + C_3 X^2 + D_3 X^3\n\n d = A_4 + B_4 X + C_4 X^2 + D_4 X^3\n\n Parameters\n ----------\n T : float\n Temperature of the gas [K]\n MW : float\n Molecular weight of the gas [g/mol]\n\n Returns\n -------\n kg : float\n Estimated gas thermal conductivity [W/m/K]\n\n Notes\n -----\n The accuracy of this equation has not been reviewed.\n\n Examples\n --------\n >>> Bahadori_gas(40+273.15, 20) # Point from article\n 0.031968165337873326\n\n References\n ----------\n .. [1] Bahadori, Alireza, and Saeid Mokhatab. \"Estimating Thermal\n Conductivity of Hydrocarbons.\" Chemical Engineering 115, no. 13\n (December 2008): 52-54"}
{"_id": "q_5392", "text": "r'''Method to calculate low-pressure liquid thermal conductivity at\n temperature `T` with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature of the liquid, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n kl : float\n Thermal conductivity of the liquid at T and a low pressure, [W/m/K]"}
{"_id": "q_5393", "text": "r'''Method to calculate pressure-dependent liquid thermal conductivity\n at temperature `T` and pressure `P` with a given method.\n\n This method has no exception handling; see `TP_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate liquid thermal conductivity, [K]\n P : float\n Pressure at which to calculate liquid thermal conductivity, [Pa]\n method : str\n Name of the method to use\n\n Returns\n -------\n kl : float\n Thermal conductivity of the liquid at T and P, [W/m/K]"}
{"_id": "q_5394", "text": "r'''Method to calculate thermal conductivity of a liquid mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n k : float\n Thermal conductivity of the liquid mixture, [W/m/K]"}
{"_id": "q_5395", "text": "r'''Method to calculate low-pressure gas thermal conductivity at\n temperature `T` with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature of the gas, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n kg : float\n Thermal conductivity of the gas at T and a low pressure, [W/m/K]"}
{"_id": "q_5396", "text": "r'''Method to calculate pressure-dependent gas thermal conductivity\n at temperature `T` and pressure `P` with a given method.\n\n This method has no exception handling; see `TP_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate gas thermal conductivity, [K]\n P : float\n Pressure at which to calculate gas thermal conductivity, [Pa]\n method : str\n Name of the method to use\n\n Returns\n -------\n kg : float\n Thermal conductivity of the gas at T and P, [W/m/K]"}
{"_id": "q_5397", "text": "r'''Method to calculate thermal conductivity of a gas mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n kg : float\n Thermal conductivity of gas mixture, [W/m/K]"}
{"_id": "q_5398", "text": "r'''Basic formula parser to determine the charge from a formula - given\n that the charge is already specified as one element of the formula.\n\n Performs no sanity checking that elements are actually elements.\n \n Parameters\n ----------\n formula : str\n Formula string, very simple formats only, ending in one of '+x',\n '-x', n*'+', or n*'-' or any of them surrounded by brackets but always\n at the end of a formula.\n\n Returns\n -------\n charge : int\n Charge of the molecule, [faraday]\n\n Notes\n -----\n\n Examples\n --------\n >>> charge_from_formula('Br3-')\n -1\n >>> charge_from_formula('Br3(-)')\n -1"}
{"_id": "q_5399", "text": "Convolve with a 2D Gaussian kernel."}
{"_id": "q_5400", "text": "Generate a Gaussian kernel."}
{"_id": "q_5401", "text": "Convert PIL image to numpy grayscale array and numpy alpha array.\n\n Args:\n img (PIL.Image): PIL Image object.\n\n Returns:\n (gray, alpha): both numpy arrays."}
{"_id": "q_5402", "text": "Compute the SSIM value from the reference image to the target image.\n\n Args:\n target (str or PIL.Image): Input image to compare the reference image\n to. This may be a PIL Image object or, to save time, an SSIMImage\n object (e.g. the img member of another SSIM object).\n\n Returns:\n Computed SSIM float value."}
{"_id": "q_5403", "text": "Computes SSIM.\n\n Args:\n im1: First PIL Image object to compare.\n im2: Second PIL Image object to compare.\n\n Returns:\n SSIM float value."}
{"_id": "q_5404", "text": "Switch to a new code version on all cluster nodes. You\n should ensure that cluster nodes are updated, otherwise they\n won't be able to apply commands.\n\n :param newVersion: new code version\n :type newVersion: int\n :param callback: will be called on success or failure\n :type callback: function(`FAIL_REASON <#pysyncobj.FAIL_REASON>`_, None)"}
{"_id": "q_5405", "text": "Dumps various debug info about the cluster to a dict and returns it"}
{"_id": "q_5406", "text": "Find the node to which a connection belongs.\n\n :param conn: connection object\n :type conn: TcpConnection\n :returns corresponding node or None if the node cannot be found\n :rtype Node or None"}
{"_id": "q_5407", "text": "Callback for connections initiated by the other side\n\n :param conn: connection object\n :type conn: TcpConnection"}
{"_id": "q_5408", "text": "Callback for initial messages on incoming connections. Handles encryption, utility messages, and association of the connection with a Node.\n Once this initial setup is done, the relevant connected callback is executed, and further messages are deferred to the onMessageReceived callback.\n\n :param conn: connection object\n :type conn: TcpConnection\n :param message: received message\n :type message: any"}
{"_id": "q_5409", "text": "Check whether this node should initiate a connection to another node\n\n :param node: the other node\n :type node: Node"}
{"_id": "q_5410", "text": "Connect to a node if necessary.\n\n :param node: node to connect to\n :type node: Node"}
{"_id": "q_5411", "text": "Callback for receiving a message on a new outgoing connection. Used only if encryption is enabled to exchange the random keys.\n Once the key exchange is done, this triggers the onNodeConnected callback, and further messages are deferred to the onMessageReceived callback.\n\n :param conn: connection object\n :type conn: TcpConnection\n :param message: received message\n :type message: any"}
{"_id": "q_5412", "text": "Callback for when a connection is terminated or considered dead. Initiates a reconnect if necessary.\n\n :param conn: connection object\n :type conn: TcpConnection"}
{"_id": "q_5413", "text": "Send a message to a node. Returns False if the connection appears to be dead either before or after actually trying to send the message.\n\n :param node: target node\n :type node: Node\n :param message: message\n :type message: any\n :returns success\n :rtype bool"}
{"_id": "q_5414", "text": "Destroy this transport"}
{"_id": "q_5415", "text": "Put an item into the queue.\n True - if item placed in queue.\n False - if queue is full and item can not be placed."}
{"_id": "q_5416", "text": "Put an item into the queue. Items should be comparable, eg. tuples.\n True - if item placed in queue.\n False - if queue is full and item can not be placed."}
{"_id": "q_5417", "text": "Extract the smallest item from queue.\n Return default if queue is empty."}
{"_id": "q_5418", "text": "Attempt to acquire lock.\n\n :param lockID: unique lock identifier.\n :type lockID: str\n :param sync: True - to wait until lock is acquired or failed to acquire.\n :type sync: bool\n :param callback: if sync is False - callback will be called with operation result.\n :type callback: func(opResult, error)\n :param timeout: max operation time (default - unlimited)\n :type timeout: float\n :return True if acquired, False - somebody else already acquired lock"}
{"_id": "q_5419", "text": "Check if lock is acquired by ourselves.\n\n :param lockID: unique lock identifier.\n :type lockID: str\n :return True if lock is acquired by ourselves."}
{"_id": "q_5420", "text": "Decorator which wraps checks and returns an error response on failure."}
{"_id": "q_5421", "text": "Decorator which ensures that one of the WATCHMAN_TOKENS is provided if set.\n\n WATCHMAN_TOKEN_NAME can also be set if the token GET parameter must be\n customized."}
{"_id": "q_5422", "text": "Establish a connection to the chat server.\n\n Returns when an error has occurred, or :func:`disconnect` has been\n called."}
{"_id": "q_5423", "text": "Return ``request_header`` for use when constructing requests.\n\n Returns:\n Populated request header."}
{"_id": "q_5424", "text": "Set this client as active.\n\n While a client is active, no other clients will raise notifications.\n Call this method whenever there is an indication the user is\n interacting with this client. This method may be called very\n frequently, and it will only make a request when necessary."}
{"_id": "q_5425", "text": "Parse the image upload response to obtain status.\n\n Args:\n res: http_utils.FetchResponse instance, the upload response\n\n Returns:\n dict, sessionStatus of the response\n\n Raises:\n hangups.NetworkError: If the upload request failed."}
{"_id": "q_5426", "text": "Parse channel array and call the appropriate events."}
{"_id": "q_5427", "text": "Add services to the channel.\n\n The services we add to the channel determine what kind of data we will\n receive on it.\n\n The \"babel\" service includes what we need for Hangouts. If this fails\n for some reason, hangups will never receive any events. The\n \"babel_presence_last_seen\" service is also required to receive presence\n notifications.\n\n This needs to be re-called whenever we open a new channel (when there's\n a new SID and client_id)."}
{"_id": "q_5428", "text": "Send a Protocol Buffer formatted chat API request.\n\n Args:\n endpoint (str): The chat API endpoint to use.\n request_pb: The request body as a Protocol Buffer message.\n response_pb: The response body as a Protocol Buffer message.\n\n Raises:\n NetworkError: If the request fails."}
{"_id": "q_5429", "text": "Send a generic authenticated POST request.\n\n Args:\n url (str): URL of request.\n content_type (str): Request content type.\n response_type (str): The desired response format. Valid options\n are: 'json' (JSON), 'protojson' (pblite), and 'proto' (binary\n Protocol Buffer). 'proto' requires manually setting an extra\n header 'X-Goog-Encode-Response-If-Executable: base64'.\n data (str): Request body data.\n\n Returns:\n FetchResponse: Response containing HTTP code, cookies, and body.\n\n Raises:\n NetworkError: If the request fails."}
{"_id": "q_5430", "text": "Invite users to join an existing group conversation."}
{"_id": "q_5431", "text": "Create a new conversation."}
{"_id": "q_5432", "text": "Return conversation info and recent events."}
{"_id": "q_5433", "text": "Return one or more user entities.\n\n Searching by phone number only finds entities when their phone number\n is in your contacts (and not always even then), and can't be used to\n find Google Voice contacts."}
{"_id": "q_5434", "text": "Return info about the current user."}
{"_id": "q_5435", "text": "Return presence status for a list of users."}
{"_id": "q_5436", "text": "Remove a participant from a group conversation."}
{"_id": "q_5437", "text": "Rename a conversation.\n\n Both group and one-to-one conversations may be renamed, but the\n official Hangouts clients have mixed support for one-to-one\n conversations with custom names."}
{"_id": "q_5438", "text": "Enable or disable message history in a conversation."}
{"_id": "q_5439", "text": "Set the notification level of a conversation."}
{"_id": "q_5440", "text": "Set focus to a conversation."}
{"_id": "q_5441", "text": "Set whether group link sharing is enabled for a conversation."}
{"_id": "q_5442", "text": "Set the typing status of a conversation."}
{"_id": "q_5443", "text": "Return info on recent conversations and their events."}
{"_id": "q_5444", "text": "Convert a microsecond timestamp to a UTC datetime instance."}
{"_id": "q_5445", "text": "Convert UserID to hangouts_pb2.ParticipantId."}
{"_id": "q_5446", "text": "Return WatermarkNotification from hangouts_pb2.WatermarkNotification."}
{"_id": "q_5447", "text": "Return authorization headers for API request."}
{"_id": "q_5448", "text": "Make an HTTP request.\n\n Automatically uses configured HTTP proxy, and adds Google authorization\n header and cookies.\n\n Failures will be retried MAX_RETRIES times before raising NetworkError.\n\n Args:\n method (str): Request method.\n url (str): Request URL.\n params (dict): (optional) Request query string parameters.\n headers (dict): (optional) Request headers.\n data: (str): (optional) Request body data.\n\n Returns:\n FetchResponse: Response data.\n\n Raises:\n NetworkError: If the request fails."}
{"_id": "q_5449", "text": "Search for entities by phone number, email, or gaia_id."}
{"_id": "q_5450", "text": "Return EntityLookupSpec from phone number, email address, or gaia ID."}
{"_id": "q_5451", "text": "Return a readable name for a conversation.\n\n If the conversation has a custom name, use the custom name. Otherwise, for\n one-to-one conversations, the name is the full name of the other user. For\n group conversations, the name is a comma-separated list of first names. If\n the group conversation is empty, the name is \"Empty Conversation\".\n\n If truncate is True, only show up to two names in a group conversation.\n\n If show_unread is True, if there are unread chat messages, show the number\n of unread chat messages in parentheses after the conversation name."}
{"_id": "q_5452", "text": "Add foreground and background colours to a color scheme"}
{"_id": "q_5453", "text": "Sync all conversations by making paginated requests.\n\n Conversations are ordered by ascending sort timestamp.\n\n Args:\n client (Client): Connected client.\n\n Raises:\n NetworkError: If the requests fail.\n\n Returns:\n tuple of list of ``ConversationState`` messages and sync timestamp"}
{"_id": "q_5454", "text": "Loaded events which are unread sorted oldest to newest.\n\n Some Hangouts clients don't update the read timestamp for certain event\n types, such as membership changes, so this may return more unread\n events than these clients will show. There's also a delay between\n sending a message and the user's own message being considered read.\n\n (list of :class:`.ConversationEvent`)."}
{"_id": "q_5455", "text": "Handle a watermark notification."}
{"_id": "q_5456", "text": "Update the internal state of the conversation.\n\n This method is used by :class:`.ConversationList` to maintain this\n instance.\n\n Args:\n conversation: ``Conversation`` message."}
{"_id": "q_5457", "text": "Wrap hangouts_pb2.Event in ConversationEvent subclass."}
{"_id": "q_5458", "text": "Add an event to the conversation.\n\n This method is used by :class:`.ConversationList` to maintain this\n instance.\n\n Args:\n event_: ``Event`` message.\n\n Returns:\n :class:`.ConversationEvent` representing the event."}
{"_id": "q_5459", "text": "Send a message to this conversation.\n\n A per-conversation lock is acquired to ensure that messages are sent in\n the correct order when this method is called multiple times\n asynchronously.\n\n Args:\n segments: List of :class:`.ChatMessageSegment` objects to include\n in the message.\n image_file: (optional) File-like object containing an image to be\n attached to the message.\n image_id: (optional) ID of a Picasa photo to be attached to the\n message. If you specify both ``image_file`` and ``image_id``\n together, ``image_file`` takes precedence and ``image_id`` will\n be ignored.\n image_user_id: (optional) Picasa user ID, required only if\n ``image_id`` refers to an image from a different Picasa user,\n such as Google's sticker user.\n\n Raises:\n .NetworkError: If the message cannot be sent."}
{"_id": "q_5460", "text": "Leave this conversation.\n\n Raises:\n .NetworkError: If conversation cannot be left."}
{"_id": "q_5461", "text": "Set the notification level of this conversation.\n\n Args:\n level: ``NOTIFICATION_LEVEL_QUIET`` to disable notifications, or\n ``NOTIFICATION_LEVEL_RING`` to enable them.\n\n Raises:\n .NetworkError: If the request fails."}
{"_id": "q_5462", "text": "Update the timestamp of the latest event which has been read.\n\n This method will avoid making an API request if it will have no effect.\n\n Args:\n read_timestamp (datetime.datetime): (optional) Timestamp to set.\n Defaults to the timestamp of the newest event.\n\n Raises:\n .NetworkError: If the timestamp cannot be updated."}
{"_id": "q_5463", "text": "Get all the conversations.\n\n Args:\n include_archived (bool): (optional) Whether to include archived\n conversations. Defaults to ``False``.\n\n Returns:\n List of all :class:`.Conversation` objects."}
{"_id": "q_5464", "text": "Leave a conversation.\n\n Args:\n conv_id (str): ID of conversation to leave."}
{"_id": "q_5465", "text": "Add new conversation from hangouts_pb2.Conversation"}
{"_id": "q_5466", "text": "Get a cached conversation or fetch a missing conversation.\n\n Args:\n conv_id: string, conversation identifier\n\n Raises:\n NetworkError: If the request to fetch the conversation fails.\n\n Returns:\n :class:`.Conversation` with matching ID."}
{"_id": "q_5467", "text": "Receive a hangouts_pb2.Event and fan out to Conversations.\n\n Args:\n event_: hangouts_pb2.Event instance"}
{"_id": "q_5468", "text": "Receive Conversation delta and create or update the conversation.\n\n Args:\n conversation: hangouts_pb2.Conversation instance\n\n Raises:\n NetworkError: A request to fetch the complete conversation failed."}
{"_id": "q_5469", "text": "Receive SetTypingNotification and update the conversation.\n\n Args:\n set_typing_notification: hangouts_pb2.SetTypingNotification\n instance"}
{"_id": "q_5470", "text": "Receive WatermarkNotification and update the conversation.\n\n Args:\n watermark_notification: hangouts_pb2.WatermarkNotification instance"}
{"_id": "q_5471", "text": "Sync conversation state and events that could have been missed."}
{"_id": "q_5472", "text": "Construct user from ``Entity`` message.\n\n Args:\n entity: ``Entity`` message.\n self_user_id (~hangups.user.UserID or None): The ID of the current\n user. If ``None``, assume ``entity`` is the current user.\n\n Returns:\n :class:`~hangups.user.User` object."}
{"_id": "q_5473", "text": "Get a user by its ID.\n\n Args:\n user_id (~hangups.user.UserID): The ID of the user.\n\n Raises:\n KeyError: If no such user is known.\n\n Returns:\n :class:`~hangups.user.User` with the given ID."}
{"_id": "q_5474", "text": "Add or upgrade User from ConversationParticipantData."}
{"_id": "q_5475", "text": "Add an observer to this event.\n\n Args:\n callback: A function or coroutine callback to call when the event\n is fired.\n\n Raises:\n ValueError: If the callback has already been added."}
{"_id": "q_5476", "text": "Remove an observer from this event.\n\n Args:\n callback: A function or coroutine callback to remove from this\n event.\n\n Raises:\n ValueError: If the callback is not an observer of this event."}
{"_id": "q_5477", "text": "Fire this event, calling all observers with the same arguments."}
{"_id": "q_5478", "text": "Run a hangups example coroutine.\n\n Args:\n example_coroutine (coroutine): Coroutine to run with a connected\n hangups client and arguments namespace as arguments.\n extra_args (str): Any extra command line arguments required by the\n example."}
{"_id": "q_5479", "text": "Return ArgumentParser with any extra arguments."}
{"_id": "q_5480", "text": "Print column headers and rows as a reStructuredText table.\n\n Args:\n col_tuple: Tuple of column name strings.\n row_tuples: List of tuples containing row data."}
{"_id": "q_5481", "text": "Generate doc for an enum.\n\n Args:\n enum_descriptor: descriptor_pb2.EnumDescriptorProto instance for enum\n to generate docs for.\n locations: Dictionary of location paths tuples to\n descriptor_pb2.SourceCodeInfo.Location instances.\n path: Path tuple to the enum definition.\n name_prefix: Optional prefix for this enum's name."}
{"_id": "q_5482", "text": "Create a directory if it does not exist."}
{"_id": "q_5483", "text": "Show the overlay menu."}
{"_id": "q_5484", "text": "Handle connecting for the first time."}
{"_id": "q_5485", "text": "Open conversation tab for new messages & pass events to notifier."}
{"_id": "q_5486", "text": "Put a coroutine in the queue to be executed."}
{"_id": "q_5487", "text": "Consume coroutines from the queue by executing them."}
{"_id": "q_5488", "text": "Rename conversation and call callback."}
{"_id": "q_5489", "text": "Re-order the conversations when an event occurs."}
{"_id": "q_5490", "text": "Make users stop typing when they send a message."}
{"_id": "q_5491", "text": "Update status text."}
{"_id": "q_5492", "text": "Return MessageWidget representing a ConversationEvent.\n\n Returns None if the ConversationEvent does not have a widget\n representation."}
{"_id": "q_5493", "text": "Handle updating and scrolling when a new event is added.\n\n Automatically scroll down to show the new text if the bottom is\n showing. This allows the user to scroll up to read previous messages\n while new messages are arriving."}
{"_id": "q_5494", "text": "Load more events for this conversation."}
{"_id": "q_5495", "text": "Return the menu widget associated with this widget."}
{"_id": "q_5496", "text": "Update this conversation's tab title."}
{"_id": "q_5497", "text": "Update tab display."}
{"_id": "q_5498", "text": "Add or modify a tab.\n\n If widget is not a tab, it will be added. If switch is True, switch to\n this tab. If title is given, set the tab's title."}
{"_id": "q_5499", "text": "Use the access token to get session cookies.\n\n Raises GoogleAuthError if session cookies could not be loaded.\n\n Returns dict of cookies."}
{"_id": "q_5500", "text": "Populate and submit a form on the current page.\n\n Raises GoogleAuthError if form can not be submitted."}
{"_id": "q_5501", "text": "Parse response format for request for new channel SID.\n\n Example format (after parsing JS):\n [ [0,[\"c\",\"SID_HERE\",\"\",8]],\n [1,[{\"gsid\":\"GSESSIONID_HERE\"}]]]\n\n Returns (SID, gsessionid) tuple."}
{"_id": "q_5502", "text": "Listen for messages on the backwards channel.\n\n This method only returns when the connection has been closed due to an\n error."}
{"_id": "q_5503", "text": "Open a long-polling request and receive arrays.\n\n This method uses keep-alive to make re-opening the request faster, but\n the remote server will set the \"Connection: close\" header once an hour.\n\n Raises hangups.NetworkError or ChannelSessionError."}
{"_id": "q_5504", "text": "Parse push data and trigger events."}
{"_id": "q_5505", "text": "Decode optional or required field."}
{"_id": "q_5506", "text": "Decode repeated field."}
{"_id": "q_5507", "text": "Decode pblite to Protocol Buffer message.\n\n This method is permissive of decoding errors and will log them as warnings\n and continue decoding where possible.\n\n The first element of the outer pblite list must often be ignored using the\n ignore_first_item parameter because it contains an abbreviation of the name\n of the protobuf message (eg. cscmrp for ClientSendChatMessageResponseP)\n that's not part of the protobuf.\n\n Args:\n message: protocol buffer message instance to decode into.\n pblite: list representing a pblite-serialized message.\n ignore_first_item: If True, ignore the item at index 0 in the pblite\n list, making the item at index 1 correspond to field 1 in the\n message."}
{"_id": "q_5508", "text": "Sets the Elasticsearch hosts to use\n\n Args:\n hosts (str): A single hostname or URL, or list of hostnames or URLs\n use_ssl (bool): Use a HTTPS connection to the server\n ssl_cert_path (str): Path to the certificate chain"}
{"_id": "q_5509", "text": "Updates index mappings\n\n Args:\n aggregate_indexes (list): A list of aggregate index names\n forensic_indexes (list): A list of forensic index names"}
{"_id": "q_5510", "text": "Saves aggregate DMARC reports to Kafka\n\n Args:\n aggregate_reports (list): A list of aggregate report dictionaries\n to save to Kafka\n aggregate_topic (str): The name of the Kafka topic"}
{"_id": "q_5511", "text": "Extracts xml from a zip or gzip file at the given path, file-like object,\n or bytes.\n\n Args:\n input_: A path to a file, a file like object, or bytes\n\n Returns:\n str: The extracted XML"}
{"_id": "q_5512", "text": "Parses a file at the given path, a file-like object. or bytes as a\n aggregate DMARC report\n\n Args:\n _input: A path to a file, a file like object, or bytes\n nameservers (list): A list of one or more nameservers to use\n (Cloudflare's public DNS resolvers by default)\n dns_timeout (float): Sets the DNS timeout in seconds\n parallel (bool): Parallel processing\n\n Returns:\n OrderedDict: The parsed DMARC aggregate report"}
{"_id": "q_5513", "text": "Converts one or more parsed forensic reports to flat CSV format, including\n headers\n\n Args:\n reports: A parsed forensic report or list of parsed forensic reports\n\n Returns:\n str: Parsed forensic report data in flat CSV format, including headers"}
{"_id": "q_5514", "text": "Parses a DMARC aggregate or forensic file at the given path, a\n file-like object. or bytes\n\n Args:\n input_: A path to a file, a file like object, or bytes\n nameservers (list): A list of one or more nameservers to use\n (Cloudflare's public DNS resolvers by default)\n dns_timeout (float): Sets the DNS timeout in seconds\n strip_attachment_payloads (bool): Remove attachment payloads from\n forensic report results\n parallel (bool): Parallel processing\n\n Returns:\n OrderedDict: The parsed DMARC report"}
{"_id": "q_5515", "text": "Returns a list of an IMAP server's capabilities\n\n Args:\n server (imapclient.IMAPClient): An instance of imapclient.IMAPClient\n\n Returns (list): A list of capabilities"}
{"_id": "q_5516", "text": "Emails parsing results as a zip file\n\n Args:\n results (OrderedDict): Parsing results\n host: Mail server hostname or IP address\n mail_from: The value of the message from header\n mail_to : A list of addresses to mail to\n port (int): Port to use\n ssl (bool): Require a SSL connection from the start\n user: An optional username\n password: An optional password\n subject: Overrides the default message subject\n attachment_filename: Override the default attachment filename\n message: Override the default plain text body\n ssl_context: SSL context options"}
{"_id": "q_5517", "text": "Saves aggregate DMARC reports to Splunk\n\n Args:\n aggregate_reports: A list of aggregate report dictionaries\n to save in Splunk"}
{"_id": "q_5518", "text": "Decodes a base64 string, with padding being optional\n\n Args:\n data: A base64 encoded string\n\n Returns:\n bytes: The decoded bytes"}
{"_id": "q_5519", "text": "Gets the base domain name for the given domain\n\n .. note::\n Results are based on a list of public domain suffixes at\n https://publicsuffix.org/list/public_suffix_list.dat.\n\n Args:\n domain (str): A domain or subdomain\n use_fresh_psl (bool): Download a fresh Public Suffix List\n\n Returns:\n str: The base domain of the given domain"}
{"_id": "q_5520", "text": "Resolves an IP address to a hostname using a reverse DNS query\n\n Args:\n ip_address (str): The IP address to resolve\n cache (ExpiringDict): Cache storage\n nameservers (list): A list of one or more nameservers to use\n (Cloudflare's public DNS resolvers by default)\n timeout (float): Sets the DNS query timeout in seconds\n\n Returns:\n str: The reverse DNS hostname (if any)"}
{"_id": "q_5521", "text": "Converts a human-readable timestamp into a Python ``DateTime`` object\n\n Args:\n human_timestamp (str): A timestamp string\n to_utc (bool): Convert the timestamp to UTC\n\n Returns:\n DateTime: The converted timestamp"}
{"_id": "q_5522", "text": "Returns reverse DNS and country information for the given IP address\n\n Args:\n ip_address (str): The IP address to check\n cache (ExpiringDict): Cache storage\n nameservers (list): A list of one or more nameservers to use\n (Cloudflare's public DNS resolvers by default)\n timeout (float): Sets the DNS timeout in seconds\n parallel (bool): parallel processing\n\n Returns:\n OrderedDict: ``ip_address``, ``reverse_dns``"}
{"_id": "q_5523", "text": "Uses the ``msgconvert`` Perl utility to convert an Outlook MS file to\n standard RFC 822 format\n\n Args:\n msg_bytes (bytes): the content of the .msg file\n\n Returns:\n A RFC 822 string"}
{"_id": "q_5524", "text": "Separated this function for multiprocessing"}
{"_id": "q_5525", "text": "Sends a PUB command to the server on the specified subject.\n\n ->> PUB hello 5\n ->> MSG_PAYLOAD: world\n <<- MSG hello 2 5"}
{"_id": "q_5526", "text": "Publishes a message tagging it with a reply subscription\n which can be used by those receiving the message to respond.\n\n ->> PUB hello _INBOX.2007314fe0fcb2cdc2a2914c1 5\n ->> MSG_PAYLOAD: world\n <<- MSG hello 2 _INBOX.2007314fe0fcb2cdc2a2914c1 5"}
{"_id": "q_5527", "text": "Sets the subcription to use a task per message to be processed.\n\n ..deprecated:: 7.0\n Will be removed 9.0."}
{"_id": "q_5528", "text": "Sends a ping to the server expecting a pong back ensuring\n what we have written so far has made it to the server and\n also enabling measuring of roundtrip time.\n In case a pong is not returned within the allowed timeout,\n then it will raise ErrTimeout."}
{"_id": "q_5529", "text": "Looks up in the server pool for an available server\n and attempts to connect."}
{"_id": "q_5530", "text": "Process errors which occured while reading or parsing\n the protocol. If allow_reconnect is enabled it will\n try to switch the server to which it is currently connected\n otherwise it will disconnect."}
{"_id": "q_5531", "text": "Process PONG sent by server."}
{"_id": "q_5532", "text": "Coroutine which continuously tries to consume pending commands\n and then flushes them to the socket."}
{"_id": "q_5533", "text": "Coroutine which gathers bytes sent by the server\n and feeds them to the protocol parser.\n In case of error while reading, it will stop running\n and its task has to be rescheduled."}
{"_id": "q_5534", "text": "Generates a timezone aware datetime if the 'USE_TZ' setting is enabled\n\n :param value: The datetime value\n :return: A locale aware datetime"}
{"_id": "q_5535", "text": "Load feature data from a 2D ndarray on disk."}
{"_id": "q_5536", "text": "Load feature image data from image files.\n\n Args:\n images: A list of image filenames.\n names: An optional list of strings to use as the feature names. Must\n be in the same order as the images."}
{"_id": "q_5537", "text": "Decode images using Pearson's r.\n\n Computes the correlation between each input image and each feature\n image across voxels.\n\n Args:\n imgs_to_decode: An ndarray of images to decode, with voxels in rows\n and images in columns.\n\n Returns:\n An n_features x n_images 2D array, with each cell representing the\n pearson correlation between the i'th feature and the j'th image\n across all voxels."}
{"_id": "q_5538", "text": "Decoding using the dot product."}
{"_id": "q_5539", "text": "Set up data for a classification task given a set of masks\n\n Given a set of masks, this function retrieves studies associated with\n each mask at the specified threshold, optionally removes overlap and\n filters by studies and features, and returns studies by feature matrix\n (X) and class labels (y)\n\n Args:\n dataset: a Neurosynth dataset\n maks: a list of paths to Nifti masks\n threshold: percentage of voxels active within the mask for study\n to be included\n remove_overlap: A boolean indicating if studies studies that\n appear in more than one mask should be excluded\n studies: An optional list of study names used to constrain the set\n used in classification. If None, will use all features in the\n dataset.\n features: An optional list of feature names used to constrain the\n set used in classification. If None, will use all features in\n the dataset.\n regularize: Optional boolean indicating if X should be regularized\n\n Returns:\n A tuple (X, y) of np arrays.\n X is a feature by studies matrix and y is a vector of class labels"}
{"_id": "q_5540", "text": "Returns a list with the order that features requested appear in\n dataset"}
{"_id": "q_5541", "text": "Sets the class_weight of the classifier to match y"}
{"_id": "q_5542", "text": "Given a dataset, fits either features or voxels to y"}
{"_id": "q_5543", "text": "Aggregates over all voxels within each ROI in the input image.\n\n Takes a Dataset and a Nifti image that defines distinct regions, and\n returns a numpy matrix of ROIs x mappables, where the value at each\n ROI is the proportion of active voxels in that ROI. Each distinct ROI\n must have a unique value in the image; non-contiguous voxels with the\n same value will be assigned to the same ROI.\n\n Args:\n dataset: Either a Dataset instance from which image data are\n extracted, or a Numpy array containing image data to use. If\n the latter, the array contains voxels in rows and\n features/studies in columns. The number of voxels must be equal\n to the length of the vectorized image mask in the regions\n image.\n regions: An image defining the boundaries of the regions to use.\n Can be one of:\n 1) A string name of the NIFTI or Analyze-format image\n 2) A NiBabel SpatialImage\n 3) A list of NiBabel images\n 4) A 1D numpy array of the same length as the mask vector in\n the Dataset's current Masker.\n masker: Optional masker used to load image if regions is not a\n numpy array. Must be passed if dataset is a numpy array.\n threshold: An optional float in the range of 0 - 1 or integer. If\n passed, the array will be binarized, with ROI values above the\n threshold assigned to True and values below the threshold\n assigned to False. (E.g., if threshold = 0.05, only ROIs in\n which more than 5% of voxels are active will be considered\n active.) If threshold is integer, studies will only be\n considered active if they activate more than that number of\n voxels in the ROI.\n remove_zero: An optional boolean; when True, assume that voxels\n with value of 0 should not be considered as a separate ROI, and\n will be ignored.\n\n Returns:\n A 2D numpy array with ROIs in rows and mappables in columns."}
{"_id": "q_5544", "text": "Return top forty words from each topic in trained topic model."}
{"_id": "q_5545", "text": "Correlates row vector x with each row vector in 2D array y."}
{"_id": "q_5546", "text": "Determine FDR threshold given a p value array and desired false\n discovery rate q."}
{"_id": "q_5547", "text": "Create and store a new ImageTable instance based on the current\n Dataset. Will generally be called privately, but may be useful as a\n convenience method in cases where the user wants to re-generate the\n table with a new smoothing kernel of different radius.\n\n Args:\n r (int): An optional integer indicating the radius of the smoothing\n kernel. By default, this is None, which will keep whatever\n value is currently set in the Dataset instance."}
{"_id": "q_5548", "text": "Construct a new FeatureTable from file.\n\n Args:\n features: Feature data to add. Can be:\n (a) A text file containing the feature data, where each row is\n a study in the database, with features in columns. The first\n column must contain the IDs of the studies to match up with the\n image data.\n (b) A pandas DataFrame, where studies are in rows, features are\n in columns, and the index provides the study IDs.\n append (bool): If True, adds new features to existing ones\n incrementally. If False, replaces old features.\n merge, duplicates, min_studies, threshold: Additional arguments\n passed to FeatureTable.add_features()."}
{"_id": "q_5549", "text": "Load a pickled Dataset instance from file."}
{"_id": "q_5550", "text": "Given a list of features, returns features in order that they\n appear in database.\n\n Args:\n features (list): A list or 1D numpy array of named features to\n return.\n\n Returns:\n A list of features in order they appear in database."}
{"_id": "q_5551", "text": "Returns a list of all studies in the table that meet the desired\n feature-based criteria.\n\n Will most commonly be used to retrieve studies that use one or more\n features with some minimum frequency; e.g.,:\n\n get_ids(['fear', 'anxiety'], threshold=0.001)\n\n Args:\n features (lists): a list of feature names to search on.\n threshold (float): optional float indicating threshold features\n must pass to be included.\n func (Callable): any numpy function to use for thresholding\n (default: sum). The function will be applied to the list of\n features and the result compared to the threshold. This can be\n used to change the meaning of the query in powerful ways. E.g,:\n max: any of the features have to pass threshold\n (i.e., max > thresh)\n min: all features must each individually pass threshold\n (i.e., min > thresh)\n sum: the summed weight of all features must pass threshold\n (i.e., sum > thresh)\n get_weights (bool): if True, returns a dict with ids => weights.\n\n Returns:\n When get_weights is false (default), returns a list of study\n names. When true, returns a dict, with study names as keys\n and feature weights as values."}
{"_id": "q_5552", "text": "Returns all features that match any of the elements in the input\n list.\n\n Args:\n search (str, list): A string or list of strings defining the query.\n\n Returns:\n A list of matching feature names."}
{"_id": "q_5553", "text": "Convert FeatureTable to SciPy CSR matrix."}
{"_id": "q_5554", "text": "Deprecation warning decorator. Takes optional deprecation message,\n otherwise will use a generic warning."}
{"_id": "q_5555", "text": "Convert coordinates from one space to another using provided\n transformation matrix."}
{"_id": "q_5556", "text": "Convert an N x 3 array of XYZ coordinates to matrix indices."}
{"_id": "q_5557", "text": "Perform an ADC read with the provided mux, gain, data_rate, and mode\n values and with the comparator enabled as specified. Returns the signed\n integer result of the read."}
{"_id": "q_5558", "text": "Read a single ADC channel and return the ADC value as a signed integer\n result. Channel must be a value within 0-3."}
{"_id": "q_5559", "text": "Expand the given address into one or more normalized strings.\n\n Required\n --------\n @param address: the address as either Unicode or a UTF-8 encoded string\n\n Options\n -------\n @param languages: a tuple or list of ISO language code strings (e.g. \"en\", \"fr\", \"de\", etc.)\n to use in expansion. If None is passed, use language classifier\n to detect language automatically.\n @param address_components: an integer (bit-set) of address component expansions\n to use e.g. ADDRESS_NAME | ADDRESS_STREET would use\n only expansions which apply to venue names or streets.\n @param latin_ascii: use the Latin to ASCII transliterator, which normalizes e.g. \u00e6 => ae\n @param transliterate: use any available transliterators for non-Latin scripts, e.g.\n for the Greek phrase \u03b4\u03b9\u03b1\u03c6\u03bf\u03c1\u03b5\u03c4\u03b9\u03ba\u03bf\u03cd\u03c2 becomes diaphoretiko\u00fas\u0331\n @param strip_accents: strip accented characters e.g. \u00e9 => e, \u00e7 => c. This loses some\n information in various languags, but in general we want\n @param decompose: perform Unicode normalization (NFD form)\n @param lowercase: UTF-8 lowercase the string\n @param trim_string: trim spaces on either side of the string\n @param replace_word_hyphens: add version of the string replacing hyphens with space\n @param delete_word_hyphens: add version of the string with hyphens deleted\n @param replace_numeric_hyphens: add version of the string with numeric hyphens replaced \n e.g. 12345-6789 => 12345 6789\n @param delete_numeric_hyphens: add version of the string with numeric hyphens removed\n e.g. 12345-6789 => 123456789\n @param split_alpha_from_numeric: split tokens like CR17 into CR 17, helps with expansion\n of certain types of highway abbreviations\n @param delete_final_periods: remove final periods on abbreviations e.g. St. => St\n @param delete_acronym_periods: remove periods in acronyms e.g. U.S.A. 
=> USA\n @param drop_english_possessives: normalize possessives e.g. Mark's => Marks\n @param delete_apostrophes: delete other types of hyphens e.g. O'Malley => OMalley\n @param expand_numex: converts numeric expressions e.g. Twenty sixth => 26th,\n using either the supplied languages or the result of\n automated language classification.\n @param roman_numerals: normalize Roman numerals e.g. IX => 9. Since these can be\n ambiguous (especially I and V), turning this on simply\n adds another version of the string if any potential\n Roman numerals are found."}
{"_id": "q_5560", "text": "Normalizes a string, tokenizes, and normalizes each token\n with string and token-level options.\n\n This version only uses libpostal's deterministic normalizations\n i.e. methods with a single output. The string tree version will\n return multiple normalized strings, each with tokens.\n\n Usage:\n normalized_tokens(u'St.-Barth\u00e9lemy')"}
{"_id": "q_5561", "text": "Parse address into components.\n\n @param address: the address as either Unicode or a UTF-8 encoded string\n @param language (optional): language code\n @param country (optional): country code"}
{"_id": "q_5562", "text": "Hash the given address into normalized strings that can be used to group similar\n addresses together for more detailed pairwise comparison. This can be thought of\n as the blocking function in record linkage or locally-sensitive hashing in the\n document near-duplicate detection.\n\n Required\n --------\n @param labels: array of component labels as either Unicode or UTF-8 encoded strings\n e.g. [\"house_number\", \"road\", \"postcode\"]\n @param values: array of component values as either Unicode or UTF-8 encoded strings\n e.g. [\"123\", \"Broadway\", \"11216\"]. Note len(values) must be equal to\n len(labels).\n\n Options\n -------\n @param languages: a tuple or list of ISO language code strings (e.g. \"en\", \"fr\", \"de\", etc.)\n to use in expansion. If None is passed, use language classifier\n to detect language automatically.\n @param with_name: use name in the hashes\n @param with_address: use house_number & street in the hashes\n @param with_unit: use secondary unit as part of the hashes\n @param with_city_or_equivalent: use the city, city_district, suburb, or island name as one of\n the geo qualifiers\n @param with_small_containing_boundaries: use small containing boundaries (currently state_district)\n as one of the geo qualifiers\n @param with_postal_code: use postal code as one of the geo qualifiers\n @param with_latlon: use geohash + neighbors as one of the geo qualifiers\n @param latitude: latitude (Y coordinate)\n @param longitude: longitude (X coordinate)\n @param geohash_precision: geohash tile size (default = 6)\n @param name_and_address_keys: include keys with name + address + geo\n @param name_only_keys: include keys with name + geo\n @param address_only_keys: include keys with address + geo"}
{"_id": "q_5563", "text": "Removed all dusty containers with 'Exited' in their status"}
{"_id": "q_5564", "text": "Removes all dangling images as well as all images referenced in a dusty spec; forceful removal is not used"}
{"_id": "q_5565", "text": "Write the given config to disk as a Dusty sub-config\n in the Nginx includes directory. Then, either start nginx\n or tell it to reload its config to pick up what we've\n just written."}
{"_id": "q_5566", "text": "We require the list of all remote repo paths to be passed in\n to this because otherwise we would need to import the spec assembler\n in this module, which would give us circular imports."}
{"_id": "q_5567", "text": "Daemon-side command to ensure we're running the latest\n versions of any managed repos, including the\n specs repo, before we do anything else in the up flow."}
{"_id": "q_5568", "text": "This command will use the compilers to get compose specs\n will pass those specs to the systems that need them. Those\n systems will in turn launch the services needed to make the\n local environment go."}
{"_id": "q_5569", "text": "Restart any containers associated with Dusty, or associated with\n the provided app_or_service_names."}
{"_id": "q_5570", "text": "Return a dictionary containing the Compose spec required to run\n Dusty's nginx container used for host forwarding."}
{"_id": "q_5571", "text": "Given the assembled specs and app_name, this function will return all apps and services specified in\n 'conditional_links' if they are specified in 'apps' or 'services' in assembled_specs. That means that\n some other part of the system has declared them as necessary, so they should be linked to this app"}
{"_id": "q_5572", "text": "This function returns a dictionary of the docker-compose.yml specifications for one app"}
{"_id": "q_5573", "text": "This function returns a dictionary of the docker_compose specifications\n for one service. Currently, this is just the Dusty service spec with\n an additional volume mount to support Dusty's cp functionality."}
{"_id": "q_5574", "text": "Returns a list of formatted port mappings for an app"}
{"_id": "q_5575", "text": "This returns formatted volume specifications for a docker-compose app. We mount the app\n as well as any libs it needs so that local code is used in our container, instead of whatever\n code was in the docker image.\n\n Additionally, we create a volume for the /cp directory used by Dusty to facilitate\n easy file transfers using `dusty cp`."}
{"_id": "q_5576", "text": "Expands specs.libs.depends.libs to include any indirectly required libs"}
{"_id": "q_5577", "text": "Returns all libs that are referenced in specs.apps.depends.libs"}
{"_id": "q_5578", "text": "Returns all services that are referenced in specs.apps.depends.services,\n or in specs.bundles.services"}
{"_id": "q_5579", "text": "This function adds an assets key to the specs, which is filled in with a dictionary\n of all assets defined by apps and libs in the specs"}
{"_id": "q_5580", "text": "This function takes an app or library name and will return the corresponding repo\n for that app or library"}
{"_id": "q_5581", "text": "Given the spec of an app or library, returns all repos that are guaranteed\n to live in the same container"}
{"_id": "q_5582", "text": "Given the name of an app or library, returns all repos that are guaranteed\n to live in the same container"}
{"_id": "q_5583", "text": "Return a string of all host rules required to match\n the given spec. This string is wrapped in the Dusty hosts\n header and footer so it can be easily removed later."}
{"_id": "q_5584", "text": "Moves the temporary binary to the location of the binary that's currently being run.\n Preserves owner, group, and permissions of original binary"}
{"_id": "q_5585", "text": "Context manager for setting up a TaskQueue. Upon leaving the\n context manager, all tasks that were enqueued will be executed\n in parallel subject to `pool_size` concurrency constraints."}
{"_id": "q_5586", "text": "This will output the nginx stream config string for specific port spec"}
{"_id": "q_5587", "text": "Starting with Yosemite, launchd was rearchitected and now only one\n launchd process runs for all users. This allows us to much more easily\n impersonate a user through launchd and extract the environment\n variables from their running processes."}
{"_id": "q_5588", "text": "Will check the mac_username config value; if it is present, will load that user's\n SSH_AUTH_SOCK environment variable to the current environment. This allows git clones\n to behave the same for the daemon as they do for the user"}
{"_id": "q_5589", "text": "Recursively delete a path upon exiting this context\n manager. Supports targets that are files or directories."}
{"_id": "q_5590", "text": "Copy a path from the local filesystem to a path inside a Dusty\n container. The files on the local filesystem must be accessible\n by the user specified in mac_username."}
{"_id": "q_5591", "text": "Given a dictionary containing the expanded dusty DAG specs this function will\n return a dictionary containing the port mappings needed by downstream methods. Currently\n this includes docker_compose, virtualbox, nginx and hosts_file."}
{"_id": "q_5592", "text": "Returns the Docker registry host associated with\n a given image name."}
{"_id": "q_5593", "text": "Reads the local Docker client config for the current user\n and returns all registries to which the user may be logged in.\n This is intended to be run client-side, not by the daemon."}
{"_id": "q_5594", "text": "Puts the client logger into streaming mode, which sends\n unbuffered input through to the socket one character at a time.\n We also disable propagation so the root logger does not\n receive many one-byte emissions. This context handler\n was originally created for streaming Compose up's\n terminal output through to the client and should only be\n used for similarly complex circumstances."}
{"_id": "q_5595", "text": "This is used to compile the command that will be run when the docker container starts\n up. This command has to install any libs that the app uses, run the `always` command, and\n run the `once` command if the container is being launched for the first time"}
{"_id": "q_5596", "text": "Raise the open file handles permitted by the Dusty daemon process\n and its child processes. The number we choose here needs to be within\n the OS X default kernel hard limit, which is 10240."}
{"_id": "q_5597", "text": "Start the daemon's HTTP server on a separate thread.\n This server is only used for servicing container status\n requests from Dusty's custom 502 page."}
{"_id": "q_5598", "text": "Ripped off and slightly modified based on docker-py's\n kwargs_from_env utility function."}
{"_id": "q_5599", "text": "Get a list of containers associated with the list\n of services. If no services are provided, attempts to\n return all containers associated with Dusty."}
{"_id": "q_5600", "text": "This function is used with `dusty up`. It will check all active repos to see if\n they are exported. If any are missing, it will replace current dusty exports with\n exports that are needed for currently active repos, and restart\n the nfs server"}
{"_id": "q_5601", "text": "Our exports file will be invalid if this folder doesn't exist, and the NFS server\n will not run correctly."}
{"_id": "q_5602", "text": "Given an existing consumer ID, return any new lines from the\n log since the last time the consumer was consumed."}
{"_id": "q_5603", "text": "This returns a list of formatted volume specs for an app. These mounts declared in the apps' spec\n and mounts declared in all lib specs the app depends on"}
{"_id": "q_5604", "text": "Returns a list of the formatted volume specs for a lib"}
{"_id": "q_5605", "text": "Returns a list of the formatted volume mounts for all libs that an app uses"}
{"_id": "q_5606", "text": "Initialize the Dusty VM if it does not already exist."}
{"_id": "q_5607", "text": "Start the Dusty VM if it is not already running."}
{"_id": "q_5608", "text": "Using VBoxManage is 0.5 seconds or so faster than Machine."}
{"_id": "q_5609", "text": "Something in the VM chain, either VirtualBox or Machine, helpfully\n sets up localhost-to-VM forwarding on port 22. We can inspect this\n rule to determine the port on localhost which gets forwarded to\n 22 in the VM."}
{"_id": "q_5610", "text": "Returns the MAC address assigned to the host-only adapter,\n using output from VBoxManage. Returned MAC address has no colons\n and is lower-cased."}
{"_id": "q_5611", "text": "Given the rather-complex output from an 'ip addr show' command\n on the VM, parse the output to determine the IP address\n assigned to the interface with the given MAC."}
{"_id": "q_5612", "text": "Determine the host-only IP of the Dusty VM through Virtualbox and SSH\n directly, bypassing Docker Machine. We do this because Docker Machine is\n much slower, taking about 600ms total. We are basically doing the same\n flow Docker Machine does in its own code."}
{"_id": "q_5613", "text": "Converts a python dict to a namedtuple, saving memory."}
{"_id": "q_5614", "text": "By default, return latest EOD Composite Price for a stock ticker.\n On average, each feed contains 3 data sources.\n\n Supported tickers + Available Day Ranges are here:\n https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip\n\n Args:\n ticker (string): Unique identifier for stock ticker\n startDate (string): Start of ticker range in YYYY-MM-DD format\n endDate (string): End of ticker range in YYYY-MM-DD format\n fmt (string): 'csv' or 'json'\n frequency (string): Resample frequency"}
{"_id": "q_5615", "text": "Return a pandas.DataFrame of historical prices for one or more ticker symbols.\n\n By default, return latest EOD Composite Price for a list of stock tickers.\n On average, each feed contains 3 data sources.\n\n Supported tickers + Available Day Ranges are here:\n https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip\n or from the TiingoClient.list_tickers() method.\n\n Args:\n tickers (string/list): One or more unique identifiers for a stock ticker.\n startDate (string): Start of ticker range in YYYY-MM-DD format.\n endDate (string): End of ticker range in YYYY-MM-DD format.\n metric_name (string): Optional parameter specifying metric to be returned for each\n ticker. In the event of a single ticker, this is optional and if not specified\n all of the available data will be returned. In the event of a list of tickers,\n this parameter is required.\n frequency (string): Resample frequency (defaults to daily)."}
{"_id": "q_5616", "text": "Make a local copy of the sqlite cookie database and return the new filename.\n This is necessary in case this database is still being written to while the user browses\n to avoid sqlite locking errors."}
{"_id": "q_5617", "text": "Try to load cookies from all supported browsers and return combined cookiejar\n Optionally pass in a domain name to only load cookies from the specified domain"}
{"_id": "q_5618", "text": "Decrypt encoded cookies"}
{"_id": "q_5619", "text": "Get the application bearer token from client_id and client_secret."}
{"_id": "q_5620", "text": "Make a request to the spotify API with the current bearer credentials.\n\n Parameters\n ----------\n route : Union[tuple[str, str], Route]\n A tuple of the method and url or a :class:`Route` object.\n kwargs : Any\n keyword arguments to pass into :class:`aiohttp.ClientSession.request`"}
{"_id": "q_5621", "text": "Get an album's tracks by an ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by.\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The offset from which Spotify should start yielding.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code."}
{"_id": "q_5622", "text": "Get a spotify artist by their ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by."}
{"_id": "q_5623", "text": "Get an artist's tracks by their ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by.\n include_groups : INCLUDE_GROUPS_TP\n INCLUDE_GROUPS\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The offset from which Spotify should start yielding.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code."}
{"_id": "q_5624", "text": "Get an artist's top tracks per country with their ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by.\n country : COUNTRY_TP\n COUNTRY"}
{"_id": "q_5625", "text": "Get related artists for an artist by their ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by."}
{"_id": "q_5626", "text": "Get a single category used to tag items in Spotify.\n\n Parameters\n ----------\n category_id : str\n The Spotify category ID for the category.\n country : COUNTRY_TP\n COUNTRY\n locale : LOCALE_TP\n LOCALE"}
{"_id": "q_5627", "text": "Get a list of Spotify playlists tagged with a particular category.\n\n Parameters\n ----------\n category_id : str\n The Spotify category ID for the category.\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The index of the first item to return. Default: 0\n country : COUNTRY_TP\n COUNTRY"}
{"_id": "q_5628", "text": "Get a list of categories used to tag items in Spotify.\n\n Parameters\n ----------\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The index of the first item to return. Default: 0\n country : COUNTRY_TP\n COUNTRY\n locale : LOCALE_TP\n LOCALE"}
{"_id": "q_5629", "text": "Get a list of Spotify featured playlists.\n\n Parameters\n ----------\n locale : LOCALE_TP\n LOCALE\n country : COUNTRY_TP\n COUNTRY\n timestamp : TIMESTAMP_TP\n TIMESTAMP\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The index of the first item to return. Default: 0"}
{"_id": "q_5630", "text": "Get a list of new album releases featured in Spotify.\n\n Parameters\n ----------\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The index of the first item to return. Default: 0\n country : COUNTRY_TP\n COUNTRY"}
{"_id": "q_5631", "text": "Get Recommendations Based on Seeds.\n\n Parameters\n ----------\n seed_artists : str\n A comma separated list of Spotify IDs for seed artists. Up to 5 seed values may be provided.\n seed_genres : str\n A comma separated list of any genres in the set of available genre seeds. Up to 5 seed values may be provided.\n seed_tracks : str\n A comma separated list of Spotify IDs for a seed track. Up to 5 seed values may be provided.\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code.\n max_* : Optional[Keyword arguments]\n For each tunable track attribute, a hard ceiling on the selected track attribute\u2019s value can be provided.\n min_* : Optional[Keyword arguments]\n For each tunable track attribute, a hard floor on the selected track attribute\u2019s value can be provided.\n target_* : Optional[Keyword arguments]\n For each of the tunable track attributes (below) a target value may be provided."}
{"_id": "q_5632", "text": "Check to see if the current user is following one or more artists or other Spotify users.\n\n Parameters\n ----------\n ids : List[str]\n A comma-separated list of the artist or the user Spotify IDs to check.\n A maximum of 50 IDs can be sent in one request.\n type : Optional[str]\n The ID type: either \"artist\" or \"user\".\n Default: \"artist\""}
{"_id": "q_5633", "text": "Get the albums of a Spotify artist.\n\n Parameters\n ----------\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The offset from which Spotify should start yielding.\n include_groups : INCLUDE_GROUPS_TP\n INCLUDE_GROUPS\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code.\n\n Returns\n -------\n albums : List[Album]\n The albums of the artist."}
{"_id": "q_5634", "text": "Get the total amount of tracks in the album.\n\n Parameters\n ----------\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code.\n\n Returns\n -------\n total : int\n The total amount of tracks."}
{"_id": "q_5635", "text": "Get the user's currently playing track.\n\n Returns\n -------\n context, track : Tuple[Context, Track]\n A tuple of the context and track."}
{"_id": "q_5636", "text": "Get information about the user's available devices.\n\n Returns\n -------\n devices : List[Device]\n The devices the user has available."}
{"_id": "q_5637", "text": "Get tracks from the current user's recently played tracks.\n\n Returns\n -------\n playlist_history : List[Dict[str, Union[Track, Context, str]]]\n A list of playlist history objects.\n Each object is a dict with a timestamp, track and context field."}
{"_id": "q_5638", "text": "Create a playlist for a Spotify user.\n\n Parameters\n ----------\n name : str\n The name of the playlist.\n public : Optional[bool]\n The public/private status of the playlist.\n `True` for public, `False` for private.\n collaborative : Optional[bool]\n If `True`, the playlist will become collaborative and other users will be able to modify the playlist.\n description : Optional[str]\n The playlist description\n\n Returns\n -------\n playlist : Playlist\n The playlist that was created."}
{"_id": "q_5639", "text": "Get the album's tracks from Spotify.\n\n Parameters\n ----------\n limit : Optional[int]\n The limit on how many tracks to retrieve for this album (default is 20).\n offset : Optional[int]\n The offset from where the API should start in the tracks.\n \n Returns\n -------\n tracks : List[Track]\n The tracks of the album."}
{"_id": "q_5640", "text": "Load all of the album's tracks; depending on how many the album has, this may be a long operation.\n\n Parameters\n ----------\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code. Provide this parameter if you want to apply Track Relinking.\n \n Returns\n -------\n tracks : List[Track]\n The tracks of the album."}
{"_id": "q_5641", "text": "Retrieve an album with a Spotify ID.\n\n Parameters\n ----------\n spotify_id : str\n The ID to search for.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code.\n\n Returns\n -------\n album : Album\n The album from the ID."}
{"_id": "q_5642", "text": "Retrieve a track with a Spotify ID.\n\n Parameters\n ----------\n spotify_id : str\n The ID to search for.\n\n Returns\n -------\n track : Track\n The track from the ID."}
{"_id": "q_5643", "text": "Retrieve multiple albums with a list of Spotify IDs.\n\n Parameters\n ----------\n ids : List[str]\n The IDs to look for.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code.\n\n Returns\n -------\n albums : List[Album]\n The albums from the IDs."}
{"_id": "q_5644", "text": "Retrieve multiple artists with a list of Spotify IDs.\n\n Parameters\n ----------\n ids : List[str]\n The IDs to look for.\n\n Returns\n -------\n artists : List[Artist]\n The artists from the IDs."}
{"_id": "q_5645", "text": "Decorator to assert that an object has an attribute when run."}
{"_id": "q_5646", "text": "Construct an OAuth2 object from a `spotify.Client`."}
{"_id": "q_5647", "text": "Construct an OAuth2 URL instead of an OAuth2 object."}
{"_id": "q_5648", "text": "Attributes used when constructing url parameters."}
{"_id": "q_5649", "text": "URL parameters used."}
{"_id": "q_5650", "text": "Execute the logic behind the meaning of ExpirationDate + return the matched status.\n\n :return:\n The status of the tested domain.\n Can be one of the official status.\n :rtype: str"}
{"_id": "q_5651", "text": "Read the code and update all links."}
{"_id": "q_5652", "text": "Check if the current version is greater than the older one."}
{"_id": "q_5653", "text": "Check if the current branch is `dev`."}
{"_id": "q_5654", "text": "Check if we have to put the previous version into the deprecated list."}
{"_id": "q_5655", "text": "Backup the current execution state."}
{"_id": "q_5656", "text": "Restore data from the given path."}
{"_id": "q_5657", "text": "Check if we have to ignore the given line.\n\n :param line: The line from the file.\n :type line: str"}
{"_id": "q_5658", "text": "Handle the data from the options.\n\n :param options: The list of options from the rule.\n :type options: list\n\n :return: The list of domains to return globally.\n :rtype: list"}
{"_id": "q_5659", "text": "Format the extracted adblock line before passing it to the system.\n\n :param to_format: The extracted line from the file.\n :type to_format: str\n\n :param result: A list of the result of this method.\n :type result: list\n\n :return: The list of domains or IP to test.\n :rtype: list"}
{"_id": "q_5660", "text": "Return the HTTP code status.\n\n :return: The matched and formatted status code.\n :rtype: str|int|None"}
{"_id": "q_5661", "text": "Check if the given domain is a subdomain.\n\n :param domain: The domain we are checking.\n :type domain: str\n\n :return: The subdomain state.\n :rtype: bool\n\n .. warning::\n If an empty or a non-string :code:`domain` is given, we return :code:`None`."}
{"_id": "q_5662", "text": "Check the syntax of the given IPv4.\n\n :param ip: The IPv4 to check the syntax for.\n :type ip: str\n\n :return: The syntax validity.\n :rtype: bool\n\n .. warning::\n If an empty or a non-string :code:`ip` is given, we return :code:`None`."}
{"_id": "q_5663", "text": "Check if the given information is a URL.\n If it is the case, it downloads and updates the location of the file to test.\n\n :param passed: The url passed to the system.\n :type passed: str\n\n :return: The state of the check.\n :rtype: bool"}
{"_id": "q_5664", "text": "Manage the loading of the url system."}
{"_id": "q_5665", "text": "Decide whether or not we print the header."}
{"_id": "q_5666", "text": "Manage the database, autosave and autocontinue systems for the case that we are reading\n a file.\n\n :param current: The currently tested element.\n :type current: str\n\n :param last: The last element of the list.\n :type last: str\n\n :param status: The status of the currently tested element.\n :type status: str"}
{"_id": "q_5667", "text": "Manage the case that we want to test only a domain.\n\n :param domain: The domain or IP to test.\n :type domain: str\n\n :param last_domain:\n The last domain to test if we are testing a file.\n :type last_domain: str\n\n :param return_status: Tell us if we need to return the status.\n :type return_status: bool"}
{"_id": "q_5668", "text": "Manage the case that we want to test only a given url.\n\n :param url_to_test: The url to test.\n :type url_to_test: str\n\n :param last_url:\n The last url of the file we are testing\n (if it exists)\n :type last_url: str"}
{"_id": "q_5669", "text": "Format the extracted domain before passing it to the system.\n\n :param extracted_domain: The extracted domain.\n :type extracted_domain: str\n\n :return: The formatted domain or IP to test.\n :rtype: str\n\n .. note::\n Understand by formatting the fact that we get rid\n of all the noise around the domain we want to test."}
{"_id": "q_5670", "text": "Extract all non-commented lines from the file we are testing.\n\n :return: The elements to test.\n :rtype: list"}
{"_id": "q_5671", "text": "Manage the case where we need to test each domain of a given file path.\n\n .. note::\n 1 domain per line."}
{"_id": "q_5672", "text": "Manage the case where we have to test a file.\n\n .. note::\n 1 URL per line."}
{"_id": "q_5673", "text": "Switch PyFunceble.CONFIGURATION variables to their opposite.\n\n :param variable:\n The variable name to switch.\n The variable should be an index of our configuration system.\n If we want to switch a bool variable, we should parse\n it here.\n :type variable: str|bool\n\n :param custom:\n Let us know if we have to switch the parsed variable instead\n of our configuration index.\n :type custom: bool\n\n :return:\n The opposite of the configuration index or the given variable.\n :rtype: bool\n\n :raises:\n :code:`Exception`\n When the configuration is not valid. In other words,\n if the PyFunceble.CONFIGURATION[variable_name] is not a bool."}
{"_id": "q_5674", "text": "Get the status while testing for an IP or domain.\n\n .. note::\n We consider that the domain or IP we are currently testing\n is into :code:`PyFunceble.INTERN[\"to_test\"]`."}
{"_id": "q_5675", "text": "Handle the backend of the given status."}
{"_id": "q_5676", "text": "Get the structure we are going to work with.\n\n :return: The structure we have to work with.\n :rtype: dict"}
{"_id": "q_5677", "text": "Creates the given directory if it does not exist.\n\n :param directory: The directory to create.\n :type directory: str\n\n :param loop: Tell us if we are in the creation loop or not.\n :type loop: bool"}
{"_id": "q_5678", "text": "Delete the directories which are not registered into our structure."}
{"_id": "q_5679", "text": "Set the paths to the configuration files.\n\n :param path_to_config: The possible path to the config to load.\n :type path_to_config: str\n\n :return:\n The path to the config to read (0), the path to the default\n configuration to read as fallback (1).\n :rtype: tuple"}
{"_id": "q_5680", "text": "Download `public-suffix.json` if not present."}
{"_id": "q_5681", "text": "Download the latest version of `dir_structure_production.json`."}
{"_id": "q_5682", "text": "Execute the logic behind the merging."}
{"_id": "q_5683", "text": "Convert the version to a shorter one.\n\n :param version: The version to split.\n :type version: str\n\n :param return_non_digits:\n Activate the return of the non-digit parts of the split\n version.\n :type return_non_digits: bool\n\n :return: The split version name/numbers.\n :rtype: list"}
{"_id": "q_5684", "text": "Compare the given versions.\n\n :param local: The local version converted by split_versions().\n :type local: list\n\n :param upstream: The upstream version converted by split_versions().\n :type upstream: list\n\n :return:\n - True: local < upstream\n - None: local == upstream\n - False: local > upstream\n :rtype: bool|None"}
{"_id": "q_5685", "text": "Let us know if we are currently in the cloned version of\n PyFunceble, which implicitly means that we are in development mode."}
{"_id": "q_5686", "text": "Handle and check that some configuration index exists."}
{"_id": "q_5687", "text": "Generate unified file. Understand by that that we use a unified table\n instead of a separate table for each status, which could result in a\n misunderstanding."}
{"_id": "q_5688", "text": "Generate a file according to the domain status."}
{"_id": "q_5689", "text": "Check if we are allowed to produce a file based on the given\n information.\n\n :return:\n The state of the production.\n True: We do not produce a file.\n False: We do produce a file.\n :rtype: bool"}
{"_id": "q_5690", "text": "Implement the standard and alphabetical sorting.\n\n :param element: The element we are currently reading.\n :type element: str\n\n :return: The formatted element.\n :rtype: str"}
{"_id": "q_5691", "text": "The idea behind this method is to sort a list of domains hierarchically.\n\n :param element: The element we are currently reading.\n :type element: str\n\n :return: The formatted element.\n :rtype: str\n\n .. note::\n For a domain like :code:`aaa.bbb.ccc.tdl`.\n\n A normal sorting is done in the following order:\n 1. :code:`aaa`\n 2. :code:`bbb`\n 3. :code:`ccc`\n 4. :code:`tdl`\n\n This method allows the sorting to be done in the following order:\n 1. :code:`tdl`\n 2. :code:`ccc`\n 3. :code:`bbb`\n 4. :code:`aaa`"}
{"_id": "q_5692", "text": "Initiate the IANA database if it has not been done yet."}
{"_id": "q_5693", "text": "Extract the extension from the given block.\n Plus get its referer."}
{"_id": "q_5694", "text": "Update the content of the `iana-domains-db` file."}
{"_id": "q_5695", "text": "Retrieve the mining information."}
{"_id": "q_5696", "text": "Back up the mined information."}
{"_id": "q_5697", "text": "Remove the currently tested element from the mining\n data."}
{"_id": "q_5698", "text": "Provide the list of mined domains or URLs so they can be added to the list\n queue.\n\n :return: The list of mined domains or URLs.\n :rtype: list"}
{"_id": "q_5699", "text": "Get and return the content of the given log file.\n\n :param file: The file we have to get the content from.\n :type file: str\n\n :return: The content of the given file.\n :rtype: dict"}
{"_id": "q_5700", "text": "Write the content into the given file.\n\n :param content: The dict to write.\n :type content: dict\n\n :param file: The file to write.\n :type file: str"}
{"_id": "q_5701", "text": "Logs the case that the referer was not found.\n\n :param extension: The extension of the domain we are testing.\n :type extension: str"}
{"_id": "q_5702", "text": "Construct header of the table according to template.\n\n :param data_to_print:\n The list of data to print into the header of the table.\n :type data_to_print: list\n\n :param header_separator:\n The separator to use between the table header and our data.\n :type header_separator: str\n\n :param colomn_separator: The separator to use between each column.\n :type colomn_separator: str\n\n :return: The data to print in a list format.\n :rtype: list"}
{"_id": "q_5703", "text": "Management and creation of templates of header.\n Please consider as \"header\" the title of each column.\n\n :param do_not_print:\n Tell us if we have to print the header or not.\n :type do_not_print: bool"}
{"_id": "q_5704", "text": "Construct the table of data according to given size.\n\n :param size: The maximal length of each string in the table.\n :type size: list\n\n :return:\n A dict with all information about the data and the maximal\n size at which to print it.\n :rtype: OrderedDict\n\n :raises:\n :code:`Exception`\n If the data and the size do not have the same length."}
{"_id": "q_5705", "text": "Get the size of each column from the header.\n\n :param header:\n The header template we have to get the size from.\n :type header: dict\n\n :return: The maximal size of each data to print.\n :rtype: list"}
{"_id": "q_5706", "text": "Management and input of data to the table.\n\n :raises:\n :code:`Exception`\n When self.data_to_print is not a list."}
{"_id": "q_5707", "text": "Save the current time to the file.\n\n :param last:\n Tell us if we are at the very end of the file testing.\n :type last: bool"}
{"_id": "q_5708", "text": "Set the databases files to delete."}
{"_id": "q_5709", "text": "Delete almost all discovered files.\n\n :param clean_all:\n Tell the subsystem if we have to clean everything instead\n of almost everything.\n :type clean_all: bool"}
{"_id": "q_5710", "text": "Get hash of the given data.\n\n :param algo: The algorithm to use.\n :type algo: str"}
{"_id": "q_5711", "text": "Return the hash of the given file"}
{"_id": "q_5712", "text": "Remove a given key from a given dictionary.\n\n :param key_to_remove: The key(s) to delete.\n :type key_to_remove: list|str\n\n :return: The dict without the given key(s).\n :rtype: dict|None"}
{"_id": "q_5713", "text": "Rename the given keys from the given dictionary.\n\n :param key_to_rename:\n The key(s) to rename.\n Expected format: :code:`{old:new}`\n :type key_to_rename: dict\n\n :param strict:\n Tell us if we have to rename the exact index or\n the index which looks like the given key(s)\n\n :return: The well formatted dict.\n :rtype: dict|None"}
{"_id": "q_5714", "text": "Merge the content of to_merge into the given main dictionary.\n\n :param to_merge: The dictionary to merge.\n :type to_merge: dict\n\n :param strict:\n Tell us if we have to strictly merge lists.\n\n :code:`True`: We follow index\n :code:`False`: We follow element (content)\n :type strict: bool\n\n :return: The merged dict.\n :rtype: dict"}
{"_id": "q_5715", "text": "Save a dictionary into a JSON file.\n\n :param destination:\n A path to a file where we're going to\n write the converted dict into a JSON format.\n :type destination: str"}
{"_id": "q_5716", "text": "Fix the given path.\n\n :param splited_path: A list to convert to the right path.\n :type splited_path: list\n\n :return: The fixed path.\n :rtype: str"}
{"_id": "q_5717", "text": "Read a given file path and return its content.\n\n :return: The content of the given file path.\n :rtype: str"}
{"_id": "q_5718", "text": "Return a well formatted list. Basically, it sorts a list and removes duplicates.\n\n :return: A sorted list, without duplicates.\n :rtype: list"}
{"_id": "q_5719", "text": "Return a list of string which don't match the\n given regex."}
{"_id": "q_5720", "text": "Used to get an exploitable result of re.search.\n\n :return: The data of the match status.\n :rtype: mixed"}
{"_id": "q_5721", "text": "Used to replace a matched string with another.\n\n :return: The data after replacement.\n :rtype: str"}
{"_id": "q_5722", "text": "Print on screen and on file the percentages for each status."}
{"_id": "q_5723", "text": "Check if the given URL is valid.\n\n :param url: The url to validate.\n :type url: str\n\n :param return_base:\n Allow us the return of the url base (if the URL is formatted correctly).\n :type return_base: bool\n\n :param return_formatted:\n Allow us to get the URL converted to IDNA if the conversion\n is activated.\n :type return_formatted: bool\n\n\n :return: The validity of the URL or its base.\n :rtype: bool|str"}
{"_id": "q_5724", "text": "Check if the given domain is a subdomain.\n\n :param domain: The domain to validate.\n :type domain: str\n\n :return: The validity of the subdomain.\n :rtype: bool"}
{"_id": "q_5725", "text": "Execute the logic behind the Syntax handling.\n\n :return: The syntax status.\n :rtype: str"}
{"_id": "q_5726", "text": "Return the current content of the inactive-db.json file."}
{"_id": "q_5727", "text": "Save the current database into the inactive-db.json file."}
{"_id": "q_5728", "text": "Get the timestamp where we are going to save our current list.\n\n :return: The timestamp to append with the currently tested element.\n :rtype: int|str"}
{"_id": "q_5729", "text": "Get the content of the database.\n\n :return: The content of the database.\n :rtype: list"}
{"_id": "q_5730", "text": "Check if the currently tested element is into the database."}
{"_id": "q_5731", "text": "Backup the database into its file."}
{"_id": "q_5732", "text": "Check if the current time is older than the one in the database."}
{"_id": "q_5733", "text": "Implementation of UNIX whois.\n\n :param whois_server: The WHOIS server to use to get the record.\n :type whois_server: str\n\n :param domain: The domain to get the whois record from.\n :type domain: str\n\n :param timeout: The timeout to apply to the request.\n :type timeout: int\n\n :return: The whois record from the given whois server, if exist.\n :rtype: str|None"}
{"_id": "q_5734", "text": "Execute the logic behind the URL handling.\n\n :return: The status of the URL.\n :rtype: str"}
{"_id": "q_5735", "text": "Return the referer aka the WHOIS server of the current domain extension."}
{"_id": "q_5736", "text": "docstring for _randone"}
{"_id": "q_5737", "text": "Wrapper for Zotero._cleanup"}
{"_id": "q_5738", "text": "Add a retrieved template to the cache for 304 checking\n accepts a dict and key name, adds the retrieval time, and adds both\n to self.templates as a new dict using the specified key"}
{"_id": "q_5739", "text": "Remove keys we added for internal use"}
{"_id": "q_5740", "text": "Return the contents of My Publications"}
{"_id": "q_5741", "text": "Return the total number of items in the specified collection"}
{"_id": "q_5742", "text": "Return the total number of items for the specified tag"}
{"_id": "q_5743", "text": "General method for returning total counts"}
{"_id": "q_5744", "text": "Retrieve info about the permissions associated with the\n key associated to the given Zotero instance"}
{"_id": "q_5745", "text": "Get the last modified version"}
{"_id": "q_5746", "text": "Retrieve all collections and subcollections. Works for top-level collections\n or for a specific collection. Works at all collection depths."}
{"_id": "q_5747", "text": "Get subcollections for a specific collection"}
{"_id": "q_5748", "text": "Retrieve all items in the library for a particular query\n This method will override the 'limit' parameter if it's been set"}
{"_id": "q_5749", "text": "Return a list of dicts which are dumped CSL JSON"}
{"_id": "q_5750", "text": "Return a list of strings formatted as HTML citation entries"}
{"_id": "q_5751", "text": "Get a template for a new item"}
{"_id": "q_5752", "text": "Create attachments\n accepts a list of one or more attachment template dicts\n and an optional parent Item ID. If this is specified,\n attachments are created under this ID"}
{"_id": "q_5753", "text": "Delete one or more saved searches by passing a list of one or more\n unique search keys"}
{"_id": "q_5754", "text": "Add one or more tags to a retrieved item,\n then update it on the server\n Accepts a dict, and one or more tags to add to it\n Returns the updated item from the server"}
{"_id": "q_5755", "text": "Update an existing item\n Accepts one argument, a dict containing Item data"}
{"_id": "q_5756", "text": "Update existing items\n Accepts one argument, a list of dicts containing Item data"}
{"_id": "q_5757", "text": "Validate saved search conditions, raising an error if any contain invalid operators"}
{"_id": "q_5758", "text": "Split a multiline string into a list, excluding blank lines."}
{"_id": "q_5759", "text": "Split a string with comma or space-separated elements into a list."}
{"_id": "q_5760", "text": "Evaluate environment markers."}
{"_id": "q_5761", "text": "Get configuration value."}
{"_id": "q_5762", "text": "Set configuration value."}
{"_id": "q_5763", "text": "Compatibility helper to use setup.cfg in setup.py."}
{"_id": "q_5764", "text": "Get LanguageTool version."}
{"_id": "q_5765", "text": "Get supported languages."}
{"_id": "q_5766", "text": "Set LanguageTool directory."}
{"_id": "q_5767", "text": "Match text against enabled rules."}
{"_id": "q_5768", "text": "Return newest compatible version.\n\n >>> version = get_newest_possible_languagetool_version()\n >>> version in [JAVA_6_COMPATIBLE_VERSION,\n ... JAVA_7_COMPATIBLE_VERSION,\n ... LATEST_VERSION]\n True"}
{"_id": "q_5769", "text": "Get common directory in a zip file if any."}
{"_id": "q_5770", "text": "Make a Qt async slot run on asyncio loop."}
{"_id": "q_5771", "text": "Class decorator to add a logger to a class."}
{"_id": "q_5772", "text": "Selector has delivered us an event."}
{"_id": "q_5773", "text": "Add more ASN.1 MIB source repositories.\n\n MibCompiler.compile will invoke each of configured source objects\n in order of their addition asking each to fetch MIB module specified\n by name.\n\n Args:\n sources: reader object(s)\n\n Returns:\n reference to itself (can be used for call chaining)"}
{"_id": "q_5774", "text": "Add more transformed MIBs repositories to borrow MIBs from.\n\n Whenever MibCompiler.compile encounters MIB module which neither of\n the *searchers* can find or fetched ASN.1 MIB module can not be\n parsed (due to syntax errors), these *borrowers* objects will be\n invoked in order of their addition asking each if already transformed\n MIB can be fetched (borrowed).\n\n Args:\n borrowers: borrower object(s)\n\n Returns:\n reference to itself (can be used for call chaining)"}
{"_id": "q_5775", "text": "Get current object.\n This is useful if you want the real\n object behind the proxy at a time for performance reasons or because\n you want to pass the object into a different context."}
{"_id": "q_5776", "text": "Kullback information criterion\n\n .. math:: KIC(k) = \\log(\\rho_k) + 3 \\frac{k+1}{N}\n\n :validation: double checked versus octave."}
{"_id": "q_5777", "text": "Approximate corrected Kullback information\n\n .. math:: AKICc(k) = \\log(\\rho_k) + \\frac{p}{N*(N-k)} + (3-\\frac{k+2}{N})*\\frac{k+1}{N-k-2}"}
{"_id": "q_5778", "text": "Final prediction error criterion\n\n .. math:: FPE(k) = \\frac{N + k + 1}{N - k - 1} \\rho_k\n\n :validation: double checked versus octave."}
{"_id": "q_5779", "text": "Minimum Description Length\n\n .. math:: MDL(k) = N \\log \\rho_k + p \\log N\n\n :validation: results"}
{"_id": "q_5780", "text": "Generate the Main examples gallery reStructuredText\n\n Start the sphinx-gallery configuration and recursively scan the examples\n directories in order to populate the examples gallery"}
{"_id": "q_5781", "text": "Setup sphinx-gallery sphinx extension"}
{"_id": "q_5782", "text": "Correlation function\n\n This function should give the same results as :func:`xcorr` but it\n returns the positive lags only. Moreover the algorithm does not use\n FFT as compared to other algorithms.\n\n :param array x: first data array of length N\n :param array y: second data array of length N. If not specified, computes the\n autocorrelation.\n :param int maxlags: compute cross correlation between [0:maxlags]\n when maxlags is not specified, the range of lags is [0:maxlags].\n :param str norm: normalisation in ['biased', 'unbiased', None, 'coeff']\n\n * *biased* correlation=raw/N,\n * *unbiased* correlation=raw/(N-`|lag|`)\n * *coeff* correlation=raw/(rms(x).rms(y))/N\n * None correlation=raw\n\n :return:\n * a numpy.array correlation sequence, r[1,N]\n * a float for the zero-lag correlation, r[0]\n\n The *unbiased* correlation has the form:\n\n .. math::\n\n \\hat{r}_{xx} = \\frac{1}{N-m}T \\sum_{n=0}^{N-m-1} x[n+m]x^*[n] T\n\n The *biased* correlation differs by the front factor only:\n\n .. math::\n\n \\check{r}_{xx} = \\frac{1}{N}T \\sum_{n=0}^{N-m-1} x[n+m]x^*[n] T\n\n with :math:`0\\leq m\\leq N-1`.\n\n .. doctest::\n\n >>> from spectrum import CORRELATION\n >>> x = [1,2,3,4,5]\n >>> res = CORRELATION(x,x, maxlags=0, norm='biased')\n >>> res[0]\n 11.0\n\n .. note:: this function should be replaced by :func:`xcorr`.\n\n .. seealso:: :func:`xcorr`"}
{"_id": "q_5783", "text": "Cross-correlation using numpy.correlate\n\n Estimates the cross-correlation (and autocorrelation) sequence of a random\n process of length N. By default, there is no normalisation and the output\n sequence of the cross-correlation has a length 2*N+1.\n\n :param array x: first data array of length N\n :param array y: second data array of length N. If not specified, computes the\n autocorrelation.\n :param int maxlags: compute cross correlation between [-maxlags:maxlags]\n when maxlags is not specified, the range of lags is [-N+1:N-1].\n :param str option: normalisation in ['biased', 'unbiased', None, 'coeff']\n\n The true cross-correlation sequence is\n\n .. math:: r_{xy}[m] = E(x[n+m].y^*[n]) = E(x[n].y^*[n-m])\n\n However, in practice, only a finite segment of one realization of the\n infinite-length random process is available.\n\n The correlation is estimated using numpy.correlate(x,y,'full').\n Normalisation is handled by this function using the following cases:\n\n * 'biased': Biased estimate of the cross-correlation function\n * 'unbiased': Unbiased estimate of the cross-correlation function\n * 'coeff': Normalizes the sequence so the autocorrelations at zero\n lag is 1.0.\n\n :return:\n * a numpy.array containing the cross-correlation sequence (length 2*N-1)\n * lags vector\n\n .. note:: If x and y are not the same length, the shorter vector is\n zero-padded to the length of the longer vector.\n\n .. rubric:: Examples\n\n .. doctest::\n\n >>> from spectrum import xcorr\n >>> x = [1,2,3,4,5]\n >>> c, l = xcorr(x,x, maxlags=0, norm='biased')\n >>> c\n array([ 11.])\n\n .. seealso:: :func:`CORRELATION`."}
{"_id": "q_5784", "text": "Finds the minimum eigenvalue of a Hermitian Toeplitz matrix\n\n The classical power method is used together with a fast Toeplitz\n equation solution routine. The eigenvector is normalized to unit length.\n\n :param T0: Scalar corresponding to real matrix element t(0)\n :param T: Array of M complex matrix elements t(1),...,t(M) from the left column of the Toeplitz matrix\n :param TOL: Real scalar tolerance; routine exits when [ EVAL(k) - EVAL(k-1) ]/EVAL(k-1) < TOL, where the index k denotes the iteration number.\n\n :return:\n * EVAL - Real scalar denoting the minimum eigenvalue of matrix\n * EVEC - Array of M complex eigenvector elements associated\n\n\n .. note::\n * External array T must be dimensioned >= M\n * array EVEC must be >= M+1\n * Internal array E must be dimensioned >= M+1\n\n * **dependencies**\n * :meth:`spectrum.toeplitz.HERMTOEP`"}
{"_id": "q_5785", "text": "r\"\"\"Generate the Morlet waveform\n\n\n The Morlet waveform is defined as follows:\n\n .. math:: w[x] = \\cos{5x} \\exp^{-x^2/2}\n\n :param lb: lower bound\n :param ub: upper bound\n :param int n: waveform data samples\n\n\n .. plot::\n :include-source:\n :width: 80%\n\n from spectrum import morlet\n from pylab import plot\n plot(morlet(0,10,100))"}
{"_id": "q_5786", "text": "Convert reflection coefficients to prediction filter polynomial\n\n :param k: reflection coefficients"}
{"_id": "q_5787", "text": "Convert reflection coefficients to log area ratios.\n\n :param k: reflection coefficients\n :return: inverse sine parameters\n\n The log area ratio is defined by G = log((1+k)/(1-k)) , where the K\n parameter is the reflection coefficient.\n\n .. seealso:: :func:`lar2rc`, :func:`rc2poly`, :func:`rc2ac`, :func:`rc2ic`.\n\n :References:\n [1] J. Makhoul, \"Linear Prediction: A Tutorial Review,\" Proc. IEEE, Vol.63, No.4, pp.561-580, Apr 1975."}
{"_id": "q_5788", "text": "Convert log area ratios to reflection coefficients.\n\n :param g: log area ratios\n :returns: the reflection coefficients\n\n .. seealso: :func:`rc2lar`, :func:`poly2rc`, :func:`ac2rc`, :func:`is2rc`.\n\n :References:\n [1] J. Makhoul, \"Linear Prediction: A Tutorial Review,\" Proc. IEEE, Vol.63, No.4, pp.561-580, Apr 1975."}
{"_id": "q_5789", "text": "Convert line spectral frequencies to prediction filter coefficients\n\n Returns a vector a containing the prediction filter coefficients from a vector lsf of line spectral frequencies.\n\n .. doctest::\n\n >>> from spectrum import lsf2poly\n >>> lsf = [0.7842 , 1.5605 , 1.8776 , 1.8984, 2.3593]\n >>> a = lsf2poly(lsf)\n\n # array([ 1.00000000e+00, 6.14837835e-01, 9.89884967e-01,\n # 9.31594056e-05, 3.13713832e-03, -8.12002261e-03 ])\n\n .. seealso:: poly2lsf, rc2poly, ac2poly, rc2is"}
{"_id": "q_5790", "text": "Prediction polynomial to line spectral frequencies.\n\n Converts the prediction polynomial specified by A into the\n corresponding line spectral frequencies, LSF, after normalising the\n prediction polynomial by A(1).\n\n .. doctest::\n\n >>> from spectrum import poly2lsf\n >>> a = [1.0000, 0.6149, 0.9899, 0.0000 ,0.0031, -0.0082]\n >>> lsf = poly2lsf(a)\n\n # lsf is approximately array([0.7842, 1.5605, 1.8776, 1.8984, 2.3593])\n\n .. seealso:: lsf2poly, poly2rc, poly2qc, rc2is"}
{"_id": "q_5791", "text": "Convert a two-sided PSD to a one-sided PSD\n\n In order to keep the power in the one-sided PSD the same\n as in the two-sided version, the one-sided values are twice\n as much as in the input data (except for the zero-lag and N-lag\n values).\n\n ::\n\n >>> twosided_2_onesided([10, 2, 3, 3, 2, 8])\n array([ 10., 4., 6., 8.])"}
{"_id": "q_5792", "text": "Convert a one-sided PSD to a two-sided PSD\n\n In order to keep the power in the two-sided PSD the same\n as in the one-sided version, the two-sided values are 2 times\n lower than the input data (except for the zero-lag and N-lag\n values).\n\n ::\n\n >>> onesided_2_twosided([10, 4, 6, 8])\n array([ 10., 2., 3., 3., 2., 8.])"}
{"_id": "q_5793", "text": "Convert a two-sided PSD to a center-dc PSD"}
{"_id": "q_5794", "text": "Convert a center-dc PSD to a twosided PSD"}
{"_id": "q_5795", "text": "A simple test example with two close frequencies"}
{"_id": "q_5796", "text": "Plot the data set, using the sampling information to set the x-axis\n correctly."}
{"_id": "q_5797", "text": "Returns the autocovariance of signal s at all lags.\n\n Adheres to the definition\n sxx[k] = E{S[n]S[n+k]} = cov{S[n],S[n+k]}\n where E{} is the expectation operator, and S is a zero mean process"}
{"_id": "q_5798", "text": "Separate `filename` content between docstring and the rest\n\n Strongly inspired from ast.get_docstring.\n\n Returns\n -------\n docstring: str\n docstring of `filename`\n rest: str\n `filename` content without the docstring"}
{"_id": "q_5799", "text": "Returns md5sum of file"}
{"_id": "q_5800", "text": "Returns True if src_file has a different md5sum"}
{"_id": "q_5801", "text": "Test existence of image file and no change in md5sum of\n example"}
{"_id": "q_5802", "text": "Save all open matplotlib figures of the example code-block\n\n Parameters\n ----------\n image_path : str\n Path where plots are saved (format string which accepts figure number)\n fig_count : int\n Previous figure number count. Figure number add from this number\n\n Returns\n -------\n list of strings containing the full path to each figure"}
{"_id": "q_5803", "text": "Save the thumbnail image"}
{"_id": "q_5804", "text": "Executes the code block of the example file"}
{"_id": "q_5805", "text": "This function solves Ax=B directly, without taking advantage of any special properties of the input matrix."}
{"_id": "q_5806", "text": "Simple periodogram, but matrices accepted.\n\n :param x: an array or matrix of data samples.\n :param NFFT: length of the data before FFT is computed (zero padding)\n :param bool detrend: detrend the data before computing the FFT\n :param float sampling: sampling frequency of the input :attr:`data`.\n :param bool scale_by_freq: scale the PSD by the sampling frequency\n :param str window: window name (see :mod:`window` for valid names)\n\n :return: 2-sided PSD if complex data, 1-sided if real.\n\n If a matrix is provided (using numpy.matrix), then a periodogram\n is computed for each row. The returned matrix has the same shape as the input\n matrix.\n\n The mean of the input data is also removed from the data before computing\n the psd.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from pylab import grid, semilogy\n from spectrum import data_cosine, speriodogram\n data = data_cosine(N=1024, A=0.1, sampling=1024, freq=200)\n semilogy(speriodogram(data, detrend=False, sampling=1024), marker='o')\n grid(True)\n\n\n .. plot::\n :width: 80%\n :include-source:\n\n import numpy\n from spectrum import speriodogram, data_cosine\n from pylab import figure, semilogy, imshow\n # create N data sets and make the frequency dependent on the time\n N = 100\n m = numpy.concatenate([data_cosine(N=1024, A=0.1, sampling=1024, freq=x)\n for x in range(1, N + 1)])\n m.resize(N, 1024)\n res = speriodogram(m)\n figure(1)\n semilogy(res)\n figure(2)\n imshow(res.transpose(), aspect='auto')\n\n .. todo:: a proper spectrogram class/function that takes care of normalisation"}
{"_id": "q_5807", "text": "r\"\"\"Simple periodogram wrapper of the pylab psd function.\n\n :param A: the input data\n :param int NFFT: total length of the final data sets (padded\n with zero if needed; default is 4096)\n :param str window:\n\n :Technical documentation:\n\n When we calculate the periodogram of a set of data we get an estimation\n of the spectral density. In fact, as we use a Fourier transform of\n truncated segments, the spectrum is the convolution of the data with a\n rectangular window whose Fourier transform is\n\n .. math::\n\n W(s)= \\frac{1}{N^2} \\left[ \\frac{\\sin(\\pi s)}{\\sin(\\pi s/N)} \\right]^2\n\n Thus oscillations and sidelobes appear around the main frequency. One aim of the tapering is to reduce these effects. We multiply the data by a window whose sidelobes are much smaller than the main lobe. A classical window is the Hanning window, but other windows are available. However, we must take this energy into account and divide the spectrum by the energy of the taper used. Thus the periodogram becomes:\n\n .. math::\n\n D_k \\equiv \\sum_{j=0}^{N-1}c_jw_j \\; e^{2\\pi ijk/N} \\qquad k=0,...,N-1\n\n .. math::\n\n P(0)=P(f_0)=\\frac{1}{2\\pi W_{ss}}\\arrowvert{D_0}\\arrowvert^2\n\n .. math::\n\n P(f_k)=\\frac{1}{2\\pi W_{ss}} \\left[\\arrowvert{D_k}\\arrowvert^2+\\arrowvert{D_{N-k}}\\arrowvert^2\\right] \\qquad k=0,1,..., \\left( \\frac{N}{2}-1 \\right)\n\n .. math::\n\n P(f_c)=P(f_{N/2})= \\frac{1}{2\\pi W_{ss}} \\arrowvert{D_{N/2}}\\arrowvert^2\n\n with\n\n .. math::\n\n {W_{ss}} \\equiv N\\sum_{j=0}^{N-1}w_j^2\n\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import WelchPeriodogram, marple_data\n psd = WelchPeriodogram(marple_data, 256)"}
{"_id": "q_5808", "text": "Return the centered frequency range as a generator.\n\n ::\n\n >>> print(list(Range(8).centerdc_gen()))\n [-0.5, -0.375, -0.25, -0.125, 0.0, 0.125, 0.25, 0.375]"}
{"_id": "q_5809", "text": "Return the one-sided frequency range as a generator.\n\n If :attr:`N` is even, the length is N/2 + 1.\n If :attr:`N` is odd, the length is (N+1)/2.\n\n ::\n\n >>> print(list(Range(8).onesided()))\n [0.0, 0.125, 0.25, 0.375, 0.5]\n >>> print(list(Range(9).onesided()))\n [0.0, 0.1111, 0.2222, 0.3333, 0.4444]"}
{"_id": "q_5810", "text": "r\"\"\"Return the power contained in the PSD\n\n if scale_by_freq is False, the power is:\n\n .. math:: P = N \\sum_{k=1}^{N} P_{xx}(k)\n\n else, it is\n\n .. math:: P = \\sum_{k=1}^{N} P_{xx}(k) \\frac{df}{2\\pi}\n\n .. todo:: check these equations"}
{"_id": "q_5811", "text": "Returns a dictionary with the elements of a Jupyter notebook"}
{"_id": "q_5812", "text": "Converts the RST text from the examples docstrings and comments\n into markdown text for the IPython notebooks"}
{"_id": "q_5813", "text": "Saves the notebook to a file"}
{"_id": "q_5814", "text": "Autoregressive and moving average estimators.\n\n This function provides an estimate of the autoregressive\n parameters, the moving average parameters, and the driving\n white noise variance of an ARMA(P,Q) for a complex or real data sequence.\n\n The parameters are estimated using three steps:\n\n * Estimate the AR parameters from the original data based on a least\n squares modified Yule-Walker technique,\n * Produce a residual time sequence by filtering the original data\n with a filter based on the AR parameters,\n * Estimate the MA parameters from the residual time sequence.\n\n :param array X: Array of data samples (length N)\n :param int P: Desired number of AR parameters\n :param int Q: Desired number of MA parameters\n :param int lag: Maximum lag to use for autocorrelation estimates\n\n :return:\n * A - Array of complex P AR parameter estimates\n * B - Array of complex Q MA parameter estimates\n * RHO - White noise variance estimate\n\n .. note::\n * lag must be >= Q (MA order)\n\n **dependencies**:\n * :meth:`spectrum.correlation.CORRELATION`\n * :meth:`spectrum.covar.arcovar`\n * :meth:`spectrum.arma.ma`\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import arma_estimate, arma2psd, marple_data\n import pylab\n\n a,b, rho = arma_estimate(marple_data, 15, 15, 30)\n psd = arma2psd(A=a, B=b, rho=rho, sides='centerdc', norm=True)\n pylab.plot(10 * pylab.log10(psd))\n pylab.ylim([-50,0])\n\n :reference: [Marple]_"}
{"_id": "q_5815", "text": "Moving average estimator.\n\n This program provides an estimate of the moving average parameters\n and driving noise variance for a data sequence based on a\n long AR model and a least squares fit.\n\n :param array X: The input data array\n :param int Q: Desired MA model order (must be >0 and <M)\n :param int M: Order of \"long\" AR model (suggested: at least 2*Q)\n\n :return:\n * MA - Array of Q complex MA parameter estimates\n * RHO - Real scalar of white noise variance estimate\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import arma2psd, ma, marple_data\n import pylab\n\n # Estimate 15 MA parameters\n b, rho = ma(marple_data, 15, 30)\n # Create the PSD from those MA parameters\n psd = arma2psd(B=b, rho=rho, sides='centerdc')\n # and finally plot the PSD\n pylab.plot(pylab.linspace(-0.5, 0.5, 4096), 10 * pylab.log10(psd/max(psd)))\n pylab.axis([-0.5, 0.5, -30, 0])\n\n :reference: [Marple]_"}
{"_id": "q_5816", "text": "PSD estimate using correlogram method.\n\n\n :param array X: complex or real data samples X(1) to X(N)\n :param array Y: complex data samples Y(1) to Y(N). If provided, computes\n the cross PSD, otherwise the PSD is returned\n :param int lag: highest lag index to compute. Must be less than N\n :param str window_name: see :mod:`window` for list of valid names\n :param str norm: one of the valid normalisations of :func:`xcorr` (biased,\n unbiased, coeff, None)\n :param int NFFT: total length of the final data sets (padded with zero\n if needed; default is 4096)\n :param str correlation_method: either `xcorr` or `CORRELATION`.\n CORRELATION should be removed in the future.\n\n :return:\n * Array of real (cross) power spectral density estimate values. This is\n a two sided array with negative values following the positive ones\n whatever is the input data (real or complex).\n\n .. rubric:: Description:\n\n The exact power spectral density is the Fourier transform of the\n autocorrelation sequence:\n\n .. math:: P_{xx}(f) = T \\sum_{m=-\\infty}^{\\infty} r_{xx}[m] exp^{-j2\\pi fmT}\n\n The correlogram method of PSD estimation substitutes a finite sequence of\n autocorrelation estimates :math:`\\hat{r}_{xx}` in place of :math:`r_{xx}`.\n This estimation can be computed with :func:`xcorr` or :func:`CORRELATION` by\n choosing a proper lag `L`. The estimated PSD is then\n\n .. math:: \\hat{P}_{xx}(f) = T \\sum_{m=-L}^{L} \\hat{r}_{xx}[m] exp^{-j2\\pi fmT}\n\n The lag index must be less than the number of data samples `N`. Ideally, it\n should be around `N/10` [Marple]_ so as to avoid the greater statistical\n variance associated with higher lags.\n\n To reduce the leakage of the implicit rectangular window and therefore to\n reduce the bias in the estimate, a tapering window is normally used, leading\n to the so-called Blackman-Tukey correlogram:\n\n .. math:: \\hat{P}_{BT}(f) = T \\sum_{m=-L}^{L} w[m] \\hat{r}_{xx}[m] exp^{-j2\\pi fmT}\n\n The correlogram for the cross power spectral estimate is\n\n .. math:: \\hat{P}_{xy}(f) = T \\sum_{m=-L}^{L} \\hat{r}_{xy}[m] exp^{-j2\\pi fmT}\n\n which is computed when :attr:`Y` is provided. In that case,\n :math:`r_{yx} = r_{xy}` so we compute the correlation only once.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import CORRELOGRAMPSD, marple_data\n from spectrum.tools import cshift\n from pylab import log10, axis, grid, plot, linspace\n\n psd = CORRELOGRAMPSD(marple_data, marple_data, lag=15)\n f = linspace(-0.5, 0.5, len(psd))\n psd = cshift(psd, len(psd)/2)\n plot(f, 10*log10(psd/max(psd)))\n axis([-0.5,0.5,-50,0])\n grid(True)\n\n .. seealso:: :func:`create_window`, :func:`CORRELATION`, :func:`xcorr`,\n :class:`pcorrelogram`."}
{"_id": "q_5817", "text": "Select first block delimited by start_tag and end_tag"}
{"_id": "q_5818", "text": "Parse a dictionary from the search index"}
{"_id": "q_5819", "text": "Parse a Sphinx search index\n\n Parameters\n ----------\n searchindex : str\n The Sphinx search index (contents of searchindex.js)\n\n Returns\n -------\n filenames : list of str\n The file names parsed from the search index.\n objects : dict\n The objects parsed from the search index."}
{"_id": "q_5820", "text": "Get a valid link, False if not found"}
{"_id": "q_5821", "text": "r\"\"\"Return polynomial transfer function representation from zeros and poles\n\n :param ndarray z: Zeros of the transfer function.\n :param ndarray p: Poles of the transfer function.\n :param float k: System gain.\n\n :return:\n b : ndarray Numerator polynomial.\n a : ndarray Denominator polynomial.\n\n :func:`zpk2tf` forms transfer function polynomials from the zeros, poles, and gains\n of a system in factored form.\n\n zpk2tf(z,p,k) finds a rational transfer function\n\n .. math:: \\frac{B(s)}{A(s)} = \\frac{b_1 s^{n-1}+\\dots b_{n-1}s+b_n}{a_1 s^{m-1}+\\dots a_{m-1}s+a_m}\n\n given a system in factored transfer function form\n\n .. math:: H(s) = \\frac{Z(s)}{P(s)} = k \\frac{(s-z_1)(s-z_2)\\dots(s-z_m)}{(s-p_1)(s-p_2)\\dots(s-p_n)}\n\n\n with p being the pole locations and z the zero locations.\n The gains for each numerator transfer function are in vector k.\n The zeros and poles must be real or come in complex conjugate pairs.\n The polynomial denominator coefficients are returned in row vector a and\n the polynomial numerator coefficients are returned in matrix b, which has\n as many rows as there are columns of z.\n\n Inf values can be used as place holders in z if some columns have fewer zeros than others.\n\n .. note:: wrapper of scipy function zpk2tf"}
{"_id": "q_5822", "text": "Zero-pole-gain representation to state-space representation\n\n :param sequence z,p: Zeros and poles.\n :param float k: System gain.\n\n :return:\n * A, B, C, D : ndarray State-space matrices.\n\n .. note:: wrapper of scipy function zpk2ss"}
{"_id": "q_5823", "text": "A Window visualisation tool\n\n :param N: length of the window\n :param name: name of the window\n :param NFFT: padding used by the FFT\n :param mindB: the minimum frequency power in dB\n :param maxdB: the maximum frequency power in dB\n :param kargs: optional arguments passed to :func:`create_window`\n\n This function plots the window shape and its equivalent in the Fourier domain.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'kaiser', beta=8.)"}
{"_id": "q_5824", "text": "r\"\"\"Kaiser window\n\n :param N: window length\n :param beta: kaiser parameter (default is 8.6)\n\n To obtain a Kaiser window that designs an FIR filter with\n sidelobe attenuation of :math:`\\alpha` dB, use the following :math:`\\beta` where\n :math:`\\beta = \\pi \\alpha`.\n\n .. math::\n\n w_n = \\frac{I_0\\left(\\pi\\alpha\\sqrt{1-\\left(\\frac{2n}{M}-1\\right)^2}\\right)} {I_0(\\pi \\alpha)}\n\n where\n\n * :math:`I_0` is the zeroth order Modified Bessel function of the first kind.\n * :math:`\\alpha` is a real number that determines the shape of the \n window. It determines the trade-off between main-lobe width and side \n lobe level.\n * the length of the sequence is N=M+1.\n\n The Kaiser window can approximate many other windows by varying \n the :math:`\\beta` parameter:\n\n ===== ========================\n beta Window shape\n ===== ========================\n 0 Rectangular\n 5 Similar to a Hamming\n 6 Similar to a Hanning\n 8.6 Similar to a Blackman\n ===== ========================\n\n .. plot::\n :width: 80%\n :include-source:\n\n from pylab import plot, legend, xlim\n from spectrum import window_kaiser\n N = 64\n for beta in [1,2,4,8,16]:\n plot(window_kaiser(N, beta), label='beta='+str(beta))\n xlim(0,N)\n legend()\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'kaiser', beta=8.)\n\n .. seealso:: numpy.kaiser, :func:`spectrum.window.create_window`"}
{"_id": "q_5825", "text": "r\"\"\"Blackman window\n\n :param N: window length\n\n .. math:: a_0 - a_1 \\cos(\\frac{2\\pi n}{N-1}) +a_2 \\cos(\\frac{4\\pi n }{N-1})\n\n with\n\n .. math::\n\n a_0 = (1-\\alpha)/2, a_1=0.5, a_2=\\alpha/2 \\rm{\\;and\\; \\alpha}=0.16\n\n When :math:`\\alpha=0.16`, this is the conventional Blackman window with\n :math:`a_0=0.42` and :math:`a_2=0.08`.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'blackman')\n\n .. note:: Although Numpy implements a blackman window for :math:`\\alpha=0.16`,\n this implementation is valid for any :math:`\\alpha`.\n\n .. seealso:: numpy.blackman, :func:`create_window`, :class:`Window`"}
{"_id": "q_5826", "text": "r\"\"\"Gaussian window\n\n :param N: window length\n\n .. math:: \\exp^{-0.5 \\left( \\sigma\\frac{n}{N/2} \\right)^2}\n\n with :math:`-\\frac{N-1}{2}\\leq n \\leq \\frac{N-1}{2}`.\n\n .. note:: N-1 is used to be in agreement with the octave convention. The ENBW of\n 1.4 is also in agreement with [Harris]_\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'gaussian', alpha=2.5)\n\n\n\n .. seealso:: scipy.signal.gaussian, :func:`create_window`"}
{"_id": "q_5827", "text": "r\"\"\"Cosine tapering window also known as sine window.\n\n :param N: window length\n\n .. math:: w(n) = \\cos\\left(\\frac{\\pi n}{N-1} - \\frac{\\pi}{2}\\right) = \\sin \\left(\\frac{\\pi n}{N-1}\\right)\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'cosine')\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5828", "text": "r\"\"\"Lanczos window also known as sinc window.\n\n :param N: window length\n\n .. math:: w(n) = sinc \\left( \\frac{2n}{N-1} - 1 \\right)\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'lanczos')\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5829", "text": "r\"\"\"Nuttall tapering window\n\n :param N: window length\n\n .. math:: w(n) = a_0 - a_1 \\cos\\left(\\frac{2\\pi n}{N-1}\\right)+ a_2 \\cos\\left(\\frac{4\\pi n}{N-1}\\right)- a_3 \\cos\\left(\\frac{6\\pi n}{N-1}\\right)\n\n with :math:`a_0 = 0.355768`, :math:`a_1 = 0.487396`, :math:`a_2=0.144232` and :math:`a_3=0.012604`\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'nuttall', mindB=-80)\n\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5830", "text": "r\"\"\"Blackman Nuttall window\n\n Returns a minimum 4-term Blackman-Harris window. The window is minimum in the sense that its maximum sidelobes are minimized.\n The coefficients for this window differ from the Blackman-Harris window coefficients and produce slightly lower sidelobes.\n\n :param N: window length\n\n .. math:: w(n) = a_0 - a_1 \\cos\\left(\\frac{2\\pi n}{N-1}\\right)+ a_2 \\cos\\left(\\frac{4\\pi n}{N-1}\\right)- a_3 \\cos\\left(\\frac{6\\pi n}{N-1}\\right)\n\n with :math:`a_0 = 0.3635819`, :math:`a_1 = 0.4891775`, :math:`a_2=0.1365995` and :math:`a_3=0.0106411`\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'blackman_nuttall', mindB=-80)\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5831", "text": "r\"\"\"Blackman Harris window\n\n :param N: window length\n\n .. math:: w(n) = a_0 - a_1 \\cos\\left(\\frac{2\\pi n}{N-1}\\right)+ a_2 \\cos\\left(\\frac{4\\pi n}{N-1}\\right)- a_3 \\cos\\left(\\frac{6\\pi n}{N-1}\\right)\n\n =============== =========\n coeff value\n =============== =========\n :math:`a_0` 0.35875\n :math:`a_1` 0.48829\n :math:`a_2` 0.14128\n :math:`a_3` 0.01168\n =============== =========\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'blackman_harris', mindB=-80)\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5832", "text": "r\"\"\"Bohman tapering window\n\n :param N: window length\n\n .. math:: w(n) = (1-|x|) \\cos (\\pi |x|) + \\frac{1}{\\pi} \\sin(\\pi |x|)\n\n where x is a length N vector of linearly spaced values between\n -1 and 1.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'bohman')\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5833", "text": "r\"\"\"Flat-top tapering window\n\n Returns a symmetric or periodic flat-top window.\n\n :param N: window length\n :param mode: way the data are normalised. If mode is *symmetric*, then\n divide n by N-1. If mode is *periodic*, divide by N,\n to be consistent with the octave code.\n\n When using windows for filter design, the *symmetric* mode\n should be used (default). When using windows for spectral analysis, the *periodic*\n mode should be used. The mathematical form of the flat-top window in the symmetric\n case is:\n\n .. math:: w(n) = a_0\n - a_1 \\cos\\left(\\frac{2\\pi n}{N-1}\\right)\n + a_2 \\cos\\left(\\frac{4\\pi n}{N-1}\\right)\n - a_3 \\cos\\left(\\frac{6\\pi n}{N-1}\\right)\n + a_4 \\cos\\left(\\frac{8\\pi n}{N-1}\\right)\n\n ===== =============\n coeff value\n ===== =============\n a0 0.21557895\n a1 0.41663158\n a2 0.277263158\n a3 0.083578947\n a4 0.006947368\n ===== =============\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'flattop')\n\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5834", "text": "Taylor tapering window\n\n Taylor windows allow you to make tradeoffs between the\n mainlobe width and the sidelobe level (sll).\n\n Implemented as described by Carrara, Goodman, and Majewski\n in 'Spotlight Synthetic Aperture Radar: Signal Processing Algorithms',\n pages 512-513.\n\n :param N: window length\n :param float nbar: number of nearly constant-level sidelobes adjacent to the mainlobe\n :param float sll: maximum sidelobe level in dB\n\n The default values give equal-height sidelobes (nbar) and maximum\n sidelobe level (sll).\n\n .. warning:: not implemented\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5835", "text": "r\"\"\"Riesz tapering window\n\n :param N: window length\n\n .. math:: w(n) = 1 - \\left| \\frac{n}{N/2} \\right|^2\n\n with :math:`-N/2 \\leq n \\leq N/2`.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'riesz')\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5836", "text": "r\"\"\"Riemann tapering window\n\n :param int N: window length\n\n .. math:: w(n) = \\frac{\\sin(2\\pi n/N)}{2\\pi n/N}\n\n with :math:`-N/2 \\leq n \\leq N/2`.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import window_visu\n window_visu(64, 'riemann')\n\n .. seealso:: :func:`create_window`, :class:`Window`"}
{"_id": "q_5837", "text": "Compute the window data frequency response\n\n :param norm: True by default. Normalise the frequency data.\n :param int NFFT: total length of the final data sets (2048 by default;\n if less than the data length, then NFFT is set to twice the data length).\n\n The response is stored in :attr:`response`.\n\n .. note:: Units are dB (20 log10) since we plot the frequency response"}
{"_id": "q_5838", "text": "Plot the window in the frequency domain\n\n :param mindB: change the default lower y bound\n :param maxdB: change the default upper y bound\n :param bool norm: if True, normalise the frequency response.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum.window import Window\n w = Window(64, name='hamming')\n w.plot_frequencies()"}
{"_id": "q_5839", "text": "Plot the window in the time domain\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum.window import Window\n w = Window(64, name='hamming')\n w.plot_window()"}
{"_id": "q_5840", "text": "Plotting method to plot both time and frequency domain results.\n\n See :meth:`plot_frequencies` for the optional arguments.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum.window import Window\n w = Window(64, name='hamming')\n w.plot_time_freq()"}
{"_id": "q_5841", "text": "Solve the general Toeplitz linear equations\n\n Solve TX=Z\n\n :param T0: zero lag value\n :param TC: r1 to rN\n :param TR: r1 to rN\n\n :return: X\n\n Requires 3M^2+M operations instead of M^3 with Gaussian elimination\n\n .. warning:: not used right now"}
{"_id": "q_5842", "text": "Builds a codeobj summary by identifying and resolving used names\n\n >>> code = '''\n ... from a.b import c\n ... import d as e\n ... print(c)\n ... e.HelloWorld().f.g\n ... '''\n >>> for name, o in sorted(identify_names(code).items()):\n ... print(name, o['name'], o['module'], o['module_short'])\n c c a.b a.b\n e.HelloWorld HelloWorld d d"}
{"_id": "q_5843", "text": "Generates RST to place a thumbnail in a gallery"}
{"_id": "q_5844", "text": "r\"\"\"Compute AR coefficients using Yule-Walker method\n\n :param X: Array of complex data values, X(1) to X(N)\n :param int order: Order of autoregressive process to be fitted (integer)\n :param str norm: Use a biased or unbiased correlation.\n :param bool allow_singularity:\n\n :return:\n * AR coefficients (complex)\n * variance of white noise (Real)\n * reflection coefficients for use in lattice filter\n\n .. rubric:: Description:\n\n The Yule-Walker method returns the polynomial A corresponding to the\n AR parametric signal model estimate of vector X using the Yule-Walker\n (autocorrelation) method. The autocorrelation may be computed using a\n **biased** or **unbiased** estimation. In practice, the biased estimate of\n the autocorrelation is used for the unknown true autocorrelation. Indeed,\n an unbiased estimate may result in a nonpositive-definite autocorrelation\n matrix, so a biased estimate leads to a stable AR filter.\n The following matrix form represents the Yule-Walker equations. They are\n solved by means of the Levinson-Durbin recursion:\n\n .. math::\n\n \\left( \\begin{array}{cccc}\n r(1) & r(2)^* & \\dots & r(n)^*\\\\\n r(2) & r(1)^* & \\dots & r(n-1)^*\\\\\n \\dots & \\dots & \\dots & \\dots\\\\\n r(n) & \\dots & r(2) & r(1) \\end{array} \\right)\n \\left( \\begin{array}{cccc}\n a(2)\\\\\n a(3) \\\\\n \\dots \\\\\n a(n+1) \\end{array} \\right)\n =\n \\left( \\begin{array}{cccc}\n -r(2)\\\\\n -r(3) \\\\\n \\dots \\\\\n -r(n+1) \\end{array} \\right)\n\n The outputs consist of the AR coefficients, the estimated variance of the\n white noise process, and the reflection coefficients. These outputs can be\n used to estimate the optimal order by using :mod:`~spectrum.criteria`.\n\n .. rubric:: Examples:\n\n From a known AR process of order 4, we estimate those AR parameters using\n the aryule function.\n\n .. doctest::\n\n >>> from scipy.signal import lfilter\n >>> from spectrum import *\n >>> from numpy.random import randn\n >>> A =[1, -2.7607, 3.8106, -2.6535, 0.9238]\n >>> noise = randn(1, 1024)\n >>> y = lfilter([1], A, noise);\n >>> #filter a white noise input to create AR(4) process\n >>> [ar, var, reflec] = aryule(y[0], 4)\n >>> # ar should contain values similar to A\n\n The PSD estimate of the data samples is computed and plotted as follows:\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import *\n from pylab import *\n\n ar, P, k = aryule(marple_data, 15, norm='biased')\n psd = arma2psd(ar)\n plot(linspace(-0.5, 0.5, 4096), 10 * log10(psd/max(psd)))\n axis([-0.5, 0.5, -60, 0])\n\n .. note:: The outputs have been double checked against (1) octave outputs\n (octave has norm='biased' by default) and (2) Marple test code.\n\n .. seealso:: This function uses :func:`~spectrum.levinson.LEVINSON` and\n :func:`~spectrum.correlation.CORRELATION`. See the :mod:`~spectrum.criteria`\n module for criteria to automatically select the AR order.\n\n :References: [Marple]_"}
{"_id": "q_5845", "text": "r\"\"\"Levinson-Durbin recursion.\n\n Find the coefficients of a length(r)-1 order autoregressive linear process\n\n :param r: autocorrelation sequence of length N + 1 (first element being the zero-lag autocorrelation)\n :param order: requested order of the autoregressive coefficients. default is N.\n :param allow_singularity: false by default. Other implementations may be True (e.g., octave)\n\n :return:\n * the `N+1` autoregressive coefficients :math:`A=(1, a_1...a_N)`\n * the prediction errors\n * the `N` reflection coefficient values\n\n This algorithm solves the set of complex linear simultaneous equations\n using the Levinson algorithm.\n\n .. math::\n\n \\bold{T}_M \\left( \\begin{array}{c} 1 \\\\ \\bold{a}_M \\end{array} \\right) =\n \\left( \\begin{array}{c} \\rho_M \\\\ \\bold{0}_M \\end{array} \\right)\n\n where :math:`\\bold{T}_M` is a Hermitian Toeplitz matrix with elements\n :math:`T_0, T_1, \\dots ,T_M`.\n\n .. note:: Solving these equations by Gaussian elimination would\n require :math:`M^3` operations whereas the Levinson algorithm\n requires :math:`M^2+M` additions and :math:`M^2+M` multiplications.\n\n This is equivalent to solving the following symmetric Toeplitz system of\n linear equations\n\n .. math::\n\n \\left( \\begin{array}{cccc}\n r_1 & r_2^* & \\dots & r_{n}^*\\\\\n r_2 & r_1^* & \\dots & r_{n-1}^*\\\\\n \\dots & \\dots & \\dots & \\dots\\\\\n r_n & \\dots & r_2 & r_1 \\end{array} \\right)\n \\left( \\begin{array}{cccc}\n a_2\\\\\n a_3 \\\\\n \\dots \\\\\n a_{N+1} \\end{array} \\right)\n =\n \\left( \\begin{array}{cccc}\n -r_2\\\\\n -r_3 \\\\\n \\dots \\\\\n -r_{N+1} \\end{array} \\right)\n\n where :math:`r = (r_1 ... r_{N+1})` is the input autocorrelation vector, and\n :math:`r_i^*` denotes the complex conjugate of :math:`r_i`. The input r is typically\n a vector of autocorrelation coefficients where lag 0 is the first\n element :math:`r_1`.\n\n\n .. doctest::\n\n >>> import numpy; from spectrum import LEVINSON\n >>> T = numpy.array([3., -2+0.5j, .7-1j])\n >>> a, e, k = LEVINSON(T)"}
{"_id": "q_5846", "text": "computes the autocorrelation coefficients, R based\n on the prediction polynomial A and the final prediction error Efinal,\n using the stepdown algorithm.\n\n Works for real or complex data\n\n :param a:\n :param efinal:\n\n :return:\n * R, the autocorrelation\n * U prediction coefficient\n * kr reflection coefficients\n * e errors\n\n A should be a minimum phase polynomial and A(1) is assumed to be unity.\n\n :returns: (P+1) by (P+1) upper triangular matrix, U,\n that holds the i'th order prediction polynomials\n Ai, i=1:P, where P is the order of the input\n polynomial, A.\n\n\n\n [ 1 a1(1)* a2(2)* ..... aP(P) * ]\n [ 0 1 a2(1)* ..... aP(P-1)* ]\n U = [ .................................]\n [ 0 0 0 ..... 1 ]\n\n from which the i'th order prediction polynomial can be extracted\n using Ai=U(i+1:-1:1,i+1)'. The first row of U contains the\n conjugates of the reflection coefficients, and the K's may be\n extracted using, K=conj(U(1,2:end)).\n\n .. todo:: remove the conjugate when data is real data, clean up the code\n test and doc."}
{"_id": "q_5847", "text": "LEVUP One step forward Levinson recursion\n\n :param acur:\n :param knxt:\n :return:\n * anxt the P+1'th order prediction polynomial based on the P'th order prediction polynomial, acur, and the\n P+1'th order reflection coefficient, Knxt.\n * enxt the P+1'th order prediction prediction error, based on the P'th order prediction error, ecur.\n\n\n :References: P. Stoica R. Moses, Introduction to Spectral Analysis Prentice Hall, N.J., 1997, Chapter 3."}
{"_id": "q_5848", "text": "r\"\"\"Simple and fast implementation of the covariance AR estimate\n\n This code is 10 times faster than :func:`arcovar_marple` and more importantly\n only 10 lines of code, compared to a 200 loc for :func:`arcovar_marple`\n\n\n :param array X: Array of complex data samples\n :param int oder: Order of linear prediction model\n\n :return:\n * a - Array of complex forward linear prediction coefficients\n * e - error\n\n The covariance method fits a Pth order autoregressive (AR) model to the\n input signal, which is assumed to be the output of\n an AR system driven by white noise. This method minimizes the forward\n prediction error in the least-squares sense. The output vector\n contains the normalized estimate of the AR system parameters\n\n The white noise input variance estimate is also returned.\n\n If is the power spectral density of y(n), then:\n\n .. math:: \\frac{e}{\\left| A(e^{jw}) \\right|^2} = \\frac{e}{\\left| 1+\\sum_{k-1}^P a(k)e^{-jwk}\\right|^2}\n\n Because the method characterizes the input data using an all-pole model,\n the correct choice of the model order p is important.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import arcovar, marple_data, arma2psd\n from pylab import plot, log10, linspace, axis\n\n ar_values, error = arcovar(marple_data, 15)\n psd = arma2psd(ar_values, sides='centerdc')\n plot(linspace(-0.5, 0.5, len(psd)), 10*log10(psd/max(psd)))\n axis([-0.5, 0.5, -60, 0])\n\n .. seealso:: :class:`pcovar`\n\n :validation: the AR parameters are the same as those returned by\n a completely different function :func:`arcovar_marple`.\n\n :References: [Mathworks]_"}
{"_id": "q_5849", "text": "Linear Predictor Coefficients.\n\n :param x:\n :param int N: default is length(X) - 1\n\n :Details:\n\n Finds the coefficients :math:`A=(1, a(2), \\dots a(N+1))`, of an Nth order\n forward linear predictor that predicts the current value value of the\n real-valued time series x based on past samples:\n\n .. math:: \\hat{x}(n) = -a(2)*x(n-1) - a(3)*x(n-2) - ... - a(N+1)*x(n-N)\n\n such that the sum of the squares of the errors\n\n .. math:: err(n) = X(n) - Xp(n)\n\n is minimized. This function uses the Levinson-Durbin recursion to\n solve the normal equations that arise from the least-squares formulation.\n\n .. seealso:: :func:`levinson`, :func:`aryule`, :func:`prony`, :func:`stmcb`\n\n .. todo:: matrix case, references\n\n :Example:\n\n ::\n\n from scipy.signal import lfilter\n noise = randn(50000,1); % Normalized white Gaussian noise\n x = filter([1], [1 1/2 1/3 1/4], noise)\n x = x[45904:50000]\n x.reshape(4096, 1)\n x = x[0]\n\n Compute the predictor coefficients, estimated signal, prediction error, and autocorrelation sequence of the prediction error:\n\n\n 1.00000 + 0.00000i 0.51711 - 0.00000i 0.33908 - 0.00000i 0.24410 - 0.00000i\n\n ::\n\n a = lpc(x, 3)\n est_x = lfilter([0 -a(2:end)],1,x); % Estimated signal\n e = x - est_x; % Prediction error\n [acs,lags] = xcorr(e,'coeff'); % ACS of prediction error"}
{"_id": "q_5850", "text": "Return Pascal matrix\n\n :param int n: size of the matrix\n\n .. doctest::\n\n >>> from spectrum import pascal\n >>> pascal(6)\n array([[ 1., 1., 1., 1., 1., 1.],\n [ 1., 2., 3., 4., 5., 6.],\n [ 1., 3., 6., 10., 15., 21.],\n [ 1., 4., 10., 20., 35., 56.],\n [ 1., 5., 15., 35., 70., 126.],\n [ 1., 6., 21., 56., 126., 252.]])\n\n .. todo:: use the symmetric property to improve computational time if needed"}
{"_id": "q_5851", "text": "SVD decomposition using numpy.linalg.svd\n\n :param A: a M by N matrix\n\n :return:\n * U, a M by M matrix\n * S the N eigen values\n * V a N by N matrix\n\n See :func:`numpy.linalg.svd` for a detailed documentation.\n\n Should return the same as in [Marple]_ , CSVD routine.\n\n ::\n\n U, S, V = numpy.linalg.svd(A)\n U, S, V = cvsd(A)"}
{"_id": "q_5852", "text": "Yield paths to standard modules."}
{"_id": "q_5853", "text": "Yield standard module names."}
{"_id": "q_5854", "text": "Yield line numbers of unused imports."}
{"_id": "q_5855", "text": "Yield line number and module name of unused imports."}
{"_id": "q_5856", "text": "Yield line number of star import usage."}
{"_id": "q_5857", "text": "Yield line number, undefined name, and its possible origin module."}
{"_id": "q_5858", "text": "Yield line numbers of duplicate keys."}
{"_id": "q_5859", "text": "Return dict mapping the key to list of messages."}
{"_id": "q_5860", "text": "Return messages from pyflakes."}
{"_id": "q_5861", "text": "Return package name in import statement."}
{"_id": "q_5862", "text": "Return True if import spans multiple lines."}
{"_id": "q_5863", "text": "Parse and filter ``from something import a, b, c``.\n\n Return line without unused import modules, or `pass` if all of the\n module in import is unused."}
{"_id": "q_5864", "text": "Return dictionary that maps line number to message."}
{"_id": "q_5865", "text": "Return True if value is a literal or a name."}
{"_id": "q_5866", "text": "Yield line numbers of unneeded \"pass\" statements."}
{"_id": "q_5867", "text": "Yield code with useless \"pass\" lines removed."}
{"_id": "q_5868", "text": "Return leading whitespace."}
{"_id": "q_5869", "text": "Return line ending."}
{"_id": "q_5870", "text": "Return code with all filtering run on it."}
{"_id": "q_5871", "text": "Return a set of strings."}
{"_id": "q_5872", "text": "Write the data encoding the ObtainLease response payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."}
{"_id": "q_5873", "text": "Write the data encoding the Cancel request payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."}
{"_id": "q_5874", "text": "Returns a Name object, populated with the given value and type"}
{"_id": "q_5875", "text": "Read the data encoding the Digest object and decode it into its\n constituent parts.\n\n Args:\n istream (Stream): A data stream containing encoded object data,\n supporting a read method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5876", "text": "Write the data encoding the Digest object to a stream.\n\n Args:\n ostream (Stream): A data stream in which to encode object data,\n supporting a write method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5877", "text": "Construct a Digest object from provided digest values.\n\n Args:\n hashing_algorithm (HashingAlgorithm): An enumeration representing\n the hash algorithm used to compute the digest. Optional,\n defaults to HashingAlgorithm.SHA_256.\n digest_value (byte string): The bytes of the digest hash. Optional,\n defaults to the empty byte string.\n key_format_type (KeyFormatType): An enumeration representing the\n format of the key corresponding to the digest. Optional,\n defaults to KeyFormatType.RAW.\n\n Returns:\n Digest: The newly created Digest.\n\n Example:\n >>> x = Digest.create(HashingAlgorithm.MD5, b'\\x00',\n ... KeyFormatType.RAW)\n >>> x.hashing_algorithm\n HashingAlgorithm(value=HashingAlgorithm.MD5)\n >>> x.digest_value\n DigestValue(value=bytearray(b'\\x00'))\n >>> x.key_format_type\n KeyFormatType(value=KeyFormatType.RAW)"}
{"_id": "q_5878", "text": "Read the data encoding the ApplicationSpecificInformation object and\n decode it into its constituent parts.\n\n Args:\n istream (Stream): A data stream containing encoded object data,\n supporting a read method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5879", "text": "Write the data encoding the ApplicationSpecificInformation object to a\n stream.\n\n Args:\n ostream (Stream): A data stream in which to encode object data,\n supporting a write method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5880", "text": "Construct an ApplicationSpecificInformation object from provided data\n and namespace values.\n\n Args:\n application_namespace (str): The name of the application namespace.\n application_data (str): Application data related to the namespace.\n\n Returns:\n ApplicationSpecificInformation: The newly created set of\n application information.\n\n Example:\n >>> x = ApplicationSpecificInformation.create('namespace', 'data')\n >>> x.application_namespace.value\n 'namespace'\n >>> x.application_data.value\n 'data'"}
{"_id": "q_5881", "text": "Read the data encoding the DerivationParameters struct and decode it\n into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5882", "text": "Write the data encoding the DerivationParameters struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5883", "text": "Read the data encoding the Get request payload and decode it into its\n constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5884", "text": "Write the data encoding the Get request payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5885", "text": "Read the data encoding the Get response payload and decode it\n into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the object type, unique identifier, or\n secret attributes are missing from the encoded payload."}
{"_id": "q_5886", "text": "Write the data encoding the Get response payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the object type, unique identifier, or\n secret attributes are missing from the payload struct."}
{"_id": "q_5887", "text": "Read the data encoding the SignatureVerify request payload and decode\n it into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."}
{"_id": "q_5888", "text": "Write the data encoding the SignatureVerify request payload to a\n stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."}
{"_id": "q_5889", "text": "Process a KMIP request message.\n\n This routine is the main driver of the KmipEngine. It breaks apart and\n processes the request header, handles any message errors that may\n result, and then passes the set of request batch items on for\n processing. This routine is thread-safe, allowing multiple client\n connections to use the same KmipEngine.\n\n Args:\n request (RequestMessage): The request message containing the batch\n items to be processed.\n credential (string): Identifying information about the client\n obtained from the client certificate. Optional, defaults to\n None.\n\n Returns:\n ResponseMessage: The response containing all of the results from\n the request batch items."}
{"_id": "q_5890", "text": "Build a simple ResponseMessage with a single error result.\n\n Args:\n version (ProtocolVersion): The protocol version the response\n should be addressed with.\n reason (ResultReason): An enumeration classifying the type of\n error occurred.\n message (str): A string providing additional information about\n the error.\n\n Returns:\n ResponseMessage: The simple ResponseMessage containing a\n single error result."}
{"_id": "q_5891", "text": "Given a kmip.pie object and a dictionary of attributes, attempt to set\n the attribute values on the object."}
{"_id": "q_5892", "text": "Set the attribute value on the kmip.pie managed object."}
{"_id": "q_5893", "text": "Write the data encoding the Decrypt request payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."}
{"_id": "q_5894", "text": "Create a secret object of the specified type with the given value.\n\n Args:\n secret_type (ObjectType): An ObjectType enumeration specifying the\n type of secret to create.\n value (dict): A dictionary containing secret data. Optional,\n defaults to None.\n\n Returns:\n secret: The newly constructed secret object.\n\n Raises:\n TypeError: If the provided secret type is unrecognized.\n\n Example:\n >>> factory.create(ObjectType.SYMMETRIC_KEY)\n SymmetricKey(...)"}
{"_id": "q_5895", "text": "Load configuration settings from the file pointed to by path.\n\n This will overwrite all current setting values.\n\n Args:\n path (string): The path to the configuration file containing\n the settings to load. Required.\n Raises:\n ConfigurationError: Raised if the path does not point to an\n existing file or if a setting value is invalid."}
{"_id": "q_5896", "text": "Returns the integer value of the usage mask bitmask. This value is\n stored in the database.\n\n Args:\n value(list<enums.CryptographicUsageMask>): list of enums in the\n usage mask\n dialect(string): SQL dialect"}
{"_id": "q_5897", "text": "Returns a new list of enums.CryptographicUsageMask Enums. This converts\n the integer value into the list of enums.\n\n Args:\n value(int): The integer value stored in the database that is used\n to create the list of enums.CryptographicUsageMask Enums.\n dialect(string): SQL dialect"}
{"_id": "q_5898", "text": "Read the encoding of the LongInteger from the input stream.\n\n Args:\n istream (stream): A buffer containing the encoded bytes of a\n LongInteger. Usually a BytearrayStream object. Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidPrimitiveLength: if the long integer encoding read in has\n an invalid encoded length."}
{"_id": "q_5899", "text": "Write the encoding of the LongInteger to the output stream.\n\n Args:\n ostream (stream): A buffer to contain the encoded bytes of a\n LongInteger. Usually a BytearrayStream object. Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5900", "text": "Verify that the value of the LongInteger is valid.\n\n Raises:\n TypeError: if the value is not of type int or long\n ValueError: if the value cannot be represented by a signed 64-bit\n integer"}
{"_id": "q_5901", "text": "Read the encoding of the BigInteger from the input stream.\n\n Args:\n istream (stream): A buffer containing the encoded bytes of the\n value of a BigInteger. Usually a BytearrayStream object.\n Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidPrimitiveLength: if the big integer encoding read in has\n an invalid encoded length."}
{"_id": "q_5902", "text": "Write the encoding of the BigInteger to the output stream.\n\n Args:\n ostream (Stream): A buffer to contain the encoded bytes of a\n BigInteger object. Usually a BytearrayStream object.\n Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5903", "text": "Verify that the value of the BigInteger is valid.\n\n Raises:\n TypeError: if the value is not of type int or long"}
{"_id": "q_5904", "text": "Verify that the value of the Enumeration is valid.\n\n Raises:\n TypeError: if the enum is not of type Enum\n ValueError: if the value is not of the expected Enum subtype or if\n the value cannot be represented by an unsigned 32-bit integer"}
{"_id": "q_5905", "text": "Write the encoding of the Boolean object to the output stream.\n\n Args:\n ostream (Stream): A buffer to contain the encoded bytes of a\n Boolean object. Usually a BytearrayStream object. Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5906", "text": "Verify that the value of the Boolean object is valid.\n\n Raises:\n TypeError: if the value is not of type bool."}
{"_id": "q_5907", "text": "Read the encoding of the Interval from the input stream.\n\n Args:\n istream (stream): A buffer containing the encoded bytes of the\n value of an Interval. Usually a BytearrayStream object.\n Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidPrimitiveLength: if the Interval encoding read in has an\n invalid encoded length.\n InvalidPaddingBytes: if the Interval encoding read in does not use\n zeroes for its padding bytes."}
{"_id": "q_5908", "text": "Verify that the value of the Interval is valid.\n\n Raises:\n TypeError: if the value is not of type int or long\n ValueError: if the value cannot be represented by an unsigned\n 32-bit integer"}
{"_id": "q_5909", "text": "Set the key wrapping data attributes using a dictionary."}
{"_id": "q_5910", "text": "Verify that the contents of the PublicKey object are valid.\n\n Raises:\n TypeError: if the types of any PublicKey attributes are invalid."}
{"_id": "q_5911", "text": "A utility function that converts an attribute name string into the\n corresponding attribute tag.\n\n For example: 'State' -> enums.Tags.STATE\n\n Args:\n value (string): The string name of the attribute.\n\n Returns:\n enum: The Tags enumeration value that corresponds to the attribute\n name string.\n\n Raises:\n ValueError: if the attribute name string is not a string or if it is\n an unrecognized attribute name"}
{"_id": "q_5912", "text": "A utility function that converts an attribute tag into the corresponding\n attribute name string.\n\n For example: enums.Tags.STATE -> 'State'\n\n Args:\n value (enum): The Tags enumeration value of the attribute.\n\n Returns:\n string: The attribute name string that corresponds to the attribute\n tag.\n\n Raises:\n ValueError: if the attribute tag is not a Tags enumeration or if it\n is unrecognized attribute tag"}
{"_id": "q_5913", "text": "A utility function that computes a bit mask from a collection of\n enumeration values.\n\n Args:\n enumerations (list): A list of enumeration values to be combined in a\n composite bit mask.\n\n Returns:\n int: The composite bit mask."}
{"_id": "q_5914", "text": "A utility function that checks if the provided value is a composite bit\n mask of enumeration values in the specified enumeration class.\n\n Args:\n enumeration (class): One of the mask enumeration classes found in this\n file. These include:\n * Cryptographic Usage Mask\n * Protection Storage Mask\n * Storage Status Mask\n potential_mask (int): A potential bit mask composed of enumeration\n values belonging to the enumeration class.\n\n Returns:\n True: if the potential mask is a valid bit mask of the mask enumeration\n False: otherwise"}
{"_id": "q_5915", "text": "Write the data encoding the CreateKeyPair request payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5916", "text": "Write the data encoding the CreateKeyPair response payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidField: Raised if the private key unique identifier or the\n public key unique identifier is not defined."}
{"_id": "q_5917", "text": "Read the data encoding the GetAttributeList request payload and decode\n it into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5918", "text": "Write the data encoding the GetAttributeList request payload to a\n stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5919", "text": "Write the data encoding the GetAttributeList response payload to a\n stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidField: Raised if the unique identifier or attribute name\n are not defined."}
{"_id": "q_5920", "text": "Scan the policy directory for policy data."}
{"_id": "q_5921", "text": "Start monitoring operation policy files."}
{"_id": "q_5922", "text": "Extract an X.509 certificate from a socket connection."}
{"_id": "q_5923", "text": "Given an X.509 certificate, extract and return all common names."}
{"_id": "q_5924", "text": "Given an X.509 certificate, extract and return the client identity."}
{"_id": "q_5925", "text": "Read the data encoding the Create request payload and decode it into\n its constituent parts.\n\n Args:\n input_buffer (stream): A data buffer containing encoded object\n data, supporting a read method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the object type or template\n attribute is missing from the encoded payload."}
{"_id": "q_5926", "text": "Write the data encoding the Create request payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidField: Raised if the object type attribute or template\n attribute is not defined."}
{"_id": "q_5927", "text": "Read the data encoding the Create response payload and decode it into\n its constituent parts.\n\n Args:\n input_buffer (stream): A data buffer containing encoded object\n data, supporting a read method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the object type or unique\n identifier is missing from the encoded payload."}
{"_id": "q_5928", "text": "Convert a Pie object into a core secret object and vice versa.\n\n Args:\n obj (various): A Pie or core secret object to convert into the\n opposite object space. Required.\n\n Raises:\n TypeError: if the object type is unrecognized or unsupported."}
{"_id": "q_5929", "text": "Read the data encoding the Encrypt response payload and decode it\n into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the unique_identifier or data attributes\n are missing from the encoded payload."}
{"_id": "q_5930", "text": "Read the data encoding the DeriveKey request payload and decode it\n into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."}
{"_id": "q_5931", "text": "Write the data encoding the DeriveKey request payload to a stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."}
{"_id": "q_5932", "text": "Check if the attribute is supported by the current KMIP version.\n\n Args:\n attribute (string): The name of the attribute\n (e.g., 'Cryptographic Algorithm'). Required.\n Returns:\n bool: True if the attribute is supported by the current KMIP\n version. False otherwise."}
{"_id": "q_5933", "text": "Check if the attribute is deprecated by the current KMIP version.\n\n Args:\n attribute (string): The name of the attribute\n (e.g., 'Unique Identifier'). Required."}
{"_id": "q_5934", "text": "Check if the attribute is supported by the given object type.\n\n Args:\n attribute (string): The name of the attribute (e.g., 'Name').\n Required.\n object_type (ObjectType): An ObjectType enumeration\n (e.g., ObjectType.SYMMETRIC_KEY). Required.\n Returns:\n bool: True if the attribute is applicable to the object type.\n False otherwise."}
{"_id": "q_5935", "text": "Check if the attribute is allowed to have multiple instances.\n\n Args:\n attribute (string): The name of the attribute\n (e.g., 'State'). Required."}
{"_id": "q_5936", "text": "Returns a value that can be used as a parameter in client or\n server. If a direct_value is given, that value will be returned\n instead of the value from the config file. If the appropriate config\n file option is not found, the default_value is returned.\n\n :param direct_value: represents a direct value that should be used.\n supercedes values from config files\n :param config_section: which section of the config file to use\n :param config_option_name: name of config option value\n :param default_value: default value to be used if other options not\n found\n :returns: a value that can be used as a parameter"}
{"_id": "q_5937", "text": "Write the data encoding the Check response payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."}
{"_id": "q_5938", "text": "Write the AttributeReference structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n Attributes structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the vendor identification or attribute name\n fields are not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the AttributeReference structure."}
{"_id": "q_5939", "text": "Write the Attributes structure encoding to the data stream.\n\n Args:\n output_stream (stream): A data stream in which to encode\n Attributes structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n AttributeNotSupported: Raised if an unsupported attribute is\n found in the attribute list while encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the Attributes object."}
{"_id": "q_5940", "text": "Write the data encoding the Nonce struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the nonce ID or nonce value is not defined."}
{"_id": "q_5941", "text": "Write the data encoding the UsernamePasswordCredential struct to a\n stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the username is not defined."}
{"_id": "q_5942", "text": "Read the data encoding the DeviceCredential struct and decode it into\n its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5943", "text": "Write the data encoding the DeviceCredential struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5944", "text": "Read the data encoding the Credential struct and decode it into its\n constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if either the credential type or value are\n missing from the encoding."}
{"_id": "q_5945", "text": "Read the data encoding the MACSignatureKeyInformation struct and\n decode it into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5946", "text": "Write the data encoding the KeyWrappingData struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5947", "text": "Read the data encoding the KeyWrappingSpecification struct and decode\n it into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5948", "text": "Write the data encoding the KeyWrappingSpecification struct to a\n stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5949", "text": "Read the data encoding the ExtensionInformation object and decode it\n into its constituent parts.\n\n Args:\n istream (Stream): A data stream containing encoded object data,\n supporting a read method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5950", "text": "Write the data encoding the ExtensionInformation object to a stream.\n\n Args:\n ostream (Stream): A data stream in which to encode object data,\n supporting a write method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5951", "text": "Construct an ExtensionInformation object from provided extension\n values.\n\n Args:\n extension_name (str): The name of the extension. Optional,\n defaults to None.\n extension_tag (int): The tag number of the extension. Optional,\n defaults to None.\n extension_type (int): The type index of the extension. Optional,\n defaults to None.\n\n Returns:\n ExtensionInformation: The newly created set of extension\n information.\n\n Example:\n >>> x = ExtensionInformation.create('extension', 1, 1)\n >>> x.extension_name.value\n ExtensionName(value='extension')\n >>> x.extension_tag.value\n ExtensionTag(value=1)\n >>> x.extension_type.value\n ExtensionType(value=1)"}
{"_id": "q_5952", "text": "Read the data encoding the RevocationReason object and decode it\n into its constituent parts.\n\n Args:\n istream (Stream): A data stream containing encoded object data,\n supporting a read method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5953", "text": "Write the data encoding the RevocationReason object to a stream.\n\n Args:\n ostream (Stream): A data stream in which to encode object data,\n supporting a write method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5954", "text": "validate the RevocationReason object"}
{"_id": "q_5955", "text": "Read the data encoding the ObjectDefaults structure and decode it into\n its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the object type or attributes are\n missing from the encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the ObjectDefaults structure."}
{"_id": "q_5956", "text": "Read the data encoding the DefaultsInformation structure and decode it\n into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the object defaults are missing\n from the encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the DefaultsInformation structure."}
{"_id": "q_5957", "text": "Write the DefaultsInformation structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n Attributes structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the object defaults field is not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the DefaultsInformation structure."}
{"_id": "q_5958", "text": "Read the data encoding the RNGParameters structure and decode it\n into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the RNG algorithm is missing from\n the encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the RNGParameters structure."}
{"_id": "q_5959", "text": "Write the RNGParameters structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n Attributes structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the RNG algorithm field is not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the RNGParameters structure."}
{"_id": "q_5960", "text": "Read the data encoding the ProfileInformation structure and decode it\n into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the profile name is missing from\n the encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the ProfileInformation structure."}
{"_id": "q_5961", "text": "Write the ProfileInformation structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n ProfileInformation structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the profile name field is not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the ProfileInformation structure."}
{"_id": "q_5962", "text": "Write the ValidationInformation structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n ValidationInformation structure data, supporting a write\n method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the validation authority type, validation\n version major, validation type, and/or validation level fields\n are not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the ValidationInformation structure."}
{"_id": "q_5963", "text": "Write the CapabilityInformation structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n CapabilityInformation structure data, supporting a write\n method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the CapabilityInformation structure."}
{"_id": "q_5964", "text": "Stop the server.\n\n Halt server client connections and clean up any existing connection\n threads.\n\n Raises:\n NetworkingError: Raised if a failure occurs while shutting down\n or closing the TLS server socket."}
{"_id": "q_5965", "text": "Serve client connections.\n\n Begin listening for client connections, spinning off new KmipSessions\n as connections are handled. Set up signal handling to shut down\n the connection service as needed."}
{"_id": "q_5966", "text": "Read the data encoding the Locate request payload and decode it into\n its constituent parts.\n\n Args:\n input_buffer (stream): A data buffer containing encoded object\n data, supporting a read method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the attributes structure is missing\n from the encoded payload for KMIP 2.0+ encodings."}
{"_id": "q_5967", "text": "Write the data encoding the Locate request payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5968", "text": "Write the data encoding the Locate response payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5969", "text": "Create an asymmetric key pair.\n\n Args:\n algorithm(CryptographicAlgorithm): An enumeration specifying the\n algorithm for which the created keys will be compliant.\n length(int): The length of the keys to be created. This value must\n be compliant with the constraints of the provided algorithm.\n\n Returns:\n dict: A dictionary containing the public key data, with at least\n the following key/value fields:\n * value - the bytes of the key\n * format - a KeyFormatType enumeration for the bytes format\n dict: A dictionary containing the private key data, identical in\n structure to the one above.\n\n Raises:\n InvalidField: Raised when the algorithm is unsupported or the\n length is incompatible with the algorithm.\n CryptographicFailure: Raised when the key generation process\n fails.\n\n Example:\n >>> engine = CryptographyEngine()\n >>> key = engine.create_asymmetric_key(\n ... CryptographicAlgorithm.RSA, 2048)"}
{"_id": "q_5970", "text": "Encrypt data using symmetric or asymmetric encryption.\n\n Args:\n encryption_algorithm (CryptographicAlgorithm): An enumeration\n specifying the encryption algorithm to use for encryption.\n encryption_key (bytes): The bytes of the encryption key to use for\n encryption.\n plain_text (bytes): The bytes to be encrypted.\n cipher_mode (BlockCipherMode): An enumeration specifying the\n block cipher mode to use with the encryption algorithm.\n Required in the general case. Optional if the encryption\n algorithm is RC4 (aka ARC4). If optional, defaults to None.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use on the data before encryption. Required\n if the cipher mode is for block ciphers (e.g., CBC, ECB).\n Optional otherwise, defaults to None.\n iv_nonce (bytes): The IV/nonce value to use to initialize the mode\n of the encryption algorithm. Optional, defaults to None. If\n required and not provided, it will be autogenerated and\n returned with the cipher text.\n hashing_algorithm (HashingAlgorithm): An enumeration specifying\n the hashing algorithm to use with the encryption algorithm,\n if needed. Required for OAEP-based asymmetric encryption.\n Optional, defaults to None.\n\n Returns:\n dict: A dictionary containing the encrypted data, with at least\n the following key/value fields:\n * cipher_text - the bytes of the encrypted data\n * iv_nonce - the bytes of the IV/counter/nonce used if it\n was needed by the encryption scheme and if it was\n automatically generated for the encryption\n\n Raises:\n InvalidField: Raised when the algorithm is unsupported or the\n length is incompatible with the algorithm.\n CryptographicFailure: Raised when the key generation process\n fails.\n\n Example:\n >>> engine = CryptographyEngine()\n >>> result = engine.encrypt(\n ... encryption_algorithm=CryptographicAlgorithm.AES,\n ... encryption_key=(\n ... b'\\xF3\\x96\\xE7\\x1C\\xCF\\xCD\\xEC\\x1F'\n ... b'\\xFC\\xE2\\x8E\\xA6\\xF8\\x74\\x28\\xB0'\n ... ),\n ... plain_text=(\n ... b'\\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07'\n ... b'\\x08\\x09\\x0A\\x0B\\x0C\\x0D\\x0E\\x0F'\n ... ),\n ... cipher_mode=BlockCipherMode.CBC,\n ... padding_method=PaddingMethod.ANSI_X923,\n ... )\n >>> result.get('cipher_text')\n b'\\x18[\\xb9y\\x1bL\\xd1\\x8f\\x9a\\xa0e\\x02b\\xa3=c'\n >>> result.iv_counter_nonce\n b'8qA\\x05\\xc4\\x86\\x03\\xd9=\\xef\\xdf\\xb8ke\\x9a\\xa2'"}
{"_id": "q_5971", "text": "Encrypt data using symmetric encryption.\n\n Args:\n encryption_algorithm (CryptographicAlgorithm): An enumeration\n specifying the symmetric encryption algorithm to use for\n encryption.\n encryption_key (bytes): The bytes of the symmetric key to use for\n encryption.\n plain_text (bytes): The bytes to be encrypted.\n cipher_mode (BlockCipherMode): An enumeration specifying the\n block cipher mode to use with the encryption algorithm.\n Required in the general case. Optional if the encryption\n algorithm is RC4 (aka ARC4). If optional, defaults to None.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use on the data before encryption. Required\n if the cipher mode is for block ciphers (e.g., CBC, ECB).\n Optional otherwise, defaults to None.\n iv_nonce (bytes): The IV/nonce value to use to initialize the mode\n of the encryption algorithm. Optional, defaults to None. If\n required and not provided, it will be autogenerated and\n returned with the cipher text.\n\n Returns:\n dict: A dictionary containing the encrypted data, with at least\n the following key/value fields:\n * cipher_text - the bytes of the encrypted data\n * iv_nonce - the bytes of the IV/counter/nonce used if it\n was needed by the encryption scheme and if it was\n automatically generated for the encryption\n\n Raises:\n InvalidField: Raised when the algorithm is unsupported or the\n encryption key is incompatible with the algorithm.\n CryptographicFailure: Raised when the key generation process\n fails."}
{"_id": "q_5972", "text": "Encrypt data using asymmetric encryption.\n\n Args:\n encryption_algorithm (CryptographicAlgorithm): An enumeration\n specifying the asymmetric encryption algorithm to use for\n encryption. Required.\n encryption_key (bytes): The bytes of the public key to use for\n encryption. Required.\n plain_text (bytes): The bytes to be encrypted. Required.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use with the asymmetric encryption\n algorithm. Required.\n hashing_algorithm (HashingAlgorithm): An enumeration specifying\n the hashing algorithm to use with the encryption padding\n method. Required, if the padding method is OAEP. Optional\n otherwise, defaults to None.\n\n Returns:\n dict: A dictionary containing the encrypted data, with at least\n the following key/value field:\n * cipher_text - the bytes of the encrypted data\n\n Raises:\n InvalidField: Raised when the algorithm is unsupported or the\n length is incompatible with the algorithm.\n CryptographicFailure: Raised when the key generation process\n fails."}
{"_id": "q_5973", "text": "Create an RSA key pair.\n\n Args:\n length(int): The length of the keys to be created. This value must\n be compliant with the constraints of the provided algorithm.\n public_exponent(int): The value of the public exponent needed to\n generate the keys. Usually a small Fermat prime number.\n Optional, defaults to 65537.\n\n Returns:\n dict: A dictionary containing the public key data, with the\n following key/value fields:\n * value - the bytes of the key\n * format - a KeyFormatType enumeration for the bytes format\n * public_exponent - the public exponent integer\n dict: A dictionary containing the private key data, identical in\n structure to the one above.\n\n Raises:\n CryptographicFailure: Raised when the key generation process\n fails."}
{"_id": "q_5974", "text": "Derive key data using a variety of key derivation functions.\n\n Args:\n derivation_method (DerivationMethod): An enumeration specifying\n the key derivation method to use. Required.\n derivation_length (int): An integer specifying the size of the\n derived key data in bytes. Required.\n derivation_data (bytes): The non-cryptographic bytes to be used\n in the key derivation process (e.g., the data to be encrypted,\n hashed, HMACed). Required in the general case. Optional if the\n derivation method is Hash and the key material is provided.\n Optional, defaults to None.\n key_material (bytes): The bytes of the key material to use for\n key derivation. Required in the general case. Optional if\n the derivation_method is HASH and derivation_data is provided.\n Optional, defaults to None.\n hash_algorithm (HashingAlgorithm): An enumeration specifying the\n hashing algorithm to use with the key derivation method.\n Required in the general case, optional if the derivation\n method specifies encryption. Optional, defaults to None.\n salt (bytes): Bytes representing a randomly generated salt.\n Required if the derivation method is PBKDF2. Optional,\n defaults to None.\n iteration_count (int): An integer representing the number of\n iterations to use when deriving key material. Required if\n the derivation method is PBKDF2. Optional, defaults to None.\n encryption_algorithm (CryptographicAlgorithm): An enumeration\n specifying the symmetric encryption algorithm to use for\n encryption-based key derivation. Required if the derivation\n method specifies encryption. Optional, defaults to None.\n cipher_mode (BlockCipherMode): An enumeration specifying the\n block cipher mode to use with the encryption algorithm.\n Required in the general case if the derivation method\n specifies encryption and the encryption algorithm is\n specified. Optional if the encryption algorithm is RC4 (aka\n ARC4). Optional, defaults to None.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use on the data before encryption. Required\n in the general case if the derivation method specifies\n encryption and the encryption algorithm is specified. Required\n if the cipher mode is for block ciphers (e.g., CBC, ECB).\n Optional otherwise, defaults to None.\n iv_nonce (bytes): The IV/nonce value to use to initialize the mode\n of the encryption algorithm. Required in the general case if\n the derivation method specifies encryption and the encryption\n algorithm is specified. Optional, defaults to None. If\n required and not provided, it will be autogenerated.\n\n Returns:\n bytes: the bytes of the derived data\n\n Raises:\n InvalidField: Raised when cryptographic data and/or settings are\n unsupported or incompatible with the derivation method.\n\n Example:\n >>> engine = CryptographyEngine()\n >>> result = engine.derive_key(\n ... derivation_method=enums.DerivationMethod.HASH,\n ... derivation_length=16,\n ... derivation_data=b'abc',\n ... hash_algorithm=enums.HashingAlgorithm.MD5\n ... )\n >>> result\n b'\\x90\\x01P\\x98<\\xd2O\\xb0\\xd6\\x96?}(\\xe1\\x7fr'"}
{"_id": "q_5975", "text": "Instantiates an RSA key from bytes.\n\n Args:\n bytes (byte string): Bytes of RSA private key.\n Returns:\n private_key\n (cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey):\n RSA private key created from key bytes."}
{"_id": "q_5976", "text": "Verify a message signature.\n\n Args:\n signing_key (bytes): The bytes of the signing key to use for\n signature verification. Required.\n message (bytes): The bytes of the message that corresponds with\n the signature. Required.\n signature (bytes): The bytes of the signature to be verified.\n Required.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use during signature verification. Required.\n signing_algorithm (CryptographicAlgorithm): An enumeration\n specifying the cryptographic algorithm to use for signature\n verification. Only RSA is supported. Optional, must match the\n algorithm specified by the digital signature algorithm if both\n are provided. Defaults to None.\n hashing_algorithm (HashingAlgorithm): An enumeration specifying\n the hashing algorithm to use with the cryptographic algorithm,\n if needed. Optional, must match the algorithm specified by the\n digital signature algorithm if both are provided. Defaults to\n None.\n digital_signature_algorithm (DigitalSignatureAlgorithm): An\n enumeration specifying both the cryptographic and hashing\n algorithms to use for signature verification. Optional, must\n match the cryptographic and hashing algorithms if both are\n provided. Defaults to None.\n\n Returns:\n boolean: the result of signature verification, True for valid\n signatures, False for invalid signatures\n\n Raises:\n InvalidField: Raised when various settings or values are invalid.\n CryptographicFailure: Raised when the signing key bytes cannot be\n loaded, or when the signature verification process fails\n unexpectedly."}
{"_id": "q_5977", "text": "Read the data encoding the Sign response payload and decode it.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the unique_identifier or signature attributes\n are missing from the encoded payload."}
{"_id": "q_5978", "text": "Write the data encoding the Sign response to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n\n Raises:\n ValueError: Raised if the unique_identifier or signature\n attributes are not defined."}
{"_id": "q_5979", "text": "Read the data encoding the GetUsageAllocation request payload and\n decode it into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."}
{"_id": "q_5980", "text": "Convert a ProtocolVersion struct to its KMIPVersion enumeration equivalent.\n\n Args:\n value (ProtocolVersion): A ProtocolVersion struct to be converted into\n a KMIPVersion enumeration.\n\n Returns:\n KMIPVersion: The enumeration equivalent of the struct. If the struct\n cannot be converted to a valid enumeration, None is returned."}
{"_id": "q_5981", "text": "Read the data encoding the ProtocolVersion struct and decode it into\n its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if either the major or minor protocol versions\n are missing from the encoding."}
{"_id": "q_5982", "text": "Write the data encoding the Authentication struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_5983", "text": "Read the data encoding the Poll request payload and decode it into\n its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."}
{"_id": "q_5984", "text": "Query the configured SLUGS service with the provided credentials.\n\n Args:\n connection_certificate (cryptography.x509.Certificate): An X.509\n certificate object obtained from the connection being\n authenticated. Required for SLUGS authentication.\n connection_info (tuple): A tuple of information pertaining to the\n connection being authenticated, including the source IP address\n and a timestamp (e.g., ('127.0.0.1', 1519759267.467451)).\n Optional, defaults to None. Ignored for SLUGS authentication.\n request_credentials (list): A list of KMIP Credential structures\n containing credential information to use for authentication.\n Optional, defaults to None. Ignored for SLUGS authentication."}
{"_id": "q_5985", "text": "Read the data encoding the Archive response payload and decode it\n into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."}
{"_id": "q_5986", "text": "Write the data encoding the Archive response payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."}
{"_id": "q_5987", "text": "The main thread routine executed by invoking thread.start.\n\n This method manages the new client connection, running a message\n handling loop. Once this method completes, the thread is finished."}
{"_id": "q_5988", "text": "Read the data encoding the Rekey response payload and decode it into\n its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the unique identifier attribute is missing\n from the encoded payload."}
{"_id": "q_5989", "text": "Check if a profile is supported by the client.\n\n Args:\n conformance_clause (ConformanceClause):\n authentication_suite (AuthenticationSuite):\n\n Returns:\n bool: True if the profile is supported, False otherwise.\n\n Example:\n >>> client.is_profile_supported(\n ... ConformanceClause.DISCOVER_VERSIONS,\n ... AuthenticationSuite.BASIC)\n True"}
{"_id": "q_5990", "text": "Derive a new key or secret data from an existing managed object.\n\n Args:\n object_type (ObjectType): An ObjectType enumeration specifying\n what type of object to create. Required.\n unique_identifiers (list): A list of strings specifying the unique\n IDs of the existing managed objects to use for key derivation.\n Required.\n derivation_method (DerivationMethod): A DerivationMethod\n enumeration specifying what key derivation method to use.\n Required.\n derivation_parameters (DerivationParameters): A\n DerivationParameters struct containing the settings and\n options to use for key derivation.\n template_attribute (TemplateAttribute): A TemplateAttribute struct\n containing the attributes to set on the newly derived object.\n credential (Credential): A Credential struct containing a set of\n authorization parameters for the operation. Optional, defaults\n to None.\n\n Returns:\n dict: The results of the derivation operation, containing the\n following key/value pairs:\n\n Key | Value\n ---------------------|-----------------------------------------\n 'unique_identifier' | (string) The unique ID of the newly\n | derived object.\n 'template_attribute' | (TemplateAttribute) A struct containing\n | any attributes set on the newly derived\n | object.\n 'result_status' | (ResultStatus) An enumeration indicating\n | the status of the operation result.\n 'result_reason' | (ResultReason) An enumeration providing\n | context for the result status.\n 'result_message' | (string) A message providing additional\n | context for the operation result."}
{"_id": "q_5991", "text": "Send a GetAttributes request to the server.\n\n Args:\n uuid (string): The ID of the managed object with which the\n retrieved attributes should be associated. Optional, defaults\n to None.\n attribute_names (list): A list of AttributeName values indicating\n what object attributes the client wants from the server.\n Optional, defaults to None.\n\n Returns:\n result (GetAttributesResult): A structure containing the results\n of the operation."}
{"_id": "q_5992", "text": "Send a GetAttributeList request to the server.\n\n Args:\n uid (string): The ID of the managed object with which the retrieved\n attribute names should be associated.\n\n Returns:\n result (GetAttributeListResult): A structure containing the results\n of the operation."}
{"_id": "q_5993", "text": "Send a Query request to the server.\n\n Args:\n batch (boolean): A flag indicating if the operation should be sent\n with a batch of additional operations. Defaults to False.\n query_functions (list): A list of QueryFunction enumerations\n indicating what information the client wants from the server.\n Optional, defaults to None.\n credential (Credential): A Credential object containing\n authentication information for the server. Optional, defaults\n to None."}
{"_id": "q_5994", "text": "Open the client connection.\n\n Raises:\n ClientConnectionFailure: if the client connection is already open\n Exception: if an error occurs while trying to open the connection"}
{"_id": "q_5995", "text": "Close the client connection.\n\n Raises:\n Exception: if an error occurs while trying to close the connection"}
{"_id": "q_5996", "text": "Create a symmetric key on a KMIP appliance.\n\n Args:\n algorithm (CryptographicAlgorithm): An enumeration defining the\n algorithm to use to generate the symmetric key.\n length (int): The length in bits for the symmetric key.\n operation_policy_name (string): The name of the operation policy\n to use for the new symmetric key. Optional, defaults to None\n name (string): The name to give the key. Optional, defaults to None\n cryptographic_usage_mask (list): list of enumerations of crypto\n usage mask passing to the symmetric key. Optional, defaults to\n None\n\n Returns:\n string: The uid of the newly created symmetric key.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid"}
{"_id": "q_5997", "text": "Create an asymmetric key pair on a KMIP appliance.\n\n Args:\n algorithm (CryptographicAlgorithm): An enumeration defining the\n algorithm to use to generate the key pair.\n length (int): The length in bits for the key pair.\n operation_policy_name (string): The name of the operation policy\n to use for the new key pair. Optional, defaults to None.\n public_name (string): The name to give the public key. Optional,\n defaults to None.\n public_usage_mask (list): A list of CryptographicUsageMask\n enumerations indicating how the public key should be used.\n Optional, defaults to None.\n private_name (string): The name to give the public key. Optional,\n defaults to None.\n private_usage_mask (list): A list of CryptographicUsageMask\n enumerations indicating how the private key should be used.\n Optional, defaults to None.\n\n Returns:\n string: The uid of the newly created public key.\n string: The uid of the newly created private key.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid"}
{"_id": "q_5998", "text": "Register a managed object with a KMIP appliance.\n\n Args:\n managed_object (ManagedObject): A managed object to register. An\n instantiatable subclass of ManagedObject from the Pie API.\n\n Returns:\n string: The uid of the newly registered managed object.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input argument is invalid"}
{"_id": "q_5999", "text": "Rekey an existing key.\n\n Args:\n uid (string): The unique ID of the symmetric key to rekey.\n Optional, defaults to None.\n offset (int): The time delta, in seconds, between the new key's\n initialization date and activation date. Optional, defaults\n to None.\n **kwargs (various): A placeholder for object attributes that\n should be set on the newly rekeyed key. Currently\n supported attributes include:\n activation_date (int)\n process_start_date (int)\n protect_stop_date (int)\n deactivation_date (int)\n\n Returns:\n string: The unique ID of the newly rekeyed key.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid"}
{"_id": "q_6000", "text": "Derive a new key or secret data from existing managed objects.\n\n Args:\n object_type (ObjectType): An ObjectType enumeration specifying\n what type of object to derive. Only SymmetricKeys and\n SecretData can be specified. Required.\n unique_identifiers (list): A list of strings specifying the\n unique IDs of the existing managed objects to use for\n derivation. Multiple objects can be specified to fit the\n requirements of the given derivation method. Required.\n derivation_method (DerivationMethod): A DerivationMethod\n enumeration specifying how key derivation should be done.\n Required.\n derivation_parameters (dict): A dictionary containing various\n settings for the key derivation process. See Note below.\n Required.\n **kwargs (various): A placeholder for object attributes that\n should be set on the newly derived object. Currently\n supported attributes include:\n cryptographic_algorithm (enums.CryptographicAlgorithm)\n cryptographic_length (int)\n\n Returns:\n string: The unique ID of the newly derived object.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid\n\n Notes:\n The derivation_parameters argument is a dictionary that can\n contain the following key/value pairs:\n\n Key | Value\n ---------------------------|---------------------------------------\n 'cryptographic_parameters' | A dictionary containing additional\n | cryptographic settings. See the\n | decrypt method for more information.\n 'initialization_vector' | Bytes to be used to initialize the key\n | derivation function, if needed.\n 'derivation_data' | Bytes to be used as the basis for the\n | key derivation process (e.g., the\n | bytes to be encrypted, hashed, etc).\n 'salt' | Bytes to used as a salt value for the\n | key derivation function, if needed.\n | Usually used with PBKDF2.\n 'iteration_count' | An integer defining how many\n | iterations should be used with the key\n | derivation function, if needed.\n | Usually used with PBKDF2."}
{"_id": "q_6001", "text": "Check the constraints for a managed object.\n\n Args:\n uid (string): The unique ID of the managed object to check.\n Optional, defaults to None.\n usage_limits_count (int): The number of items that can be secured\n with the specified managed object. Optional, defaults to None.\n cryptographic_usage_mask (list): A list of CryptographicUsageMask\n enumerations specifying the operations possible with the\n specified managed object. Optional, defaults to None.\n lease_time (int): The number of seconds that can be leased for the\n specified managed object. Optional, defaults to None."}
{"_id": "q_6002", "text": "Get a managed object from a KMIP appliance.\n\n Args:\n uid (string): The unique ID of the managed object to retrieve.\n key_wrapping_specification (dict): A dictionary containing various\n settings to be used when wrapping the key during retrieval.\n See Note below. Optional, defaults to None.\n\n Returns:\n ManagedObject: The retrieved managed object object.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input argument is invalid\n\n Notes:\n The derivation_parameters argument is a dictionary that can\n contain the following key/value pairs:\n\n Key | Value\n --------------------------------|---------------------------------\n 'wrapping_method' | A WrappingMethod enumeration\n | that specifies how the object\n | should be wrapped.\n 'encryption_key_information' | A dictionary containing the ID\n | of the wrapping key and\n | associated cryptographic\n | parameters.\n 'mac_signature_key_information' | A dictionary containing the ID\n | of the wrapping key and\n | associated cryptographic\n | parameters.\n 'attribute_names' | A list of strings representing\n | the names of attributes that\n | should be included with the\n | wrapped object.\n 'encoding_option' | An EncodingOption enumeration\n | that specifies the encoding of\n | the object before it is wrapped."}
{"_id": "q_6003", "text": "Get the attributes associated with a managed object.\n\n If the uid is not specified, the appliance will use the ID placeholder\n by default.\n\n If the attribute_names list is not specified, the appliance will\n return all viable attributes for the managed object.\n\n Args:\n uid (string): The unique ID of the managed object with which the\n retrieved attributes should be associated. Optional, defaults\n to None.\n attribute_names (list): A list of string attribute names\n indicating which attributes should be retrieved. Optional,\n defaults to None."}
{"_id": "q_6004", "text": "Revoke a managed object stored by a KMIP appliance.\n\n Args:\n revocation_reason (RevocationReasonCode): An enumeration indicating\n the revocation reason.\n uid (string): The unique ID of the managed object to revoke.\n Optional, defaults to None.\n revocation_message (string): A message regarding the revocation.\n Optional, defaults to None.\n compromise_occurrence_date (int): An integer, the number of seconds\n since the epoch, which will be converted to the Datetime when\n the managed object was first believed to be compromised.\n Optional, defaults to None.\n\n Returns:\n None\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input argument is invalid"}
{"_id": "q_6005", "text": "Get the message authentication code for data.\n\n Args:\n data (string): The data to be MACed.\n uid (string): The unique ID of the managed object that is the key\n to use for the MAC operation.\n algorithm (CryptographicAlgorithm): An enumeration defining the\n algorithm to use to generate the MAC.\n\n Returns:\n string: The unique ID of the managed object that is the key\n to use for the MAC operation.\n string: The data MACed\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid"}
{"_id": "q_6006", "text": "Build a CryptographicParameters struct from a dictionary.\n\n Args:\n value (dict): A dictionary containing the key/value pairs for a\n CryptographicParameters struct.\n\n Returns:\n None: if value is None\n CryptographicParameters: a CryptographicParameters struct\n\n Raises:\n TypeError: if the input argument is invalid"}
{"_id": "q_6007", "text": "Build an EncryptionKeyInformation struct from a dictionary.\n\n Args:\n value (dict): A dictionary containing the key/value pairs for a\n EncryptionKeyInformation struct.\n\n Returns:\n EncryptionKeyInformation: an EncryptionKeyInformation struct\n\n Raises:\n TypeError: if the input argument is invalid"}
{"_id": "q_6008", "text": "Build an MACSignatureKeyInformation struct from a dictionary.\n\n Args:\n value (dict): A dictionary containing the key/value pairs for a\n MACSignatureKeyInformation struct.\n\n Returns:\n MACSignatureInformation: a MACSignatureKeyInformation struct\n\n Raises:\n TypeError: if the input argument is invalid"}
{"_id": "q_6009", "text": "Build a KeyWrappingSpecification struct from a dictionary.\n\n Args:\n value (dict): A dictionary containing the key/value pairs for a\n KeyWrappingSpecification struct.\n\n Returns:\n KeyWrappingSpecification: a KeyWrappingSpecification struct\n\n Raises:\n TypeError: if the input argument is invalid"}
{"_id": "q_6010", "text": "Build a name attribute, returned in a list for ease\n of use in the caller"}
{"_id": "q_6011", "text": "Read the data encoding the QueryRequestPayload object and decode it\n into its constituent parts.\n\n Args:\n input_buffer (Stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the query functions are missing\n from the encoded payload."}
{"_id": "q_6012", "text": "Write the data encoding the QueryResponsePayload object to a stream.\n\n Args:\n output_buffer (Stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."}
{"_id": "q_6013", "text": "Find a group of entry points with unique names.\n\n Returns a dictionary of names to :class:`EntryPoint` objects."}
{"_id": "q_6014", "text": "Find all entry points in a group.\n\n Returns a list of :class:`EntryPoint` objects."}
{"_id": "q_6015", "text": "Load the object to which this entry point refers."}
{"_id": "q_6016", "text": "Parse an entry point from the syntax in entry_points.txt\n\n :param str epstr: The entry point string (not including 'name =')\n :param str name: The name of this entry point\n :param Distribution distro: The distribution in which the entry point was found\n :rtype: EntryPoint\n :raises BadEntryPoint: if *epstr* can't be parsed as an entry point."}
{"_id": "q_6017", "text": "Try to return a path to static the static files compatible all\n the way back to Django 1.2. If anyone has a cleaner or better\n way to do this let me know!"}
{"_id": "q_6018", "text": "Run livereload server"}
{"_id": "q_6019", "text": "Generate controller, include the controller file, template & css & js directories."}
{"_id": "q_6020", "text": "Generate action."}
{"_id": "q_6021", "text": "Generate model."}
{"_id": "q_6022", "text": "Genarate macro."}
{"_id": "q_6023", "text": "mkdir -p path"}
{"_id": "q_6024", "text": "Friendly time gap"}
{"_id": "q_6025", "text": "Check url schema."}
{"_id": "q_6026", "text": "JSON decorator."}
{"_id": "q_6027", "text": "Absolute url for endpoint."}
{"_id": "q_6028", "text": "Get current user."}
{"_id": "q_6029", "text": "Register routes."}
{"_id": "q_6030", "text": "Register HTTP error pages."}
{"_id": "q_6031", "text": "Register hooks."}
{"_id": "q_6032", "text": "Returns csv data as a pandas Dataframe object"}
{"_id": "q_6033", "text": "Serialize the specified DataFrame and replace the existing dataset.\n\n Parameters\n ----------\n dataframe : pandas.DataFrame\n Data to serialize.\n data_type_id : str, optional\n Format to serialize to.\n If None, the existing format is preserved.\n Supported formats are:\n 'PlainText'\n 'GenericCSV'\n 'GenericTSV'\n 'GenericCSVNoHeader'\n 'GenericTSVNoHeader'\n See the azureml.DataTypeIds class for constants.\n name : str, optional\n Name for the dataset.\n If None, the name of the existing dataset is used.\n description : str, optional\n Description for the dataset.\n If None, the name of the existing dataset is used."}
{"_id": "q_6034", "text": "Upload already serialized raw data and replace the existing dataset.\n\n Parameters\n ----------\n raw_data: bytes\n Dataset contents to upload.\n data_type_id : str\n Serialization format of the raw data.\n If None, the format of the existing dataset is used.\n Supported formats are:\n 'PlainText'\n 'GenericCSV'\n 'GenericTSV'\n 'GenericCSVNoHeader'\n 'GenericTSVNoHeader'\n 'ARFF'\n See the azureml.DataTypeIds class for constants.\n name : str, optional\n Name for the dataset.\n If None, the name of the existing dataset is used.\n description : str, optional\n Description for the dataset.\n If None, the name of the existing dataset is used."}
{"_id": "q_6035", "text": "Upload already serialized raw data as a new dataset.\n\n Parameters\n ----------\n raw_data: bytes\n Dataset contents to upload.\n data_type_id : str\n Serialization format of the raw data.\n Supported formats are:\n 'PlainText'\n 'GenericCSV'\n 'GenericTSV'\n 'GenericCSVNoHeader'\n 'GenericTSVNoHeader'\n 'ARFF'\n See the azureml.DataTypeIds class for constants.\n name : str\n Name for the new dataset.\n description : str\n Description for the new dataset.\n\n Returns\n -------\n SourceDataset\n Dataset that was just created.\n Use open(), read_as_binary(), read_as_text() or to_dataframe() on\n the dataset object to get its contents as a stream, bytes, str or\n pandas DataFrame."}
{"_id": "q_6036", "text": "Open and return a stream for the dataset contents."}
{"_id": "q_6037", "text": "Read and return the dataset contents as text."}
{"_id": "q_6038", "text": "Read and return the dataset contents as a pandas DataFrame."}
{"_id": "q_6039", "text": "Get an intermediate dataset.\n\n Parameters\n ----------\n node_id : str\n Module node id from the experiment graph.\n port_name : str\n Output port of the module.\n data_type_id : str\n Serialization format of the raw data.\n See the azureml.DataTypeIds class for constants.\n\n Returns\n -------\n IntermediateDataset\n Dataset object.\n Use open(), read_as_binary(), read_as_text() or to_dataframe() on\n the dataset object to get its contents as a stream, bytes, str or\n pandas DataFrame."}
{"_id": "q_6040", "text": "Runs HTTP GET request to retrieve the list of datasets."}
{"_id": "q_6041", "text": "Runs HTTP GET request to retrieve a single dataset."}
{"_id": "q_6042", "text": "publishes a callable function or decorates a function to be published. \n\nReturns a callable, iterable object. Calling the object will invoke the published service.\nIterating the object will give the API URL, API key, and API help url.\n \nTo define a function which will be published to Azure you can simply decorate it with\nthe @publish decorator. This will publish the service, and then future calls to the\nfunction will run against the operationalized version of the service in the cloud.\n\n>>> @publish(workspace_id, workspace_token)\n>>> def func(a, b): \n>>> return a + b\n\nAfter publishing you can then invoke the function using:\nfunc.service(1, 2)\n\nOr continue to invoke the function locally:\nfunc(1, 2)\n\nYou can also just call publish directly to publish a function:\n\n>>> def func(a, b): return a + b\n>>> \n>>> res = publish(func, workspace_id, workspace_token)\n>>> \n>>> url, api_key, help_url = res\n>>> res(2, 3)\n5\n>>> url, api_key, help_url = res.url, res.api_key, res.help_url\n\nThe returned result will be the published service.\n\nYou can specify a list of files which should be published along with the function.\nThe resulting files will be stored in a subdirectory called 'Script Bundle'. The\nlist of files can be one of:\n (('file1.txt', None), ) # file is read from disk\n (('file1.txt', b'contents'), ) # file contents are provided\n ('file1.txt', 'file2.txt') # files are read from disk, written with same filename\n ((('file1.txt', 'destname.txt'), None), ) # file is read from disk, written with different destination name\n\nThe various formats for each filename can be freely mixed and matched."}
{"_id": "q_6043", "text": "Specifies the types used for the arguments of a published service.\n\n@types(a=int, b = str)\ndef f(a, b):\n pass"}
{"_id": "q_6044", "text": "Specifies the return type for a published service.\n\n@returns(int)\ndef f(...):\n pass"}
{"_id": "q_6045", "text": "attaches a file to the payload to be uploaded.\n\nIf contents is omitted the file is read from disk.\nIf name is a tuple it specifies the on-disk filename and the destination filename."}
{"_id": "q_6046", "text": "walks the byte code to find the variables which are actually globals"}
{"_id": "q_6047", "text": "Return a list of all implemented keyrings that can be constructed without\n parameters."}
{"_id": "q_6048", "text": "The keyring name, suitable for display.\n\n The name is derived from module and class name."}
{"_id": "q_6049", "text": "Gets the username and password for the service.\n Returns a Credential instance.\n\n The *username* argument is optional and may be omitted by\n the caller or ignored by the backend. Callers must use the\n returned username."}
{"_id": "q_6050", "text": "If self.preferred_collection contains a D-Bus path,\n the collection at that address is returned. Otherwise,\n the default collection is returned."}
{"_id": "q_6051", "text": "Discover all keyrings for chaining."}
{"_id": "q_6052", "text": "Set current keyring backend."}
{"_id": "q_6053", "text": "Load the keyring class indicated by name.\n\n These popular names are tested to ensure their presence.\n\n >>> popular_names = [\n ... 'keyring.backends.Windows.WinVaultKeyring',\n ... 'keyring.backends.OS_X.Keyring',\n ... 'keyring.backends.kwallet.DBusKeyring',\n ... 'keyring.backends.SecretService.Keyring',\n ... ]\n >>> list(map(_load_keyring_class, popular_names))\n [...]\n\n These legacy names are retained for compatibility.\n\n >>> legacy_names = [\n ... ]\n >>> list(map(_load_keyring_class, legacy_names))\n [...]"}
{"_id": "q_6054", "text": "Load a keyring using the config file in the config root."}
{"_id": "q_6055", "text": "Use freedesktop.org Base Dir Specfication to determine storage\n location."}
{"_id": "q_6056", "text": "Use freedesktop.org Base Dir Specfication to determine config\n location."}
{"_id": "q_6057", "text": "Returns a callable that outputs the data. Defaults to print."}
{"_id": "q_6058", "text": "Runs the subcommand configured in args on the netgear session"}
{"_id": "q_6059", "text": "Scan for devices and print results."}
{"_id": "q_6060", "text": "Try to autodetect the base URL of the router SOAP service.\n\n Returns None if it can't be found."}
{"_id": "q_6061", "text": "Convert value to to_type, returns default if fails."}
{"_id": "q_6062", "text": "Login to the router.\n\n Will be called automatically by other actions."}
{"_id": "q_6063", "text": "Return list of connected devices to the router with details.\n\n This call is slower and probably heavier on the router load.\n\n Returns None if error occurred."}
{"_id": "q_6064", "text": "Make an API request to the router."}
{"_id": "q_6065", "text": "Return RGBA values of color c\n\n c should be either an X11 color or a brewer color set and index\n e.g. \"navajowhite\", \"greens3/2\""}
{"_id": "q_6066", "text": "Draw this shape with the given cairo context"}
{"_id": "q_6067", "text": "Find extremas of a function of real domain defined by evaluating\n a cubic bernstein polynomial of given bernstein coefficients."}
{"_id": "q_6068", "text": "Build choices list runtime using 'sitetree_tree' tag"}
{"_id": "q_6069", "text": "Compatibility function to get rid of optparse in management commands after Django 1.10.\n\n :param tuple command_options: tuple with `CommandOption` objects."}
{"_id": "q_6070", "text": "Registers a hook callable to process tree items right before they are passed to templates.\n\n Callable should be able to:\n\n a) handle ``tree_items`` and ``tree_sender`` key params.\n ``tree_items`` will contain a list of extended TreeItem objects ready to pass to template.\n ``tree_sender`` will contain navigation type identifier\n (e.g.: `menu`, `sitetree`, `breadcrumbs`, `menu.children`, `sitetree.children`)\n\n b) return a list of extended TreeItems objects to pass to template.\n\n\n Example::\n\n # Put the following code somewhere where it'd be triggered as expected. E.g. in app view.py.\n\n # First import the register function.\n from sitetree.sitetreeapp import register_items_hook\n\n # The following function will be used as items processor.\n def my_items_processor(tree_items, tree_sender):\n # Suppose we want to process only menu child items.\n if tree_sender == 'menu.children':\n # Lets add 'Hooked: ' to resolved titles of every item.\n for item in tree_items:\n item.title_resolved = 'Hooked: %s' % item.title_resolved\n # Return items list mutated or not.\n return tree_items\n\n # And we register items processor.\n register_items_hook(my_items_processor)\n\n :param func:"}
{"_id": "q_6071", "text": "Returns a structure describing a dynamic sitetree.utils\n The structure can be built from various sources,\n\n :param str|iterable src: If a string is passed to `src`, it'll be treated as the name of an app,\n from where one want to import sitetrees definitions. `src` can be an iterable\n of tree definitions (see `sitetree.toolbox.tree()` and `item()` functions).\n\n :param str|unicode target_tree_alias: Static tree alias to attach items from dynamic trees to.\n\n :param str|unicode parent_tree_item_alias: Tree item alias from a static tree to attach items from dynamic trees to.\n\n :param list include_trees: Sitetree aliases to filter `src`.\n\n :rtype: dict"}
{"_id": "q_6072", "text": "Initializes local cache from Django cache."}
{"_id": "q_6073", "text": "Updates cache entry parameter with new data.\n\n :param str|unicode entry_name:\n :param key:\n :param value:"}
{"_id": "q_6074", "text": "Initializes sitetree to handle new request.\n\n :param Context|None context:"}
{"_id": "q_6075", "text": "Resolves internationalized tree alias.\n Verifies whether a separate sitetree is available for currently active language.\n If so, returns i18n alias. If not, returns the initial alias.\n\n :param str|unicode alias:\n :rtype: str|unicode"}
{"_id": "q_6076", "text": "Returns boolean whether current application is Admin contrib.\n\n :rtype: bool"}
{"_id": "q_6077", "text": "Calculates depth of the item in the tree.\n\n :param str|unicode tree_alias:\n :param int item_id:\n :param int depth:\n :rtype: int"}
{"_id": "q_6078", "text": "Builds and returns menu structure for 'sitetree_menu' tag.\n\n :param str|unicode tree_alias:\n :param str|unicode tree_branches:\n :param Context context:\n :rtype: list|str"}
{"_id": "q_6079", "text": "Checks whether a current user has an access to a certain item.\n\n :param TreeItemBase item:\n :param Context context:\n :rtype: bool"}
{"_id": "q_6080", "text": "Builds and returns breadcrumb trail structure for 'sitetree_breadcrumbs' tag.\n\n :param str|unicode tree_alias:\n :param Context context:\n :rtype: list|str"}
{"_id": "q_6081", "text": "Builds and returns tree structure for 'sitetree_tree' tag.\n\n :param str|unicode tree_alias:\n :param Context context:\n :rtype: list|str"}
{"_id": "q_6082", "text": "Builds and returns site tree item children structure for 'sitetree_children' tag.\n\n :param TreeItemBase parent_item:\n :param str|unicode navigation_type: menu, sitetree\n :param str|unicode use_template:\n :param Context context:\n :rtype: list"}
{"_id": "q_6083", "text": "Returns item's children.\n\n :param str|unicode tree_alias:\n :param TreeItemBase|None item:\n :rtype: list"}
{"_id": "q_6084", "text": "Updates 'has_children' attribute for tree items inplace.\n\n :param str|unicode tree_alias:\n :param list tree_items:\n :param str|unicode navigation_type: sitetree, breadcrumbs, menu"}
{"_id": "q_6085", "text": "Filters sitetree item's children if hidden and by navigation type.\n\n NB: We do not apply any filters to sitetree in admin app.\n\n :param list items:\n :param str|unicode navigation_type: sitetree, breadcrumbs, menu\n :rtype: list"}
{"_id": "q_6086", "text": "Climbs up the site tree to resolve root item for chosen one.\n\n :param str|unicode tree_alias:\n :param TreeItemBase base_item:\n :rtype: TreeItemBase"}
{"_id": "q_6087", "text": "Resolves name as a variable in a given context.\n\n If no context specified page context' is considered as context.\n\n :param str|unicode varname:\n :param Context context:\n :return:"}
{"_id": "q_6088", "text": "Parses sitetree_breadcrumbs tag parameters.\n\n Two notation types are possible:\n 1. Two arguments:\n {% sitetree_breadcrumbs from \"mytree\" %}\n Used to render breadcrumb path for \"mytree\" site tree.\n\n 2. Four arguments:\n {% sitetree_breadcrumbs from \"mytree\" template \"sitetree/mycrumb.html\" %}\n Used to render breadcrumb path for \"mytree\" site tree using specific\n template \"sitetree/mycrumb.html\""}
{"_id": "q_6089", "text": "Parses sitetree_menu tag parameters.\n\n {% sitetree_menu from \"mytree\" include \"trunk,1,level3\" %}\n Used to render trunk, branch with id 1 and branch aliased 'level3'\n elements from \"mytree\" site tree as a menu.\n\n These are reserved aliases:\n * 'trunk' - items without parents\n * 'this-children' - items under item resolved as current for the current page\n * 'this-siblings' - items under parent of item resolved as current for\n the current page (current item included)\n * 'this-ancestor-children' - items under grandparent item (closest to root)\n for the item resolved as current for the current page\n\n {% sitetree_menu from \"mytree\" include \"trunk,1,level3\" template \"sitetree/mymenu.html\" %}"}
{"_id": "q_6090", "text": "Render helper is used by template node functions\n to render given template with given tree items in context."}
{"_id": "q_6091", "text": "Node constructor to be used in tags."}
{"_id": "q_6092", "text": "Fixes Admin contrib redirects compatibility problems\n introduced in Django 1.4 by url handling changes."}
{"_id": "q_6093", "text": "Redirects to the appropriate items' 'continue' page on item add.\n\n As we administer tree items within tree itself, we\n should make some changes to redirection process."}
{"_id": "q_6094", "text": "Redirects to the appropriate items' 'add' page on item change.\n\n As we administer tree items within tree itself, we\n should make some changes to redirection process."}
{"_id": "q_6095", "text": "Returns modified form for TreeItem model.\n 'Parent' field choices are built by sitetree itself."}
{"_id": "q_6096", "text": "Fetches Tree for current or given TreeItem."}
{"_id": "q_6097", "text": "Moves item up or down by swapping 'sort_order' field values of neighboring items."}
{"_id": "q_6098", "text": "Manages not only TreeAdmin URLs but also TreeItemAdmin URLs."}
{"_id": "q_6099", "text": "Dumps sitetrees with items using django-smuggler.\n\n :param request:\n :return:"}
{"_id": "q_6100", "text": "Dynamically creates and returns a sitetree.\n\n :param str|unicode alias:\n :param str|unicode title:\n :param iterable items: dynamic sitetree items objects created by `item` function.\n :param kwargs: Additional arguments to pass to tree item initializer.\n\n :rtype: TreeBase"}
{"_id": "q_6101", "text": "Dynamically creates and returns a sitetree item object.\n\n :param str|unicode title:\n\n :param str|unicode url:\n\n :param list, set children: a list of children for tree item. Children should also be created by `item` function.\n\n :param bool url_as_pattern: consider URL as a name of a named URL\n\n :param str|unicode hint: hints are usually shown to users\n\n :param str|unicode alias: item name to address it from templates\n\n :param str|unicode description: additional information on item (usually is not shown to users)\n\n :param bool in_menu: show this item in menus\n\n :param bool in_breadcrumbs: show this item in breadcrumbs\n\n :param bool in_sitetree: show this item in sitetrees\n\n :param bool access_loggedin: show item to logged in users only\n\n :param bool access_guest: show item to guest users only\n\n :param list|str||unicode|int, Permission access_by_perms: restrict access to users with these permissions.\n\n This can be set to one or a list of permission names, IDs or Permission instances.\n\n Permission names are more portable and should be in a form `<app_label>.<perm_codename>`, e.g.:\n my_app.allow_save\n\n\n :param bool perms_mode_all: permissions set interpretation rule:\n True - user should have all the permissions;\n False - user should have any of chosen permissions.\n\n :rtype: TreeItemBase"}
{"_id": "q_6102", "text": "Imports sitetree module from a given app.\n\n :param str|unicode app: Application name\n :return: module|None"}
{"_id": "q_6103", "text": "Returns a certain sitetree model as defined in the project settings.\n\n :param str|unicode settings_entry_name:\n :rtype: TreeItemBase|TreeBase"}
{"_id": "q_6104", "text": "Wrapper function for IPv4 and IPv6 converters.\n\n :arg ip: IPv4 or IPv6 address"}
{"_id": "q_6105", "text": "Populate location dict for converted IP.\n Returns dict with numerous location properties.\n\n :arg ipnum: Result of ip2long conversion"}
{"_id": "q_6106", "text": "Hostname lookup method, supports both IPv4 and IPv6."}
{"_id": "q_6107", "text": "Returns the database ID for specified hostname.\n The id might be useful as array index. 0 is unknown.\n\n :arg hostname: Hostname to get ID from."}
{"_id": "q_6108", "text": "Returns the database ID for specified address.\n The ID might be useful as array index. 0 is unknown.\n\n :arg addr: IPv4 or IPv6 address (eg. 203.0.113.30)"}
{"_id": "q_6109", "text": "Returns full country name for specified hostname.\n\n :arg hostname: Hostname (e.g. example.com)"}
{"_id": "q_6110", "text": "Returns Organization, ISP, or ASNum name for given IP address.\n\n :arg addr: IP address (e.g. 203.0.113.30)"}
{"_id": "q_6111", "text": "Returns Organization, ISP, or ASNum name for given hostname.\n\n :arg hostname: Hostname (e.g. example.com)"}
{"_id": "q_6112", "text": "Returns time zone from country and region code.\n\n :arg country_code: Country code\n :arg region_code: Region code"}
{"_id": "q_6113", "text": "If the given filename should be compressed, returns the\n compressed filename.\n\n A file can be compressed if:\n\n - It is a whitelisted extension\n - The compressed file does not exist\n - The compressed file exists by is older than the file itself\n\n Otherwise, it returns False."}
{"_id": "q_6114", "text": "Copy or symlink the file."}
{"_id": "q_6115", "text": "Transform path to url, converting backslashes to slashes if needed."}
{"_id": "q_6116", "text": "Reads markdown file, converts output and fetches title and meta-data for\n further processing."}
{"_id": "q_6117", "text": "Loads the exif data of all images in an album from cache"}
{"_id": "q_6118", "text": "Restores the exif data cache from the cache file"}
{"_id": "q_6119", "text": "Stores the exif data of all images in the gallery"}
{"_id": "q_6120", "text": "Removes all filtered Media and subdirs from an Album"}
{"_id": "q_6121", "text": "Run sigal to process a directory.\n\n If provided, 'source', 'destination' and 'theme' will override the\n corresponding values from the settings file."}
{"_id": "q_6122", "text": "Run a simple web server."}
{"_id": "q_6123", "text": "Write metadata keys to .md file.\n\n TARGET can be a media file or an album directory. KEYS are key/value pairs.\n\n Ex, to set the title of test.jpg to \"My test image\":\n\n sigal set_meta test.jpg title \"My test image\""}
{"_id": "q_6124", "text": "Create output directories for thumbnails and original images."}
{"_id": "q_6125", "text": "URL of the album, relative to its parent."}
{"_id": "q_6126", "text": "Path to the thumbnail of the album."}
{"_id": "q_6127", "text": "Make a ZIP archive with all media files and return its path.\n\n If the ``zip_gallery`` setting is set,it contains the location of a zip\n archive with all original images of the corresponding directory."}
{"_id": "q_6128", "text": "Create the image gallery"}
{"_id": "q_6129", "text": "Process a list of images in a directory."}
{"_id": "q_6130", "text": "Returns an image with reduced opacity."}
{"_id": "q_6131", "text": "Returns the dimensions of the video."}
{"_id": "q_6132", "text": "Video processor.\n\n :param source: path to a video\n :param outname: path to the generated video\n :param settings: settings dict\n :param options: array of options passed to ffmpeg"}
{"_id": "q_6133", "text": "Generate the HTML page and save it."}
{"_id": "q_6134", "text": "Return the path to the thumb.\n\n examples:\n >>> default_settings = create_settings()\n >>> get_thumb(default_settings, \"bar/foo.jpg\")\n \"bar/thumbnails/foo.jpg\"\n >>> get_thumb(default_settings, \"bar/foo.png\")\n \"bar/thumbnails/foo.png\"\n\n for videos, it returns a jpg file:\n >>> get_thumb(default_settings, \"bar/foo.webm\")\n \"bar/thumbnails/foo.jpg\""}
{"_id": "q_6135", "text": "Generate the media page and save it"}
{"_id": "q_6136", "text": "Create a configuration from a mapping.\n\n This allows either a mapping to be directly passed or as\n keyword arguments, for example,\n\n .. code-block:: python\n\n config = {'keep_alive_timeout': 10}\n Config.from_mapping(config)\n Config.form_mapping(keep_alive_timeout=10)\n\n Arguments:\n mapping: Optionally a mapping object.\n kwargs: Optionally a collection of keyword arguments to\n form a mapping."}
{"_id": "q_6137", "text": "Create a configuration from a Python file.\n\n .. code-block:: python\n\n Config.from_pyfile('hypercorn_config.py')\n\n Arguments:\n filename: The filename which gives the path to the file."}
{"_id": "q_6138", "text": "Load the configuration values from a TOML formatted file.\n\n This allows configuration to be loaded as so\n\n .. code-block:: python\n\n Config.from_toml('config.toml')\n\n Arguments:\n filename: The filename which gives the path to the file."}
{"_id": "q_6139", "text": "Create a configuration from a Python object.\n\n This can be used to reference modules or objects within\n modules for example,\n\n .. code-block:: python\n\n Config.from_object('module')\n Config.from_object('module.instance')\n from module import instance\n Config.from_object(instance)\n\n are valid.\n\n Arguments:\n instance: Either a str referencing a python object or the\n object itself."}
{"_id": "q_6140", "text": "Creates a set of zipkin attributes for a span.\n\n :param sample_rate: Float between 0.0 and 100.0 to determine sampling rate\n :type sample_rate: float\n :param trace_id: Optional 16-character hex string representing a trace_id.\n If this is None, a random trace_id will be generated.\n :type trace_id: str\n :param span_id: Optional 16-character hex string representing a span_id.\n If this is None, a random span_id will be generated.\n :type span_id: str\n :param use_128bit_trace_id: If true, generate 128-bit trace_ids\n :type use_128bit_trace_id: boolean"}
{"_id": "q_6141", "text": "Exit the span context. Zipkin attrs are pushed onto the\n threadlocal stack regardless of sampling, so they always need to be\n popped off. The actual logging of spans depends on sampling and that\n the logging was correctly set up."}
{"_id": "q_6142", "text": "Adds a 'sa' binary annotation to the current span.\n\n 'sa' binary annotations are useful for situations where you need to log\n where a request is going but the destination doesn't support zipkin.\n\n Note that the span must have 'cs'/'cr' annotations.\n\n :param port: The port number of the destination\n :type port: int\n :param service_name: The name of the destination service\n :type service_name: str\n :param host: Host address of the destination\n :type host: str"}
{"_id": "q_6143", "text": "Overrides the current span name.\n\n This is useful if you don't know the span name yet when you create the\n zipkin_span object. i.e. pyramid_zipkin doesn't know which route the\n request matched until the function wrapped by the context manager\n completes.\n\n :param name: New span name\n :type name: str"}
{"_id": "q_6144", "text": "Creates a new Endpoint object.\n\n :param port: TCP/UDP port. Defaults to 0.\n :type port: int\n :param service_name: service name as a str. Defaults to 'unknown'.\n :type service_name: str\n :param host: ipv4 or ipv6 address of the host. Defaults to the\n current host ip.\n :type host: str\n :param use_defaults: whether to use defaults.\n :type use_defaults: bool\n :returns: zipkin Endpoint object"}
{"_id": "q_6145", "text": "Creates a copy of a given endpoint with a new service name.\n\n :param endpoint: existing Endpoint object\n :type endpoint: Endpoint\n :param new_service_name: new service name\n :type new_service_name: str\n :returns: zipkin new Endpoint object"}
{"_id": "q_6146", "text": "Builds and returns a V1 Span.\n\n :return: newly generated _V1Span\n :rtype: _V1Span"}
{"_id": "q_6147", "text": "Encode list of protobuf Spans to binary.\n\n :param pb_spans: list of protobuf Spans.\n :type pb_spans: list of zipkin_pb2.Span\n :return: encoded list.\n :rtype: bytes"}
{"_id": "q_6148", "text": "Converts a py_zipkin Span in a protobuf Span.\n\n :param span: py_zipkin Span to convert.\n :type span: py_zipkin.encoding.Span\n :return: protobuf's Span\n :rtype: zipkin_pb2.Span"}
{"_id": "q_6149", "text": "Encodes to hexadecimal ids to big-endian binary.\n\n :param hex_id: hexadecimal id to encode.\n :type hex_id: str\n :return: binary representation.\n :type: bytes"}
{"_id": "q_6150", "text": "Converts py_zipkin's Kind to Protobuf's Kind.\n\n :param kind: py_zipkin's Kind.\n :type kind: py_zipkin.Kind\n :return: correcponding protobuf's kind value.\n :rtype: zipkin_pb2.Span.Kind"}
{"_id": "q_6151", "text": "Converts py_zipkin's Endpoint to Protobuf's Endpoint.\n\n :param endpoint: py_zipkins' endpoint to convert.\n :type endpoint: py_zipkin.encoding.Endpoint\n :return: corresponding protobuf's endpoint.\n :rtype: zipkin_pb2.Endpoint"}
{"_id": "q_6152", "text": "Create a zipkin annotation object\n\n :param timestamp: timestamp of when the annotation occured in microseconds\n :param value: name of the annotation, such as 'sr'\n :param host: zipkin endpoint object\n\n :returns: zipkin annotation object"}
{"_id": "q_6153", "text": "Create a zipkin binary annotation object\n\n :param key: name of the annotation, such as 'http.uri'\n :param value: value of the annotation, such as a URI\n :param annotation_type: type of annotation, such as AnnotationType.I32\n :param host: zipkin endpoint object\n\n :returns: zipkin binary annotation object"}
{"_id": "q_6154", "text": "Create a zipkin Endpoint object.\n\n An Endpoint object holds information about the network context of a span.\n\n :param port: int value of the port. Defaults to 0\n :param service_name: service name as a str. Defaults to 'unknown'\n :param ipv4: ipv4 host address\n :param ipv6: ipv6 host address\n :returns: thrift Endpoint object"}
{"_id": "q_6155", "text": "Copies a copy of a given endpoint with a new service name.\n This should be very fast, on the order of several microseconds.\n\n :param endpoint: existing zipkin_core.Endpoint object\n :param service_name: str of new service name\n :returns: zipkin Endpoint object"}
{"_id": "q_6156", "text": "Reformat annotations dict to return list of corresponding zipkin_core objects.\n\n :param annotations: dict containing key as annotation name,\n value being timestamp in seconds(float).\n :type host: :class:`zipkin_core.Endpoint`\n :returns: a list of annotation zipkin_core objects\n :rtype: list"}
{"_id": "q_6157", "text": "Takes a bunch of span attributes and returns a thriftpy2 representation\n of the span. Timestamps passed in are in seconds, they're converted to\n microseconds before thrift encoding."}
{"_id": "q_6158", "text": "Returns a TBinaryProtocol encoded Thrift span.\n\n :param thrift_span: thrift object to encode.\n :returns: thrift object in TBinaryProtocol format bytes."}
{"_id": "q_6159", "text": "Returns a TBinaryProtocol encoded list of Thrift objects.\n\n :param binary_thrift_obj_list: list of TBinaryProtocol objects to encode.\n :returns: bynary object representing the encoded list."}
{"_id": "q_6160", "text": "Returns the span type and encoding for the message provided.\n\n The logic in this function is a Python port of\n https://github.com/openzipkin/zipkin/blob/master/zipkin/src/main/java/zipkin/internal/DetectingSpanDecoder.java\n\n :param message: span to perform operations on.\n :type message: byte array\n :returns: span encoding.\n :rtype: Encoding"}
{"_id": "q_6161", "text": "Converts encoded spans to a different encoding.\n\n param spans: encoded input spans.\n type spans: byte array\n param output_encoding: desired output encoding.\n type output_encoding: Encoding\n param input_encoding: optional input encoding. If this is not specified, it'll\n try to understand the encoding automatically by inspecting the input spans.\n type input_encoding: Encoding\n :returns: encoded spans.\n :rtype: byte array"}
{"_id": "q_6162", "text": "Encodes the current span to thrift."}
{"_id": "q_6163", "text": "Encodes a single span to protobuf."}
{"_id": "q_6164", "text": "Decodes an encoded list of spans.\n\n :param spans: encoded list of spans\n :type spans: bytes\n :return: list of spans\n :rtype: list of Span"}
{"_id": "q_6165", "text": "Accepts a thrift decoded endpoint and converts it to an Endpoint.\n\n :param thrift_endpoint: thrift encoded endpoint\n :type thrift_endpoint: thrift endpoint\n :returns: decoded endpoint\n :rtype: Encoding"}
{"_id": "q_6166", "text": "Accepts a thrift annotation and converts it to a v1 annotation.\n\n :param thrift_annotations: list of thrift annotations.\n :type thrift_annotations: list of zipkin_core.Span.Annotation\n :returns: (annotations, local_endpoint, kind)"}
{"_id": "q_6167", "text": "Accepts a thrift decoded binary annotation and converts it\n to a v1 binary annotation."}
{"_id": "q_6168", "text": "Decodes a thrift span.\n\n :param thrift_span: thrift span\n :type thrift_span: thrift Span object\n :returns: span builder representing this span\n :rtype: Span"}
{"_id": "q_6169", "text": "Converts the provided unsigned long value to a hex string.\n\n :param value: the value to convert\n :type value: unsigned long\n :returns: value as a hex string"}
{"_id": "q_6170", "text": "Writes an unsigned long value across a byte array.\n\n :param data: the buffer to write the value to\n :type data: bytearray\n :param pos: the starting position\n :type pos: int\n :param value: the value to write\n :type value: unsigned long"}
{"_id": "q_6171", "text": "mBank Collect uses transaction code 911 to distinguish icoming mass\n payments transactions, adding transaction_code may be helpful in further\n processing"}
{"_id": "q_6172", "text": "mBank Collect uses ID IPH to distinguish between virtual accounts,\n adding iph_id may be helpful in further processing"}
{"_id": "q_6173", "text": "mBank Collect states TNR in transaction details as unique id for\n transactions, that may be used to identify the same transactions in\n different statement files eg. partial mt942 and full mt940\n Information about tnr uniqueness has been obtained from mBank support,\n it lacks in mt940 mBank specification."}
{"_id": "q_6174", "text": "Parses mt940 data and returns transactions object\n\n :param src: file handler to read, filename to read or raw data as string\n :return: Collection of transactions\n :rtype: Transactions"}
{"_id": "q_6175", "text": "Join strings together and strip whitespace in between if needed"}
{"_id": "q_6176", "text": "Handles the message shown when we are ratelimited"}
{"_id": "q_6177", "text": "Handles requests to the API"}
{"_id": "q_6178", "text": "Gets the information of the given Bot ID"}
{"_id": "q_6179", "text": "Gets an object of bots on DBL"}
{"_id": "q_6180", "text": "Write outgoing message."}
{"_id": "q_6181", "text": "Encode Erlang external term."}
{"_id": "q_6182", "text": "Asks user for removal of project directory and eventually removes it"}
{"_id": "q_6183", "text": "Check the defined project name against keywords, builtins and existing\n modules to avoid name clashing"}
{"_id": "q_6184", "text": "Checks and validate provided input"}
{"_id": "q_6185", "text": "Converts the current version to the next one for inserting into requirements\n in the ' < version' format"}
{"_id": "q_6186", "text": "Parse config file.\n\n Returns a list of additional args."}
{"_id": "q_6187", "text": "Install aldryn boilerplate\n\n :param config_data: configuration data"}
{"_id": "q_6188", "text": "Create admin user without user input\n\n :param config_data: configuration data"}
{"_id": "q_6189", "text": "Method sleeps, if nothing to do"}
{"_id": "q_6190", "text": "cleans up and stops the discovery server"}
{"_id": "q_6191", "text": "construct a a raw SOAP XML string, given a prepared SoapEnvelope object"}
{"_id": "q_6192", "text": "Return a list of RelatedObject records for child relations of the given model,\n including ones attached to ancestors of the model"}
{"_id": "q_6193", "text": "Return a list of ParentalManyToManyFields on the given model,\n including ones attached to ancestors of the model"}
{"_id": "q_6194", "text": "Save the model and commit all child relations."}
{"_id": "q_6195", "text": "Build an instance of this model from the JSON-like structure passed in,\n recursing into related objects as required.\n If check_fks is true, it will check whether referenced foreign keys still\n exist in the database.\n - dangling foreign keys on related objects are dealt with by either nullifying the key or\n dropping the related object, according to the 'on_delete' setting.\n - dangling foreign keys on the base object will be nullified, unless strict_fks is true,\n in which case any dangling foreign keys with on_delete=CASCADE will cause None to be\n returned for the entire object."}
{"_id": "q_6196", "text": "This clean method will check for unique_together condition"}
{"_id": "q_6197", "text": "Return True if data differs from initial."}
{"_id": "q_6198", "text": "Returns the address with a valid checksum attached."}
{"_id": "q_6199", "text": "Generates the correct checksum for this address."}
{"_id": "q_6200", "text": "Returns the argument parser that will be used to interpret\n arguments and options from argv."}
{"_id": "q_6201", "text": "Returns whether a sequence of signature fragments is valid.\n\n :param fragments:\n Sequence of signature fragments (usually\n :py:class:`iota.transaction.Fragment` instances).\n\n :param hash_:\n Hash used to generate the signature fragments (usually a\n :py:class:`iota.transaction.BundleHash` instance).\n\n :param public_key:\n The public key value used to verify the signature digest (usually a\n :py:class:`iota.types.Address` instance).\n\n :param sponge_type:\n The class used to create the cryptographic sponge (i.e., Curl or Kerl)."}
{"_id": "q_6202", "text": "Generates the key associated with the specified address.\n\n Note that this method will generate the wrong key if the input\n address was generated from a different key!"}
{"_id": "q_6203", "text": "Creates a generator that can be used to progressively generate\n new keys.\n\n :param start:\n Starting index.\n\n Warning: This method may take awhile to reset if ``start``\n is a large number!\n\n :param step:\n Number of indexes to advance after each key.\n\n This value can be negative; the generator will exit if it\n reaches an index < 0.\n\n Warning: The generator may take awhile to advance between\n iterations if ``step`` is a large number!\n\n :param security_level:\n Number of _transform iterations to apply to each key.\n Must be >= 1.\n\n Increasing this value makes key generation slower, but more\n resistant to brute-forcing."}
{"_id": "q_6204", "text": "Absorb trits into the sponge.\n\n :param trits:\n Sequence of trits to absorb.\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to absorb. Defaults to ``len(trits)``."}
{"_id": "q_6205", "text": "Squeeze trits from the sponge.\n\n :param trits:\n Sequence that the squeezed trits will be copied to.\n Note: this object will be modified!\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to squeeze, default to ``HASH_LENGTH``"}
{"_id": "q_6206", "text": "Transforms internal state."}
{"_id": "q_6207", "text": "Generates one or more private keys from the seed.\n\n As the name implies, private keys should not be shared.\n However, in a few cases it may be necessary (e.g., for M-of-N\n transactions).\n\n :param index:\n The starting key index.\n\n :param count:\n Number of keys to generate.\n\n :param security_level:\n Number of iterations to use when generating new keys.\n\n Larger values take longer, but the resulting signatures are\n more secure.\n\n This value must be between 1 and 3, inclusive.\n\n :return:\n Dict with the following items::\n\n {\n 'keys': List[PrivateKey],\n Always contains a list, even if only one key was\n generated.\n }\n\n References:\n\n - :py:class:`iota.crypto.signing.KeyGenerator`\n - https://github.com/iotaledger/wiki/blob/master/multisigs.md#how-m-of-n-works"}
{"_id": "q_6208", "text": "Prepares a bundle that authorizes the spending of IOTAs from a\n multisig address.\n\n .. note::\n This method is used exclusively to spend IOTAs from a\n multisig address.\n\n If you want to spend IOTAs from non-multisig addresses, or\n if you want to create 0-value transfers (i.e., that don't\n require inputs), use\n :py:meth:`iota.api.Iota.prepare_transfer` instead.\n\n :param transfers:\n Transaction objects to prepare.\n\n .. important::\n Must include at least one transaction that spends IOTAs\n (i.e., has a nonzero ``value``). If you want to prepare\n a bundle that does not spend any IOTAs, use\n :py:meth:`iota.api.prepare_transfer` instead.\n\n :param multisig_input:\n The multisig address to use as the input for the transfers.\n\n .. note::\n This method only supports creating a bundle with a\n single multisig input.\n\n If you would like to spend from multiple multisig\n addresses in the same bundle, create the\n :py:class:`iota.multisig.transaction.ProposedMultisigBundle`\n object manually.\n\n :param change_address:\n If inputs are provided, any unspent amount will be sent to\n this address.\n\n If the bundle has no unspent inputs, ``change_address` is\n ignored.\n\n .. important::\n Unlike :py:meth:`iota.api.Iota.prepare_transfer`, this\n method will NOT generate a change address automatically.\n If there are unspent inputs and ``change_address`` is\n empty, an exception will be raised.\n\n This is because multisig transactions typically involve\n multiple individuals, and it would be unfair to the\n participants if we generated a change address\n automatically using the seed of whoever happened to run\n the ``prepare_multisig_transfer`` method!\n\n .. 
danger::\n Note that this protective measure is not a\n substitute for due diligence!\n\n Always verify the details of every transaction in a\n bundle (including the change transaction) before\n signing the input(s)!\n\n :return:\n Dict containing the following values::\n\n {\n 'trytes': List[TransactionTrytes],\n Finalized bundle, as trytes.\n The input transactions are not signed.\n }\n\n In order to authorize the spending of IOTAs from the multisig\n input, you must generate the correct private keys and invoke\n the :py:meth:`iota.crypto.types.PrivateKey.sign_input_at`\n method for each key, in the correct order.\n\n Once the correct signatures are applied, you can then perform\n proof of work (``attachToTangle``) and broadcast the bundle\n using :py:meth:`iota.api.Iota.send_trytes`."}
{"_id": "q_6209", "text": "Adds two individual trits together.\n\n The result is always a single trit."}
{"_id": "q_6210", "text": "Outputs the user's seed to stdout, along with lots of warnings\n about security."}
{"_id": "q_6211", "text": "Find the transactions which match the specified input and\n return.\n\n All input values are lists, for which a list of return values\n (transaction hashes), in the same order, is returned for all\n individual elements.\n\n Using multiple of these input fields returns the intersection of\n the values.\n\n :param bundles:\n List of bundle IDs.\n\n :param addresses:\n List of addresses.\n\n :param tags:\n List of tags.\n\n :param approvees:\n List of approvee transaction IDs.\n\n References:\n\n - https://iota.readme.io/docs/findtransactions"}
{"_id": "q_6212", "text": "Gets all possible inputs of a seed and returns them, along with\n the total balance.\n\n This is either done deterministically (by generating all\n addresses until :py:meth:`find_transactions` returns an empty\n result), or by providing a key range to search.\n\n :param start:\n Starting key index.\n Defaults to 0.\n\n :param stop:\n Stop before this index.\n\n Note that this parameter behaves like the ``stop`` attribute\n in a :py:class:`slice` object; the stop index is *not*\n included in the result.\n\n If ``None`` (default), then this method will not stop until\n it finds an unused address.\n\n :param threshold:\n If set, determines the minimum threshold for a successful\n result:\n\n - As soon as this threshold is reached, iteration will stop.\n - If the command runs out of addresses before the threshold\n is reached, an exception is raised.\n\n .. note::\n This method does not attempt to \"optimize\" the result\n (e.g., smallest number of inputs, get as close to\n ``threshold`` as possible, etc.); it simply accumulates\n inputs in order until the threshold is met.\n\n If ``threshold`` is 0, the first address in the key range\n with a non-zero balance will be returned (if it exists).\n\n If ``threshold`` is ``None`` (default), this method will\n return **all** inputs in the specified key range.\n\n :param security_level:\n Number of iterations to use when generating new addresses\n (see :py:meth:`get_new_addresses`).\n\n This value must be between 1 and 3, inclusive.\n\n If not set, defaults to\n :py:attr:`AddressGenerator.DEFAULT_SECURITY_LEVEL`.\n\n :return:\n Dict with the following structure::\n\n {\n 'inputs': List[Address],\n Addresses with nonzero balances that can be used\n as inputs.\n\n 'totalBalance': int,\n Aggregate balance from all matching addresses.\n }\n\n Note that each Address in the result has its ``balance``\n attribute set.\n\n Example:\n\n .. 
code-block:: python\n\n response = iota.get_inputs(...)\n\n input0 = response['inputs'][0] # type: Address\n input0.balance # 42\n\n :raise:\n - :py:class:`iota.adapter.BadApiResponse` if ``threshold``\n is not met. Not applicable if ``threshold`` is ``None``.\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#getinputs"}
{"_id": "q_6213", "text": "Generates one or more new addresses from the seed.\n\n :param index:\n The key index of the first new address to generate (must be\n >= 1).\n\n :param count:\n Number of addresses to generate (must be >= 1).\n\n .. tip::\n This is more efficient than calling ``get_new_address``\n inside a loop.\n\n If ``None``, this method will progressively generate\n addresses and scan the Tangle until it finds one that has no\n transactions referencing it.\n\n :param security_level:\n Number of iterations to use when generating new addresses.\n\n Larger values take longer, but the resulting signatures are\n more secure.\n\n This value must be between 1 and 3, inclusive.\n\n :param checksum:\n Specify whether to return the address with the checksum.\n Defaults to ``False``.\n\n :return:\n Dict with the following structure::\n\n {\n 'addresses': List[Address],\n Always a list, even if only one address was\n generated.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#getnewaddress"}
{"_id": "q_6214", "text": "Promotes a transaction by adding spam on top of it.\n\n :return:\n Dict with the following structure::\n\n {\n 'bundle': Bundle,\n The newly-published bundle.\n }"}
{"_id": "q_6215", "text": "Takes a tail transaction hash as input, gets the bundle\n associated with the transaction and then replays the bundle by\n attaching it to the Tangle.\n\n :param transaction:\n Transaction hash. Must be a tail.\n\n :param depth:\n Depth at which to attach the bundle.\n Defaults to 3.\n\n :param min_weight_magnitude:\n Min weight magnitude, used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :return:\n Dict with the following structure::\n\n {\n 'trytes': List[TransactionTrytes],\n Raw trytes that were published to the Tangle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#replaytransfer"}
{"_id": "q_6216", "text": "Prepares a set of transfers and creates the bundle, then\n attaches the bundle to the Tangle, and broadcasts and stores the\n transactions.\n\n :param transfers:\n Transfers to include in the bundle.\n\n :param depth:\n Depth at which to attach the bundle.\n Defaults to 3.\n\n :param inputs:\n List of inputs used to fund the transfer.\n Not needed for zero-value transfers.\n\n :param change_address:\n If inputs are provided, any unspent amount will be sent to\n this address.\n\n If not specified, a change address will be generated\n automatically.\n\n :param min_weight_magnitude:\n Min weight magnitude, used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :param security_level:\n Number of iterations to use when generating new addresses\n (see :py:meth:`get_new_addresses`).\n\n This value must be between 1 and 3, inclusive.\n\n If not set, defaults to\n :py:attr:`AddressGenerator.DEFAULT_SECURITY_LEVEL`.\n\n :return:\n Dict with the following structure::\n\n {\n 'bundle': Bundle,\n The newly-published bundle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#sendtransfer"}
{"_id": "q_6217", "text": "Attaches transaction trytes to the Tangle, then broadcasts and\n stores them.\n\n :param trytes:\n Transaction encoded as a tryte sequence.\n\n :param depth:\n Depth at which to attach the bundle.\n Defaults to 3.\n\n :param min_weight_magnitude:\n Min weight magnitude, used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :return:\n Dict with the following structure::\n\n {\n 'trytes': List[TransactionTrytes],\n Raw trytes that were published to the Tangle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#sendtrytes"}
{"_id": "q_6218", "text": "Given a URI, returns a properly-configured adapter instance."}
{"_id": "q_6219", "text": "Sends an API request to the node.\n\n :param payload:\n JSON payload.\n\n :param kwargs:\n Additional keyword arguments for the adapter.\n\n :return:\n Decoded response from the node.\n\n :raise:\n - :py:class:`BadApiResponse` if a non-success response was\n received."}
{"_id": "q_6220", "text": "Sends a message to the instance's logger, if configured."}
{"_id": "q_6221", "text": "Sends the actual HTTP request.\n\n Split into its own method so that it can be mocked during unit\n tests."}
{"_id": "q_6222", "text": "Absorbs a digest into the sponge.\n\n .. important::\n Keep track of the order that digests are added!\n\n To spend inputs from a multisig address, you must provide\n the private keys in the same order!\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/multisigs.md#spending-inputs"}
{"_id": "q_6223", "text": "Creates an iterator that can be used to progressively generate new\n addresses.\n\n :param start:\n Starting index.\n\n Warning: This method may take awhile to reset if ``start``\n is a large number!\n\n :param step:\n Number of indexes to advance after each address.\n\n Warning: The generator may take awhile to advance between\n iterations if ``step`` is a large number!"}
{"_id": "q_6224", "text": "Generates an address from a private key digest."}
{"_id": "q_6225", "text": "Generates a new address.\n\n Used in the event of a cache miss."}
{"_id": "q_6226", "text": "Scans the Tangle for used addresses.\n\n This is basically the opposite of invoking ``getNewAddresses`` with\n ``stop=None``."}
{"_id": "q_6227", "text": "Determines which codec to use for the specified encoding.\n\n References:\n\n - https://docs.python.org/3/library/codecs.html#codecs.register"}
{"_id": "q_6228", "text": "Encodes a byte string into trytes."}
{"_id": "q_6229", "text": "Decodes a tryte string into bytes."}
{"_id": "q_6230", "text": "Adds a route to the wrapper.\n\n :param command:\n The name of the command to route (e.g., \"attachToTangle\").\n\n :param adapter:\n The adapter object or URI to route requests to."}
{"_id": "q_6231", "text": "Returns a JSON-compatible representation of the object.\n\n References:\n\n - :py:class:`iota.json.JsonEncoder`."}
{"_id": "q_6232", "text": "Returns the values needed to validate the transaction's\n ``signature_message_fragment`` value."}
{"_id": "q_6233", "text": "Returns TryteString representations of the transactions in this\n bundle.\n\n :param head_to_tail:\n Determines the order of the transactions:\n\n - ``True``: head txn first, tail txn last.\n - ``False`` (default): tail txn first, head txn last.\n\n Note that the order is reversed by default, as this is the\n way bundles are typically broadcast to the Tangle."}
{"_id": "q_6234", "text": "Automatically discover commands in the specified package.\n\n :param package:\n Package path or reference.\n\n :param recursively:\n If True, will descend recursively into sub-packages.\n\n :return:\n All commands discovered in the specified package, indexed by\n command name (note: not class name)."}
{"_id": "q_6235", "text": "Sends the request object to the adapter and returns the response.\n\n The command name will be automatically injected into the request\n before it is sent (note: this will modify the request object)."}
{"_id": "q_6236", "text": "Applies a filter to a value. If the value does not pass the\n filter, an exception will be raised with lots of contextual info\n attached to it."}
{"_id": "q_6237", "text": "Returns the URL to check job status.\n\n :param job_id:\n The ID of the job to check."}
{"_id": "q_6238", "text": "Validates the signature fragments in the bundle.\n\n :return:\n List of error messages.\n If empty, signature fragments are valid."}
{"_id": "q_6239", "text": "Validates the signature fragments for a group of transactions\n using the specified sponge type.\n\n Note: this method assumes that the transactions in the group\n have already passed basic validation (see\n :py:meth:`_create_validator`).\n\n :return:\n - ``None``: Indicates that the signature fragments are valid.\n - ``Text``: Error message indicating the fragments are invalid."}
{"_id": "q_6240", "text": "Recursively traverse the Tangle, collecting transactions until\n we hit a new bundle.\n\n This method is (usually) faster than ``findTransactions``, and\n it ensures we don't collect transactions from replayed bundles."}
{"_id": "q_6241", "text": "Starts the REPL."}
{"_id": "q_6242", "text": "Generates a random seed using a CSPRNG.\n\n :param length:\n Length of seed, in trytes.\n\n For maximum security, this should always be set to 81, but\n you can change it if you're 110% sure you know what you're\n doing.\n\n See https://iota.stackexchange.com/q/249 for more info."}
{"_id": "q_6243", "text": "Generates the digest used to do the actual signing.\n\n Signing keys can have variable length and tend to be quite long,\n which makes them not-well-suited for use in crypto algorithms.\n\n The digest is essentially the result of running the signing key\n through a PBKDF, yielding a constant-length hash that can be\n used for crypto."}
{"_id": "q_6244", "text": "Makes JSON-serializable objects play nice with IPython's default\n pretty-printer.\n\n Sadly, :py:func:`pprint.pprint` does not have a similar\n mechanism.\n\n References:\n\n - http://ipython.readthedocs.io/en/stable/api/generated/IPython.lib.pretty.html\n - :py:meth:`IPython.lib.pretty.RepresentationPrinter.pretty`\n - :py:func:`pprint._safe_repr`"}
{"_id": "q_6245", "text": "Absorb trits into the sponge from a buffer.\n\n :param trits:\n Buffer that contains the trits to absorb.\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to absorb. Defaults to ``len(trits)``."}
{"_id": "q_6246", "text": "Squeeze trits from the sponge into a buffer.\n\n :param trits:\n Buffer that will hold the squeezed trits.\n\n IMPORTANT: If ``trits`` is too small, it will be extended!\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to squeeze from the sponge.\n\n If not specified, defaults to :py:data:`TRIT_HASH_LENGTH`\n (i.e., by default, we will try to squeeze exactly 1 hash)."}
{"_id": "q_6247", "text": "Increments the transaction's legacy tag, used to fix insecure\n bundle hashes when finalizing a bundle.\n\n References:\n\n - https://github.com/iotaledger/iota.lib.py/issues/84"}
{"_id": "q_6248", "text": "Determines the most relevant tag for the bundle."}
{"_id": "q_6249", "text": "Adds a transaction to the bundle.\n\n If the transaction message is too long, it will be split\n automatically into multiple transactions."}
{"_id": "q_6250", "text": "Finalizes the bundle, preparing it to be attached to the Tangle."}
{"_id": "q_6251", "text": "Signs the input at the specified index.\n\n :param start_index:\n The index of the first input transaction.\n\n If necessary, the resulting signature will be split across\n multiple transactions automatically (i.e., if an input has\n ``security_level=2``, you still only need to call\n :py:meth:`sign_input_at` once).\n\n :param private_key:\n The private key that will be used to generate the signature.\n\n .. important::\n Be sure that the private key was generated using the\n correct seed, or the resulting signature will be\n invalid!"}
{"_id": "q_6252", "text": "Creates transactions for the specified input address."}
{"_id": "q_6253", "text": "Converts between any two standard units of iota.\n\n :param value:\n Value (affixed) to convert. For example: '1.618 Mi'.\n\n :param symbol:\n Unit symbol of iota to convert to. For example: 'Gi'.\n\n :return:\n Float as units of given symbol to convert to."}
{"_id": "q_6254", "text": "Pass an argument list to SoX.\n\n Parameters\n ----------\n args : iterable\n Argument list for SoX. The first item can, but does not\n need to, be 'sox'.\n\n Returns:\n --------\n status : bool\n True on success."}
{"_id": "q_6255", "text": "Calls SoX help for a lists of audio formats available with the current\n install of SoX.\n\n Returns:\n --------\n formats : list\n List of audio file extensions that SoX can process."}
{"_id": "q_6256", "text": "Base call to SoXI.\n\n Parameters\n ----------\n filepath : str\n Path to audio file.\n\n argument : str\n Argument to pass to SoXI.\n\n Returns\n -------\n shell_output : str\n Command line output of SoXI"}
{"_id": "q_6257", "text": "Pass an argument list to play.\n\n Parameters\n ----------\n args : iterable\n Argument list for play. The first item can, but does not\n need to, be 'play'.\n\n Returns:\n --------\n status : bool\n True on success."}
{"_id": "q_6258", "text": "Validate that combine method can be performed with given files.\n Raises IOError if input file formats are incompatible."}
{"_id": "q_6259", "text": "Check if files in input file list have the same sample rate"}
{"_id": "q_6260", "text": "Set input formats given input_volumes.\n\n Parameters\n ----------\n input_filepath_list : list of str\n List of input files\n input_volumes : list of float, default=None\n List of volumes to be applied upon combining input files. Volumes\n are applied to the input files in order.\n If None, input files will be combined at their original volumes.\n input_format : list of lists, default=None\n List of input formats to be applied to each input file. Formatting\n arguments are applied to the input files in order.\n If None, the input formats will be inferred from the file header."}
{"_id": "q_6261", "text": "Check input_volumes contains a valid list of volumes.\n\n Parameters\n ----------\n input_volumes : list\n list of volume values. Castable to numbers."}
{"_id": "q_6262", "text": "Input file validation function. Checks that file exists and can be\n processed by SoX.\n\n Parameters\n ----------\n input_filepath : str\n The input filepath."}
{"_id": "q_6263", "text": "Output file validation function. Checks that file can be written, and\n has a valid file extension. Throws a warning if the path already exists,\n as it will be overwritten on build.\n\n Parameters\n ----------\n output_filepath : str\n The output filepath.\n\n Returns:\n --------\n output_filepath : str\n The output filepath."}
{"_id": "q_6264", "text": "Get a dictionary of file information\n\n Parameters\n ----------\n filepath : str\n File path.\n\n Returns:\n --------\n info_dictionary : dict\n Dictionary of file information. Fields are:\n * channels\n * sample_rate\n * bitrate\n * duration\n * num_samples\n * encoding\n * silent"}
{"_id": "q_6265", "text": "Call sox's stat function.\n\n Parameters\n ----------\n filepath : str\n File path.\n\n Returns\n -------\n stat_output : str\n Sox output from stderr."}
{"_id": "q_6266", "text": "Apply a biquad IIR filter with the given coefficients.\n\n Parameters\n ----------\n b : list of floats\n Numerator coefficients. Must be length 3\n a : list of floats\n Denominator coefficients. Must be length 3\n\n See Also\n --------\n fir, treble, bass, equalizer"}
{"_id": "q_6267", "text": "Change the number of channels in the audio signal. If decreasing the\n number of channels it mixes channels together, if increasing the number\n of channels it duplicates.\n\n Note: This overrides arguments used in the convert effect!\n\n Parameters\n ----------\n n_channels : int\n Desired number of channels.\n\n See Also\n --------\n convert"}
{"_id": "q_6268", "text": "Comparable with compression, this effect modifies an audio signal to\n make it sound louder.\n\n Parameters\n ----------\n amount : float\n Amount of enhancement between 0 and 100.\n\n See Also\n --------\n compand, mcompand"}
{"_id": "q_6269", "text": "Apply a DC shift to the audio.\n\n Parameters\n ----------\n shift : float\n Amount to shift audio between -2 and 2. (Audio is between -1 and 1)\n\n See Also\n --------\n highpass"}
{"_id": "q_6270", "text": "Apply a flanging effect to the audio.\n\n Parameters\n ----------\n delay : float, default=0\n Base delay (in miliseconds) between 0 and 30.\n depth : float, default=2\n Added swept delay (in miliseconds) between 0 and 10.\n regen : float, default=0\n Percentage regeneration between -95 and 95.\n width : float, default=71,\n Percentage of delayed signal mixed with original between 0 and 100.\n speed : float, default=0.5\n Sweeps per second (in Hz) between 0.1 and 10.\n shape : 'sine' or 'triangle', default='sine'\n Swept wave shape\n phase : float, default=25\n Swept wave percentage phase-shift for multi-channel flange between\n 0 and 100. 0 = 100 = same phase on each channel\n interp : 'linear' or 'quadratic', default='linear'\n Digital delay-line interpolation type.\n\n See Also\n --------\n tremolo"}
{"_id": "q_6271", "text": "Apply amplification or attenuation to the audio signal.\n\n Parameters\n ----------\n gain_db : float, default=0.0\n Gain adjustment in decibels (dB).\n normalize : bool, default=True\n If True, audio is normalized to gain_db relative to full scale.\n If False, simply adjusts the audio power level by gain_db.\n limiter : bool, default=False\n If True, a simple limiter is invoked to prevent clipping.\n balance : str or None, default=None\n Balance gain across channels. Can be one of:\n * None applies no balancing (default)\n * 'e' applies gain to all channels other than that with the\n highest peak level, such that all channels attain the same\n peak level\n * 'B' applies gain to all channels other than that with the\n highest RMS level, such that all channels attain the same\n RMS level\n * 'b' applies gain with clipping protection to all channels other\n than that with the highest RMS level, such that all channels\n attain the same RMS level\n If normalize=True, 'B' and 'b' are equivalent.\n\n See Also\n --------\n loudness"}
{"_id": "q_6272", "text": "Loudness control. Similar to the gain effect, but provides\n equalisation for the human auditory system.\n\n The gain is adjusted by gain_db and the signal is equalised according\n to ISO 226 w.r.t. reference_level.\n\n Parameters\n ----------\n gain_db : float, default=-10.0\n Loudness adjustment amount (in dB)\n reference_level : float, default=65.0\n Reference level (in dB) according to which the signal is equalized.\n Must be between 50 and 75 (dB)\n\n See Also\n --------\n gain"}
{"_id": "q_6273", "text": "Calculate a profile of the audio for use in noise reduction.\n Running this command does not effect the Transformer effects\n chain. When this function is called, the calculated noise profile\n file is saved to the `profile_path`.\n\n Parameters\n ----------\n input_filepath : str\n Path to audiofile from which to compute a noise profile.\n profile_path : str\n Path to save the noise profile file.\n\n See Also\n --------\n noisered"}
{"_id": "q_6274", "text": "Normalize an audio file to a particular db level.\n This behaves identically to the gain effect with normalize=True.\n\n Parameters\n ----------\n db_level : float, default=-3.0\n Output volume (db)\n\n See Also\n --------\n gain, loudness"}
{"_id": "q_6275", "text": "Add silence to the beginning or end of a file.\n Calling this with the default arguments has no effect.\n\n Parameters\n ----------\n start_duration : float\n Number of seconds of silence to add to beginning.\n end_duration : float\n Number of seconds of silence to add to end.\n\n See Also\n --------\n delay"}
{"_id": "q_6276", "text": "Pitch shift the audio without changing the tempo.\n\n This effect uses the WSOLA algorithm. The audio is chopped up into\n segments which are then shifted in the time domain and overlapped\n (cross-faded) at points where their waveforms are most similar as\n determined by measurement of least squares.\n\n Parameters\n ----------\n n_semitones : float\n The number of semitones to shift. Can be positive or negative.\n quick : bool, default=False\n If True, this effect will run faster but with lower sound quality.\n\n See Also\n --------\n bend, speed, tempo"}
{"_id": "q_6277", "text": "Remix the channels of an audio file.\n\n Note: volume options are not yet implemented\n\n Parameters\n ----------\n remix_dictionary : dict or None\n Dictionary mapping output channel to list of input channel(s).\n Empty lists indicate the corresponding output channel should be\n empty. If None, mixes all channels down to a single mono file.\n num_output_channels : int or None\n The number of channels in the output file. If None, the number of\n output channels is equal to the largest key in remix_dictionary.\n If remix_dictionary is None, this variable is ignored.\n\n Examples\n --------\n Remix a 4-channel input file. The output file will have\n input channel 2 in channel 1, a mixdown of input channels 1 an 3 in\n channel 2, an empty channel 3, and a copy of input channel 4 in\n channel 4.\n\n >>> import sox\n >>> tfm = sox.Transformer()\n >>> remix_dictionary = {1: [2], 2: [1, 3], 4: [4]}\n >>> tfm.remix(remix_dictionary)"}
{"_id": "q_6278", "text": "Repeat the entire audio count times.\n\n Parameters\n ----------\n count : int, default=1\n The number of times to repeat the audio."}
{"_id": "q_6279", "text": "Reverse the audio completely"}
{"_id": "q_6280", "text": "Removes silent regions from an audio file.\n\n Parameters\n ----------\n location : int, default=0\n Where to remove silence. One of:\n * 0 to remove silence throughout the file (default),\n * 1 to remove silence from the beginning,\n * -1 to remove silence from the end,\n silence_threshold : float, default=0.1\n Silence threshold as percentage of maximum sample amplitude.\n Must be between 0 and 100.\n min_silence_duration : float, default=0.1\n The minimum ammount of time in seconds required for a region to be\n considered non-silent.\n buffer_around_silence : bool, default=False\n If True, leaves a buffer of min_silence_duration around removed\n silent regions.\n\n See Also\n --------\n vad"}
{"_id": "q_6281", "text": "Display time domain statistical information about the audio\n channels. Audio is passed unmodified through the SoX processing chain.\n Statistics are calculated and displayed for each audio channel\n\n Unlike other Transformer methods, this does not modify the transformer\n effects chain. Instead it computes statistics on the output file that\n would be created if the build command were invoked.\n\n Note: The file is downmixed to mono prior to computation.\n\n Parameters\n ----------\n input_filepath : str\n Path to input file to compute stats on.\n\n Returns\n -------\n stats_dict : dict\n List of frequency (Hz), amplitude pairs.\n\n See Also\n --------\n stat, sox.file_info"}
{"_id": "q_6282", "text": "Swap stereo channels. If the input is not stereo, pairs of channels\n are swapped, and a possible odd last channel passed through.\n\n E.g., for seven channels, the output order will be 2, 1, 4, 3, 6, 5, 7.\n\n See Also\n ----------\n remix"}
{"_id": "q_6283", "text": "Excerpt a clip from an audio file, given the start timestamp and end timestamp of the clip within the file, expressed in seconds. If the end timestamp is set to `None` or left unspecified, it defaults to the duration of the audio file.\n\n Parameters\n ----------\n start_time : float\n Start time of the clip (seconds)\n end_time : float or None, default=None\n End time of the clip (seconds)"}
{"_id": "q_6284", "text": "Voice Activity Detector. Attempts to trim silence and quiet\n background sounds from the ends of recordings of speech. The algorithm\n currently uses a simple cepstral power measurement to detect voice, so\n may be fooled by other things, especially music.\n\n The effect can trim only from the front of the audio, so in order to\n trim from the back, the reverse effect must also be used.\n\n Parameters\n ----------\n location : 1 or -1, default=1\n If 1, trims silence from the beginning\n If -1, trims silence from the end\n normalize : bool, default=True\n If true, normalizes audio before processing.\n activity_threshold : float, default=7.0\n The measurement level used to trigger activity detection. This may\n need to be cahnged depending on the noise level, signal level, and\n other characteristics of the input audio.\n min_activity_duration : float, default=0.25\n The time constant (in seconds) used to help ignore short bursts of\n sound.\n initial_search_buffer : float, default=1.0\n The amount of audio (in seconds) to search for quieter/shorter\n bursts of audio to include prior to the detected trigger point.\n max_gap : float, default=0.25\n The allowed gap (in seconds) between quiteter/shorter bursts of\n audio to include prior to the detected trigger point\n initial_pad : float, default=0.0\n The amount of audio (in seconds) to preserve before the trigger\n point and any found quieter/shorter bursts.\n\n See Also\n --------\n silence\n\n Examples\n --------\n >>> tfm = sox.Transformer()\n\n Remove silence from the beginning of speech\n\n >>> tfm.vad(initial_pad=0.3)\n\n Remove silence from the end of speech\n\n >>> tfm.vad(location=-1, initial_pad=0.2)"}
{"_id": "q_6285", "text": "Apply an amplification or an attenuation to the audio signal.\n\n Parameters\n ----------\n gain : float\n Interpreted according to the given `gain_type`.\n If `gain_type' = 'amplitude', `gain' is a positive amplitude ratio.\n If `gain_type' = 'power', `gain' is a power (voltage squared).\n If `gain_type' = 'db', `gain' is in decibels.\n gain_type : string, default='amplitude'\n Type of gain. One of:\n - 'amplitude'\n - 'power'\n - 'db'\n limiter_gain : float or None, default=None\n If specified, a limiter is invoked on peaks greater than\n `limiter_gain' to prevent clipping.\n `limiter_gain` should be a positive value much less than 1.\n\n See Also\n --------\n gain, compand"}
{"_id": "q_6286", "text": "Extended euclidean algorithm to find modular inverses for integers"}
{"_id": "q_6287", "text": "Lets a user join a room on a specific Namespace."}
{"_id": "q_6288", "text": "Lets a user leave a room on a specific Namespace."}
{"_id": "q_6289", "text": "Main SocketIO management function, call from within your Framework of\n choice's view.\n\n The ``environ`` variable is the WSGI ``environ``. It is used to extract\n Socket object from the underlying server (as the 'socketio' key), and will\n be attached to both the ``Socket`` and ``Namespace`` objects.\n\n The ``namespaces`` parameter is a dictionary of the namespace string\n representation as key, and the BaseNamespace namespace class descendant as\n a value. The empty string ('') namespace is the global namespace. You can\n use Socket.GLOBAL_NS to be more explicit. So it would look like:\n\n .. code-block:: python\n\n namespaces={'': GlobalNamespace,\n '/chat': ChatNamespace}\n\n The ``request`` object is not required, but will probably be useful to pass\n framework-specific things into your Socket and Namespace functions. It will\n simply be attached to the Socket and Namespace object (accessible through\n ``self.request`` in both cases), and it is not accessed in any case by the\n ``gevent-socketio`` library.\n\n Pass in an ``error_handler`` if you want to override the default\n error_handler (which is :func:`socketio.virtsocket.default_error_handler`.\n The callable you pass in should have the same signature as the default\n error handler.\n\n The ``json_loads`` and ``json_dumps`` are overrides for the default\n ``json.loads`` and ``json.dumps`` function calls. Override these at\n the top-most level here. This will affect all sockets created by this\n socketio manager, and all namespaces inside.\n\n This function will block the current \"view\" or \"controller\" in your\n framework to do the recv/send on the socket, and dispatch incoming messages\n to your namespaces.\n\n This is a simple example using Pyramid:\n\n .. 
code-block:: python\n\n def my_view(request):\n socketio_manage(request.environ, {'': GlobalNamespace}, request)\n\n NOTE: You must understand that this function is going to be called\n *only once* per socket opening, *even though* you are using a long\n polling mechanism. The subsequent calls (for long polling) will\n be hooked directly at the server-level, to interact with the\n active ``Socket`` instance. This means you will *not* get access\n to the future ``request`` or ``environ`` objects. This is of\n particular importance regarding sessions (like Beaker). The\n session will be opened once at the opening of the Socket, and not\n closed until the socket is closed. You are responsible for\n opening and closing the cookie-based session yourself if you want\n to keep its data in sync with the rest of your GET/POST calls."}
{"_id": "q_6290", "text": "Keep a reference of the callback on this socket."}
{"_id": "q_6291", "text": "Fetch the callback for a given msgid, if it exists, otherwise,\n return None"}
{"_id": "q_6292", "text": "Get multiple messages, in case we're going through the various\n XHR-polling methods, on which we can pack more than one message if the\n rate is high, and encode the payload for the HTTP channel."}
{"_id": "q_6293", "text": "This removes a Namespace object from the socket.\n\n This is usually called by\n :meth:`~socketio.namespace.BaseNamespace.disconnect`."}
{"_id": "q_6294", "text": "Low-level interface to queue a packet on the wire (encoded as wire\n protocol"}
{"_id": "q_6295", "text": "Spawn a new Greenlet, attached to this Socket instance.\n\n It will be monitored by the \"watcher\" method"}
{"_id": "q_6296", "text": "Start the heartbeat Greenlet to check connection health."}
{"_id": "q_6297", "text": "You should always use this function to call the methods,\n as it checks if the user is allowed according to the ACLs.\n\n If you override :meth:`process_packet` or\n :meth:`process_event`, you should definitely want to use this\n instead of ``getattr(self, 'my_method')()``"}
{"_id": "q_6298", "text": "Use this to use the configured ``error_handler`` yield an\n error message to your application.\n\n :param error_name: is a short string, to associate messages to recovery\n methods\n :param error_message: is some human-readable text, describing the error\n :param msg_id: is used to associate with a request\n :param quiet: specific to error_handlers. The default doesn't send a\n message to the user, but shows a debug message on the\n developer console."}
{"_id": "q_6299", "text": "Use send to send a simple string message.\n\n If ``json`` is True, the message will be encoded as a JSON object\n on the wire, and decoded on the other side.\n\n This is mostly for backwards compatibility. ``emit()`` is more fun.\n\n :param callback: This is a callback function that will be\n called automatically by the client upon\n reception. It does not verify that the\n listener over there was completed with\n success. It just tells you that the browser\n got a hold of the packet.\n :type callback: callable"}
{"_id": "q_6300", "text": "Spawn a new process, attached to this Namespace.\n\n It will be monitored by the \"watcher\" process in the Socket. If the\n socket disconnects, all these greenlets are going to be killed, after\n calling BaseNamespace.disconnect()\n\n This method uses the ``exception_handler_decorator``. See\n Namespace documentation for more information."}
{"_id": "q_6301", "text": "Return an existing or new client Socket."}
{"_id": "q_6302", "text": "Handles post from the \"Add room\" form on the homepage, and\n redirects to the new room."}
{"_id": "q_6303", "text": "This will fetch the messages from the Socket's queue, and if\n there are many messes, pack multiple messages in one payload and return"}
{"_id": "q_6304", "text": "Just quote out stuff before sending it out"}
{"_id": "q_6305", "text": "This is sent to all in the sockets in this particular Namespace,\n including itself."}
{"_id": "q_6306", "text": "Add a parent to this role,\n and add role itself to the parent's children set.\n you should override this function if neccessary.\n\n Example::\n\n logged_user = RoleMixin('logged_user')\n student = RoleMixin('student')\n student.add_parent(logged_user)\n\n :param parent: Parent role to add in."}
{"_id": "q_6307", "text": "Add allowing rules.\n\n :param role: Role of this rule.\n :param method: Method to allow in rule, include GET, POST, PUT etc.\n :param resource: Resource also view function.\n :param with_children: Allow role's children in rule as well\n if with_children is `True`"}
{"_id": "q_6308", "text": "Add denying rules.\n\n :param role: Role of this rule.\n :param method: Method to deny in rule, include GET, POST, PUT etc.\n :param resource: Resource also view function.\n :param with_children: Deny role's children in rule as well\n if with_children is `True`"}
{"_id": "q_6309", "text": "Check whether role is allowed to access resource\n\n :param role: Role to be checked.\n :param method: Method to be checked.\n :param resource: View function to be checked."}
{"_id": "q_6310", "text": "Check wherther role is denied to access resource\n\n :param role: Role to be checked.\n :param method: Method to be checked.\n :param resource: View function to be checked."}
{"_id": "q_6311", "text": "This is a decorator function.\n\n You can allow roles to access the view func with it.\n\n An example::\n\n @app.route('/website/setting', methods=['GET', 'POST'])\n @rbac.allow(['administrator', 'super_user'], ['GET', 'POST'])\n def website_setting():\n return Response('Setting page.')\n\n :param roles: List, each name of roles. Please note that,\n `anonymous` is refered to anonymous.\n If you add `anonymous` to the rule,\n everyone can access the resource,\n unless you deny other roles.\n :param methods: List, each name of methods.\n methods is valid in ['GET', 'POST', 'PUT', 'DELETE']\n :param with_children: Whether allow children of roles as well.\n True by default."}
{"_id": "q_6312", "text": "Given a string and a category, finds and combines words into\n groups based on their proximity.\n\n Args:\n text (str): Some text.\n tokens (list): A list of regex strings.\n\n Returns:\n list. The combined strings it found.\n\n Example:\n COLOURS = [r\"red(?:dish)?\", r\"grey(?:ish)?\", r\"green(?:ish)?\"]\n s = 'GREYISH-GREEN limestone with RED or GREY sandstone.'\n find_word_groups(s, COLOURS) --> ['greyish green', 'red', 'grey']"}
{"_id": "q_6313", "text": "Given a string and a dict of synonyms, returns the 'preferred'\n word. Case insensitive.\n\n Args:\n word (str): A word.\n\n Returns:\n str: The preferred word, or the input word if not found.\n\n Example:\n >>> syn = {'snake': ['python', 'adder']}\n >>> find_synonym('adder', syn)\n 'snake'\n >>> find_synonym('rattler', syn)\n 'rattler'\n\n TODO:\n Make it handle case, returning the same case it received."}
{"_id": "q_6314", "text": "Parse a piece of text and replace any abbreviations with their full\n word equivalents. Uses the lexicon.abbreviations dictionary to find\n abbreviations.\n\n Args:\n text (str): The text to parse.\n\n Returns:\n str: The text with abbreviations replaced."}
{"_id": "q_6315", "text": "Split a description into parts, each of which can be turned into\n a single component."}
{"_id": "q_6316", "text": "Returns a minimal Decor with a random colour."}
{"_id": "q_6317", "text": "Make a simple plot of the Decor.\n\n Args:\n fmt (str): A Python format string for the component summaries.\n fig (Pyplot figure): A figure, optional. Use either fig or ax, not\n both.\n ax (Pyplot axis): An axis, optional. Use either fig or ax, not\n both.\n\n Returns:\n fig or ax or None. If you pass in an ax, you get it back. If you pass\n in a fig, you get it. If you pass nothing, the function creates a\n plot object as a side-effect."}
{"_id": "q_6318", "text": "Generate a default legend.\n\n Args:\n name (str): The name of the legend you want. Not case sensitive.\n 'nsdoe': Nova Scotia Dept. of Energy\n 'canstrat': Canstrat\n 'nagmdm__6_2': USGS N. Am. Geol. Map Data Model 6.2\n 'nagmdm__6_1': USGS N. Am. Geol. Map Data Model 6.1\n 'nagmdm__4_3': USGS N. Am. Geol. Map Data Model 4.3\n 'sgmc': USGS State Geologic Map Compilation\n\n Default 'nagmdm__6_2'.\n\n Returns:\n Legend: The legend stored in `defaults.py`."}
{"_id": "q_6319", "text": "Generate a random legend for a given list of components.\n\n Args:\n components (list or Striplog): A list of components. If you pass\n a Striplog, it will use the primary components. If you pass a\n component on its own, you will get a random Decor.\n width (bool): Also generate widths for the components, based on the\n order in which they are encountered.\n colour (str): If you want to give the Decors all the same colour,\n provide a hex string.\n Returns:\n Legend or Decor: A legend (or Decor) with random colours.\n TODO:\n It might be convenient to have a partial method to generate an\n 'empty' legend. Might be an easy way for someone to start with a\n template, since it'll have the components in it already."}
{"_id": "q_6320", "text": "A slightly easier way to make legends from images.\n\n Args:\n filename (str)\n components (list)\n ignore (list): Colours to ignore, e.g. \"#FFFFFF\" to ignore white.\n col_offset (Number): If < 1, interpreted as proportion of way\n across the image. If > 1, interpreted as pixels from left.\n row_offset (int): Number of pixels to skip at the top of each\n interval."}
{"_id": "q_6321", "text": "Read CSV text and generate a Legend.\n\n Args:\n string (str): The CSV string.\n\n In the first row, list the properties. Precede the properties of the\n component with 'comp ' or 'component '. For example:\n\n colour, width, comp lithology, comp colour\n #FFFFFF, 0, ,\n #F7E9A6, 3, Sandstone, Grey\n #FF99CC, 2, Anhydrite,\n ... etc\n\n Note:\n To edit a legend, the easiest thing to do is probably this:\n\n - `legend.to_csv()`\n - Edit the legend, call it `new_legend`.\n - `legend = Legend.from_csv(text=new_legend)`"}
{"_id": "q_6322", "text": "Renders a legend as a CSV string.\n\n No arguments.\n\n Returns:\n str: The legend as a CSV."}
{"_id": "q_6323", "text": "The maximum width of all the Decors in the Legend. This is needed\n to scale a Legend or Striplog when plotting with widths turned on."}
{"_id": "q_6324", "text": "Get the decor for a component.\n\n Args:\n c (component): The component to look up.\n match_only (list of str): The component attributes to include in the\n comparison. Default: All of them.\n\n Returns:\n Decor. The matching Decor from the Legend, or None if not found."}
{"_id": "q_6325", "text": "Get the component corresponding to a display colour. This is for\n generating a Striplog object from a colour image of a striplog.\n\n Args:\n colour (str): The hex colour string to look up.\n tolerance (float): The colourspace distance within which to match.\n default (component or None): The component to return in the event\n of no match.\n\n Returns:\n component. The component best matching the provided colour."}
{"_id": "q_6326", "text": "Generate a Component from a text string, using a Lexicon.\n\n Args:\n text (str): The text string to parse.\n lexicon (Lexicon): The dictionary to use for the\n categories and lexemes.\n required (str): An attribute that we must have. If a required\n attribute is missing from the component, then None is returned.\n first_only (bool): Whether to only take the first\n match of a lexeme against the text string.\n\n Returns:\n Component: A Component object, or None if there was no\n must-have field."}
{"_id": "q_6327", "text": "Given a format string, return a summary description of a component.\n\n Args:\n component (dict): A component dictionary.\n fmt (str): Describes the format with a string. If no format is\n given, you will just get a list of attributes. If you give the\n empty string (''), you'll get `default` back. By default this\n gives you the empty string, effectively suppressing the\n summary.\n initial (bool): Whether to capitialize the first letter. Default is\n True.\n default (str): What to give if there's no component defined.\n\n Returns:\n str: A summary string.\n\n Example:\n\n r = Component({'colour': 'Red',\n 'grainsize': 'VF-F',\n 'lithology': 'Sandstone'})\n\n r.summary() --> 'Red, vf-f, sandstone'"}
{"_id": "q_6328", "text": "Graceful deprecation for old class name."}
{"_id": "q_6329", "text": "Processes a single row from the file."}
{"_id": "q_6330", "text": "Read all the rows and return a dict of the results."}
{"_id": "q_6331", "text": "Private method. Checks if striplog is monotonically increasing in\n depth.\n\n Returns:\n Bool."}
{"_id": "q_6332", "text": "Property. Summarize a Striplog with some statistics.\n\n Returns:\n List. A list of (Component, total thickness thickness) tuples."}
{"_id": "q_6333", "text": "Private method. Take a sequence of tops in an arbitrary dimension,\n and provide a list of intervals from which a striplog can be made.\n\n This is only intended to be used by ``from_image()``.\n\n Args:\n tops (iterable). A list of floats.\n values (iterable). A list of values to look up.\n basis (iterable). A list of components.\n components (iterable). A list of Components.\n\n Returns:\n List. A list of Intervals."}
{"_id": "q_6334", "text": "Private function. Make sure we have what we need to make a striplog."}
{"_id": "q_6335", "text": "Private function. Takes a data dictionary and reconstructs a list\n of Intervals from it.\n\n Args:\n data_dict (dict)\n stop (float): Where to end the last interval.\n points (bool)\n include (dict)\n exclude (dict)\n ignore (list)\n lexicon (Lexicon)\n\n Returns:\n list."}
{"_id": "q_6336", "text": "Load from a CSV file or text."}
{"_id": "q_6337", "text": "Turn a 1D array into a striplog, given a cutoff.\n\n Args:\n log (array-like): A 1D array or a list of integers.\n cutoff (number or array-like): The log value(s) at which to bin\n the log. Optional.\n components (array-like): A list of components. Use this or\n ``legend``.\n legend (``Legend``): A legend object. Use this or ``components``.\n legend_field ('str'): If you're not trying to match against\n components, then you can match the log values to this field in\n the Decors.\n field (str): The field in the Interval's ``data`` to store the log\n values as.\n right (bool): Which side of the cutoff to send things that are\n equal to, i.e. right on, the cutoff.\n basis (array-like): A depth basis for the log, so striplog knows\n where to put the boundaries.\n source (str): The source of the data. Default 'Log'.\n\n Returns:\n Striplog: The ``striplog`` object."}
{"_id": "q_6338", "text": "Turn LAS3 'lithology' section into a Striplog.\n\n Args:\n string (str): A section from an LAS3 file.\n lexicon (Lexicon): The language for conversion to components.\n source (str): A source for the data.\n dlm (str): The delimiter.\n abbreviations (bool): Whether to expand abbreviations.\n\n Returns:\n Striplog: The ``striplog`` object.\n\n Note:\n Handles multiple 'Data' sections. It would be smarter for it\n to handle one at a time, and to deal with parsing the multiple\n sections in the Well object.\n\n Does not read an actual LAS file. Use the Well object for that."}
{"_id": "q_6339", "text": "Eat a Canstrat DAT file and make a striplog."}
{"_id": "q_6340", "text": "Returns a shallow copy."}
{"_id": "q_6341", "text": "Returns an LAS 3.0 section string.\n\n Args:\n use_descriptions (bool): Whether to use descriptions instead\n of summaries, if available.\n dlm (str): The delimiter.\n source (str): The sourse of the data.\n\n Returns:\n str: A string forming Lithology section of an LAS3 file."}
{"_id": "q_6342", "text": "Get data from the striplog."}
{"_id": "q_6343", "text": "'Extract' a log into the components of a striplog.\n\n Args:\n log (array_like). A log or other 1D data.\n basis (array_like). The depths or elevations of the log samples.\n name (str). The name of the attribute to store in the components.\n function (function). A function that takes an array as the only\n input, and returns whatever you want to store in the 'name'\n attribute of the primary component.\n Returns:\n None. The function works on the striplog in place."}
{"_id": "q_6344", "text": "Look for a regex expression in the descriptions of the striplog.\n If there's no description, it looks in the summaries.\n\n If you pass a Component, then it will search the components, not the\n descriptions or summaries.\n\n Case insensitive.\n\n Args:\n search_term (string or Component): The thing you want to search\n for. Strings are treated as regular expressions.\n index (bool): Whether to return the index instead of the interval.\n Returns:\n Striplog: A striplog that contains only the 'hit' Intervals.\n However, if ``index`` was ``True``, then that's what you get."}
{"_id": "q_6345", "text": "Find overlaps in a striplog.\n\n Args:\n index (bool): If True, returns indices of intervals with\n gaps after them.\n\n Returns:\n Striplog: A striplog of all the overlaps as intervals."}
{"_id": "q_6346", "text": "Finds gaps in a striplog.\n\n Args:\n index (bool): If True, returns indices of intervals with\n gaps after them.\n\n Returns:\n Striplog: A striplog of all the gaps. A sort of anti-striplog."}
{"_id": "q_6347", "text": "Remove intervals below a certain limit thickness. In place.\n\n Args:\n limit (float): Anything thinner than this will be pruned.\n n (int): The n thinnest beds will be pruned.\n percentile (float): The thinnest specified percentile will be\n pruned.\n keep_ends (bool): Whether to keep the first and last, regardless\n of whether they meet the pruning criteria."}
{"_id": "q_6348", "text": "Fill in empty intervals by growing from top and base.\n\n Note that this operation happens in-place and destroys any information\n about the ``Position`` (e.g. metadata associated with the top or base).\n See GitHub issue #54."}
{"_id": "q_6349", "text": "Fill gaps with the component provided.\n\n Example\n t = s.fill(Component({'lithology': 'cheese'}))"}
{"_id": "q_6350", "text": "Makes a striplog of all intersections.\n\n Args:\n Striplog. The striplog instance to intersect with.\n\n Returns:\n Striplog. The result of the intersection."}
{"_id": "q_6351", "text": "Merges overlaps by merging overlapping Intervals.\n\n The function takes no arguments and returns ``None``. It operates on\n the striplog 'in place'\n\n TODO: This function will not work if any interval overlaps more than\n one other intervals at either its base or top."}
{"_id": "q_6352", "text": "Plots a histogram and returns the data for it.\n\n Args:\n lumping (str): If given, the bins will be lumped based on this\n attribute of the primary components of the intervals\n encountered.\n summary (bool): If True, the summaries of the components are\n returned as the bins. Otherwise, the default behaviour is to\n return the Components themselves.\n sort (bool): If True (default), the histogram is sorted by value,\n starting with the largest.\n plot (bool): If True (default), produce a bar plot.\n legend (Legend): The legend with which to colour the bars.\n ax (axis): An axis object, which will be returned if provided.\n If you don't provide one, it will be created but not returned.\n\n Returns:\n Tuple: A tuple of tuples of entities and counts.\n\n TODO:\n Deal with numeric properties, so I can histogram 'Vp' values, say."}
{"_id": "q_6353", "text": "Inverts the striplog, changing its order and the order of its contents.\n\n Operates in place by default.\n\n Args:\n copy (bool): Whether to operate in place or make a copy.\n\n Returns:\n None if operating in-place, or an inverted copy of the striplog\n if not."}
{"_id": "q_6354", "text": "Run a series of tests and return the corresponding results.\n\n Based on curve testing for ``welly``.\n\n Args:\n tests (list): a list of functions.\n\n Returns:\n list. The results. Stick to booleans (True = pass) or ints."}
{"_id": "q_6355", "text": "Get a log-like stream of RGB values from an image.\n\n Args:\n filename (str): The filename of a PNG image.\n offset (Number): If < 1, interpreted as proportion of way across\n the image. If > 1, interpreted as pixels from left.\n\n Returns:\n ndarray: A 2d array (a column of RGB triples) at the specified\n offset.\n\n TODO:\n Generalize this to extract 'logs' from images in other ways, such\n as giving the mean of a range of pixel columns, or an array of\n columns. See also a similar routine in pythonanywhere/freqbot."}
{"_id": "q_6356", "text": "Return an underscore if the attribute is absent.\n Not all components have the same attributes."}
{"_id": "q_6357", "text": "Lists all the jobs registered with Nomad.\n\n https://www.nomadproject.io/docs/http/jobs.html\n arguments:\n - prefix :(str) optional, specifies a string to filter jobs on based on an prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6358", "text": "Parse a HCL Job file. Returns a dict with the JSON formatted job.\n This API endpoint is only supported from Nomad version 0.8.3.\n\n https://www.nomadproject.io/api/jobs.html#parse-job\n\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6359", "text": "Update token.\n\n https://www.nomadproject.io/api/acl-tokens.html\n\n arguments:\n - AccdesorID\n - token\n returns: dict\n\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6360", "text": "Lists all the allocations.\n\n https://www.nomadproject.io/docs/http/allocs.html\n arguments:\n - prefix :(str) optional, specifies a string to filter allocations on based on an prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6361", "text": "This endpoint is used to mark a deployment as failed. This should be done to force the scheduler to stop\n creating allocations as part of the deployment or to cause a rollback to a previous job version.\n\n https://www.nomadproject.io/docs/http/deployments.html\n\n arguments:\n - id\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6362", "text": "Toggle the drain mode of the node.\n When enabled, no further allocations will be\n assigned and existing allocations will be migrated.\n\n https://www.nomadproject.io/docs/http/node.html\n\n arguments:\n - id (str uuid): node id\n - enable (bool): enable node drain or not to enable node drain\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6363", "text": "This endpoint toggles the drain mode of the node. When draining is enabled,\n no further allocations will be assigned to this node, and existing allocations\n will be migrated to new nodes.\n\n If an empty dictionary is given as drain_spec this will disable/toggle the drain.\n\n https://www.nomadproject.io/docs/http/node.html\n\n arguments:\n - id (str uuid): node id\n - drain_spec (dict): https://www.nomadproject.io/api/nodes.html#drainspec\n - mark_eligible (bool): https://www.nomadproject.io/api/nodes.html#markeligible\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6364", "text": "Toggle the eligibility of the node.\n\n https://www.nomadproject.io/docs/http/node.html\n\n arguments:\n - id (str uuid): node id\n - eligible (bool): Set to True to mark node eligible\n - ineligible (bool): Set to True to mark node ineligible\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6365", "text": "This endpoint streams the contents of a file in an allocation directory.\n\n https://www.nomadproject.io/api/client.html#stream-file\n\n arguments:\n - id: (str) allocation_id required\n - offset: (int) required\n - origin: (str) either start|end\n - path: (str) optional\n returns: (str) text\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.BadRequestNomadException"}
{"_id": "q_6366", "text": "Stat a file in an allocation directory.\n\n https://www.nomadproject.io/docs/http/client-fs-stat.html\n\n arguments:\n - id\n - path\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6367", "text": "Initiate a join between the agent and target peers.\n\n https://www.nomadproject.io/docs/http/agent-join.html\n\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6368", "text": "Lists all the evaluations.\n\n https://www.nomadproject.io/docs/http/evals.html\n arguments:\n - prefix :(str) optional, specifies a string to filter evaluations on based on an prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6369", "text": "Lists all the namespaces registered with Nomad.\n\n https://www.nomadproject.io/docs/enterprise/namespaces/index.html\n arguments:\n - prefix :(str) optional, specifies a string to filter namespaces on based on an prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6370", "text": "Dispatches a new instance of a parameterized job.\n\n https://www.nomadproject.io/docs/http/job.html\n\n arguments:\n - id\n - payload\n - meta\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6371", "text": "Deregisters a job, and stops all allocations part of it.\n\n https://www.nomadproject.io/docs/http/job.html\n\n arguments:\n - id\n - purge (bool), optionally specifies whether the job should be\n stopped and purged immediately (`purge=True`) or deferred to the\n Nomad garbage collector (`purge=False`).\n\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n - nomad.api.exceptions.InvalidParameters"}
{"_id": "q_6372", "text": "Query the status of a client node registered with Nomad.\n\n https://www.nomadproject.io/docs/http/operator.html\n\n returns: dict\n optional arguments:\n - stale, (defaults to False), Specifies if the cluster should respond without an active leader.\n This is specified as a querystring parameter.\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6373", "text": "Remove the Nomad server with given address from the Raft configuration.\n The return code signifies success or failure.\n\n https://www.nomadproject.io/docs/http/operator.html\n\n arguments:\n - peer_address, The address specifies the server to remove and is given as an IP:port\n optional arguments:\n - stale, (defaults to False), Specifies if the cluster should respond without an active leader.\n This is specified as a querystring parameter.\n returns: Boolean\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6374", "text": "This endpoint lists all deployments.\n\n https://www.nomadproject.io/docs/http/deployments.html\n\n optional_arguments:\n - prefix, (default \"\") Specifies a string to filter deployments on based on an index prefix.\n This is specified as a querystring parameter.\n\n returns: list of dicts\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"}
{"_id": "q_6375", "text": "Get a random mutator from a list of mutators"}
{"_id": "q_6376", "text": "Return a polyglot attack containing the original object"}
{"_id": "q_6377", "text": "Perform the fuzzing"}
{"_id": "q_6378", "text": "Safely return an unicode encoded string"}
{"_id": "q_6379", "text": "Kill the servers"}
{"_id": "q_6380", "text": "Serve custom HTML page"}
{"_id": "q_6381", "text": "Serve fuzzed JSON object"}
{"_id": "q_6382", "text": "Generic fuzz mutator, use a decorator for the given type"}
{"_id": "q_6383", "text": "Spawn a new process using subprocess"}
{"_id": "q_6384", "text": "Try to get output in a separate thread"}
{"_id": "q_6385", "text": "Wait until we got output or until timeout is over"}
{"_id": "q_6386", "text": "Terminate the newly created process"}
{"_id": "q_6387", "text": "Parse the command line and start PyJFuzz"}
{"_id": "q_6388", "text": "Perform the actual external fuzzing, you may replace this method in order to increase performance"}
{"_id": "q_6389", "text": "Build the ``And`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."}
{"_id": "q_6390", "text": "Build the ``Quote`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."}
{"_id": "q_6391", "text": "Build the ``Or`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."}
{"_id": "q_6392", "text": "Build the current ``Opt`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."}
{"_id": "q_6393", "text": "Build the STAR field.\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."}
{"_id": "q_6394", "text": "Shutdown the running process and the monitor"}
{"_id": "q_6395", "text": "Run command in a loop and check exit status plus restart process when needed"}
{"_id": "q_6396", "text": "Fuzz all elements inside the object"}
{"_id": "q_6397", "text": "Mutate a generic object based on type"}
{"_id": "q_6398", "text": "When we get term signal\n if we are waiting and got a sigterm, we just exit.\n if we have a child running, we pass the signal first to the child\n then we exit.\n\n :param signum:\n :param frame:\n :return:"}
{"_id": "q_6399", "text": "\\\n if we have a running child we kill it and set our state to paused\n if we don't have a running child, we set our state to paused\n this will pause all the nodes in single-beat cluster\n\n its useful when you deploy some code and don't want your child to spawn\n randomly\n\n :param msg:\n :return:"}
{"_id": "q_6400", "text": "\\\n sets state to waiting - so we resume spawning children"}
{"_id": "q_6401", "text": "\\\n stops the running child process - if its running\n it will re-spawn in any single-beat node after sometime\n\n :param msg:\n :return:"}
{"_id": "q_6402", "text": "\\\n restart the subprocess\n i. we set our state to RESTARTING - on restarting we still send heartbeat\n ii. we kill the subprocess\n iii. we start again\n iv. if its started we set our state to RUNNING, else we set it to WAITING\n\n :param msg:\n :return:"}
{"_id": "q_6403", "text": "Close the connection to the TwinCAT message router."}
{"_id": "q_6404", "text": "Return the local AMS-address and the port number.\n\n :rtype: pyads.structs.AmsAddr\n :return: AMS-address"}
{"_id": "q_6405", "text": "Read data synchronous from an ADS-device.\n\n :param int port: local AMS port as returned by adsPortOpenEx()\n :param pyads.structs.AmsAddr address: local or remote AmsAddr\n :param int index_group: PLC storage area, according to the INDEXGROUP\n constants\n :param int index_offset: PLC storage address\n :param Type data_type: type of the data given to the PLC, according to\n PLCTYPE constants\n :param bool return_ctypes: return ctypes instead of python types if True\n (default: False)\n :rtype: data_type\n :return: value: **value**"}
{"_id": "q_6406", "text": "Remove a device notification.\n\n :param int port: local AMS port as returned by adsPortOpenEx()\n :param pyads.structs.AmsAddr adr: local or remote AmsAddr\n :param int notification_handle: Notification Handle\n :param int user_handle: User Handle"}
{"_id": "q_6407", "text": "Set Timeout.\n\n :param int port: local AMS port as returned by adsPortOpenEx()\n :param int nMs: timeout in ms"}
{"_id": "q_6408", "text": "Removes `node` from the hash ring and its replicas."}
{"_id": "q_6409", "text": "Return a new Lock object using key ``name`` that mimics\n the behavior of threading.Lock.\n\n If specified, ``timeout`` indicates a maximum life for the lock.\n By default, it will remain locked until release() is called.\n\n ``sleep`` indicates the amount of time to sleep per loop iteration\n when the lock is in blocking mode and another client is currently\n holding the lock."}
{"_id": "q_6410", "text": "Retrieve a list of events since the last poll. Multiple calls may be needed to retrieve all events.\n\n If no events occur, the API will block for up to 30 seconds, after which an empty list is returned. As soon as\n an event is received in this time, it is returned immediately.\n\n Returns:\n :class:`.SkypeEvent` list: a list of events, possibly empty"}
{"_id": "q_6411", "text": "Retrieve various metadata associated with a URL, as seen by Skype.\n\n Args:\n url (str): address to ping for info\n\n Returns:\n dict: metadata for the website queried"}
{"_id": "q_6412", "text": "Retrieve all details for a specific contact, including fields such as birthday and mood.\n\n Args:\n id (str): user identifier to lookup\n\n Returns:\n SkypeContact: resulting contact object"}
{"_id": "q_6413", "text": "Retrieve a list of all known bots.\n\n Returns:\n SkypeBotUser list: resulting bot user objects"}
{"_id": "q_6414", "text": "Retrieve a single bot.\n\n Args:\n id (str): UUID or username of the bot\n\n Returns:\n SkypeBotUser: resulting bot user object"}
{"_id": "q_6415", "text": "Search the Skype Directory for a user.\n\n Args:\n query (str): name to search for\n\n Returns:\n SkypeUser list: collection of possible results"}
{"_id": "q_6416", "text": "Retrieve any pending contact requests.\n\n Returns:\n :class:`SkypeRequest` list: collection of requests"}
{"_id": "q_6417", "text": "Create a new instance based on the raw properties of an API response.\n\n This can be overridden to automatically create subclass instances based on the raw content.\n\n Args:\n skype (Skype): parent Skype instance\n raw (dict): raw object, as provided by the API\n\n Returns:\n SkypeObj: the new class instance"}
{"_id": "q_6418", "text": "Copy properties from other into self, skipping ``None`` values. Also merges the raw data.\n\n Args:\n other (SkypeObj): second object to copy fields from"}
{"_id": "q_6419", "text": "Add a given object to the cache, or update an existing entry to include more fields.\n\n Args:\n obj (SkypeObj): object to add to the cache"}
{"_id": "q_6420", "text": "Follow and track sync state URLs provided by an API endpoint, in order to implicitly handle pagination.\n\n In the first call, ``url`` and ``params`` are used as-is. If a ``syncState`` endpoint is provided in the\n response, subsequent calls go to the latest URL instead.\n\n Args:\n method (str): HTTP request method\n url (str): full URL to connect to\n params (dict): query parameters to include in the URL\n kwargs (dict): any extra parameters to pass to :meth:`__call__`"}
{"_id": "q_6421", "text": "Store details of the current connection in the named file.\n\n This can be used by :meth:`readToken` to re-authenticate at a later time."}
{"_id": "q_6422", "text": "Ensure the authentication token for the given auth method is still valid.\n\n Args:\n auth (Auth): authentication type to check\n\n Raises:\n .SkypeAuthException: if Skype auth is required, and the current token has expired and can't be renewed"}
{"_id": "q_6423", "text": "Take the existing Skype token and refresh it, to extend the expiry time without other credentials.\n\n Raises:\n .SkypeAuthException: if the login request is rejected\n .SkypeApiException: if the login form can't be processed"}
{"_id": "q_6424", "text": "Acquire a new registration token.\n\n Once successful, all tokens and expiry times are written to the token file (if specified on initialisation)."}
{"_id": "q_6425", "text": "Retrieve all current endpoints for the connected user."}
{"_id": "q_6426", "text": "Query a username or email address to see if a corresponding Microsoft account exists.\n\n Args:\n user (str): username or email address of an account\n\n Returns:\n bool: whether the account exists"}
{"_id": "q_6427", "text": "Request a new registration token using a current Skype token.\n\n Args:\n skypeToken (str): existing Skype token\n\n Returns:\n (str, datetime.datetime, str, SkypeEndpoint) tuple: registration token, associated expiry if known,\n resulting endpoint hostname, endpoint if provided\n\n Raises:\n .SkypeAuthException: if the login request is rejected\n .SkypeApiException: if the login form can't be processed"}
{"_id": "q_6428", "text": "Configure this endpoint to allow setting presence.\n\n Args:\n name (str): display name for this endpoint"}
{"_id": "q_6429", "text": "Send a keep-alive request for the endpoint.\n\n Args:\n timeout (int): maximum amount of time for the endpoint to stay active"}
{"_id": "q_6430", "text": "Retrieve a selection of conversations with the most recent activity, and store them in the cache.\n\n Each conversation is only retrieved once, so subsequent calls will retrieve older conversations.\n\n Returns:\n :class:`SkypeChat` list: collection of recent conversations"}
{"_id": "q_6431", "text": "Get a single conversation by identifier.\n\n Args:\n id (str): single or group chat identifier"}
{"_id": "q_6432", "text": "Create a new group chat with the given users.\n\n The current user is automatically added to the conversation as an admin. Any other admin identifiers must also\n be present in the member list.\n\n Args:\n members (str list): user identifiers to initially join the conversation\n admins (str list): user identifiers to gain admin privileges"}
{"_id": "q_6433", "text": "Extract the username from a contact URL.\n\n Matches addresses containing ``users/<user>`` or ``users/ME/contacts/<user>``.\n\n Args:\n url (str): Skype API URL\n\n Returns:\n str: extracted identifier"}
{"_id": "q_6434", "text": "Extract the conversation ID from a conversation URL.\n\n Matches addresses containing ``conversations/<chat>``.\n\n Args:\n url (str): Skype API URL\n\n Returns:\n str: extracted identifier"}
{"_id": "q_6435", "text": "Repeatedly call a function, starting with init, until false-y, yielding each item in turn.\n\n The ``transform`` parameter can be used to map a collection to another format, for example iterating over a\n :class:`dict` by value rather than key.\n\n Use with state-synced functions to retrieve all results.\n\n Args:\n fn (method): function to call\n transform (method): secondary function to convert result into an iterable\n args (list): positional arguments to pass to ``fn``\n kwargs (dict): keyword arguments to pass to ``fn``\n\n Returns:\n generator: generator of objects produced from the method"}
{"_id": "q_6436", "text": "Return a language-server diagnostic from a line of the Mypy error report;\n optionally, use the whole document to provide more context on it."}
{"_id": "q_6437", "text": "Return unicode text, no matter what"}
{"_id": "q_6438", "text": "Figure out which handler to use, based on metadata.\n Returns a handler instance or None.\n\n ``text`` should be unicode text about to be parsed.\n\n ``handlers`` is a dictionary where keys are opening delimiters \n and values are handler instances."}
{"_id": "q_6439", "text": "Parse text with frontmatter, return metadata and content.\n Pass in optional metadata defaults as keyword args.\n\n If frontmatter is not found, returns an empty metadata dictionary\n (or defaults) and original text content.\n\n ::\n\n >>> with open('tests/hello-world.markdown') as f:\n ... metadata, content = frontmatter.parse(f.read())\n >>> print(metadata['title'])\n Hello, world!"}
{"_id": "q_6440", "text": "Post as a dict, for serializing"}
{"_id": "q_6441", "text": "Parse YAML front matter. This uses yaml.SafeLoader by default."}
{"_id": "q_6442", "text": "Export metadata as YAML. This uses yaml.SafeDumper by default."}
{"_id": "q_6443", "text": "Establishes a connection to the Lavalink server."}
{"_id": "q_6444", "text": "Waits to receive a payload from the Lavalink server and processes it."}
{"_id": "q_6445", "text": "Returns the voice channel the player is connected to."}
{"_id": "q_6446", "text": "Connects to a voice channel."}
{"_id": "q_6447", "text": "Disconnects from the voice channel, if any."}
{"_id": "q_6448", "text": "Stores custom user data."}
{"_id": "q_6449", "text": "Adds a track to beginning of the queue"}
{"_id": "q_6450", "text": "Adds a track at a specific index in the queue."}
{"_id": "q_6451", "text": "Plays previous track if it exist, if it doesn't raises a NoPreviousTrack error."}
{"_id": "q_6452", "text": "Seeks to a given position in the track."}
{"_id": "q_6453", "text": "Makes the player play the next song from the queue if a song has finished or an issue occurred."}
{"_id": "q_6454", "text": "Returns a player from the cache, or creates one if it does not exist."}
{"_id": "q_6455", "text": "Searches and plays a song from a given query."}
{"_id": "q_6456", "text": "Shows the player's queue."}
{"_id": "q_6457", "text": "Removes an item from the player's queue with the given index."}
{"_id": "q_6458", "text": "A few checks to make sure the bot can join a voice channel."}
{"_id": "q_6459", "text": "Dispatches an event to all registered hooks."}
{"_id": "q_6460", "text": "Returns a Dictionary containing search results for a given query."}
{"_id": "q_6461", "text": "Destroys the Lavalink client."}
{"_id": "q_6462", "text": "Plays immediately a song."}
{"_id": "q_6463", "text": "Plays the queue from a specific point. Disregards tracks before the index."}
{"_id": "q_6464", "text": "Return the match object for the current list."}
{"_id": "q_6465", "text": "Return items as a list of strings.\n\n Don't include sub-items and the start pattern."}
{"_id": "q_6466", "text": "Return the Lists inside the item with the given index.\n\n :param i: The index of the item whose sub-lists are desired.\n The performance is likely to be better if `i` is None.\n\n :param pattern: The starting symbol for the desired sub-lists.\n The `pattern` of the current list will be automatically added\n as prefix.\n Although this parameter is optional, specifying it can improve\n the performance."}
{"_id": "q_6467", "text": "Convert to another list type by replacing starting pattern."}
{"_id": "q_6468", "text": "Parse template content. Create self.name and self.arguments."}
{"_id": "q_6469", "text": "Return the lists in all arguments.\n\n For performance reasons it is usually preferred to get a specific\n Argument and use the `lists` method of that argument instead."}
{"_id": "q_6470", "text": "Create a Trie out of a list of words and return an atomic regex pattern.\n\n The corresponding Regex should match much faster than a simple Regex union."}
{"_id": "q_6471", "text": "Insert the given string before the specified index.\n\n This method has the same effect as ``self[index:index] = string``;\n it only avoids some condition checks as it rules out the possibility\n of the key being a slice, or the need to shrink any of the sub-spans.\n\n If parse is False, don't parse the inserted string."}
{"_id": "q_6472", "text": "Partition self.string where `char`'s not in atomic sub-spans."}
{"_id": "q_6473", "text": "Return all the sub-spans including self._span."}
{"_id": "q_6474", "text": "Update self._type_to_spans according to the removed span.\n\n Warning: If an operation involves both _shrink_update and\n _insert_update, you may want to consider doing the\n _insert_update before the _shrink_update as this function\n can cause data loss in self._type_to_spans."}
{"_id": "q_6475", "text": "Return the nesting level of self.\n\n The minimum nesting_level is 0. Being part of any Template or\n ParserFunction increases the level by one."}
{"_id": "q_6476", "text": "Return a copy of self.string with specific sub-spans replaced.\n\n Comment blocks are replaced by spaces. Other sub-spans are replaced\n by underscores.\n\n The replaced sub-spans are: (\n 'Template', 'WikiLink', 'ParserFunction', 'ExtensionTag',\n 'Comment',\n )\n\n This function is called upon extracting tables or extracting the data\n inside them."}
{"_id": "q_6477", "text": "Replace the invalid chars of SPAN_PARSER_TYPES with b'_'.\n\n For comments, all characters are replaced, but for ('Template',\n 'ParserFunction', 'Parameter') only invalid characters are replaced."}
{"_id": "q_6478", "text": "Create the arguments for the parse function used in pformat method.\n\n Only return sub-spans and change them to fit the new scope, i.e.\n self.string."}
{"_id": "q_6479", "text": "Deprecated, use self.pformat instead."}
{"_id": "q_6480", "text": "Return a list of parameter objects."}
{"_id": "q_6481", "text": "Return a list of templates as template objects."}
{"_id": "q_6482", "text": "Return a list of comment objects."}
{"_id": "q_6483", "text": "Return a list of sections in the current wikitext.\n\n The first section will always be the lead section, even if it is an\n empty string."}
{"_id": "q_6484", "text": "Return a list of found table objects."}
{"_id": "q_6485", "text": "r\"\"\"Return a list of WikiList objects.\n\n :param pattern: The starting pattern for list items.\n Return all types of lists (ol, ul, and dl) if pattern is None.\n If pattern is not None, it will be passed to the regex engine,\n remember to escape the `*` character. Examples:\n\n - `\\#` means top-level ordered lists\n - `\\#\\*` means unordered lists inside an ordered one\n - Currently definition lists are not well supported, but you\n can use `[:;]` as their pattern.\n\n Tips and tricks:\n\n Be careful when using the following patterns as they will\n probably cause malfunction in the `sublists` method of the\n resultant List. (However don't worry about them if you are\n not going to use the `sublists` method.)\n\n - Use `\\*+` as a pattern and nested unordered lists will be\n treated as flat.\n - Use `\\*\\s*` as pattern to rstrip `items` of the list.\n\n Although the pattern parameter is optional, specifying it\n can improve the performance."}
{"_id": "q_6486", "text": "Yield all the sub-span indices excluding self._span."}
{"_id": "q_6487", "text": "Return the parent node of the current object.\n\n :param type_: the type of the desired parent object.\n Currently the following types are supported: {Template,\n ParserFunction, WikiLink, Comment, Parameter, ExtensionTag}.\n The default is None and means the first parent, of any type above.\n :return: parent WikiText object or None if no parent with the desired\n `type_` is found."}
{"_id": "q_6488", "text": "Return normal form of self.name.\n\n - Remove comments.\n - Remove language code.\n - Remove namespace (\"template:\" or any of `localized_namespaces`).\n - Use space instead of underscore.\n - Remove consecutive spaces.\n - Use uppercase for the first letter if `capitalize`.\n - Remove #anchor.\n\n :param rm_namespaces: is used to provide additional localized\n namespaces for the template namespace. They will be removed from\n the result. Default is ('Template',).\n :param capitalize: If True, convert the first letter of the\n template's name to a capital letter. See\n [[mw:Manual:$wgCapitalLinks]] for more info.\n :param code: is the language code.\n :param capital_links: deprecated.\n :param _code: deprecated.\n\n Example:\n >>> Template(\n ... '{{ eN : tEmPlAtE : <!-- c --> t_1 # b | a }}'\n ... ).normal_name(code='en')\n 'T 1'"}
{"_id": "q_6489", "text": "Eliminate duplicate arguments by removing the first occurrences.\n\n Remove the first occurrences of duplicate arguments, regardless of\n their value. Result of the rendered wikitext should remain the same.\n Warning: Some meaningful data may be removed from wikitext.\n\n Also see `rm_dup_args_safe` function."}
{"_id": "q_6490", "text": "Remove duplicate arguments in a safe manner.\n\n Remove the duplicate arguments only in the following situations:\n 1. Both arguments have the same name AND value. (Remove one of\n them.)\n 2. Arguments have the same name and one of them is empty. (Remove\n the empty one.)\n\n Warning: Although this is considered to be safe and no meaningful data\n is removed from wikitext, the result of the rendered wikitext\n may actually change if the second arg is empty and removed but\n the first had had a value.\n\n If `tag` is defined, it should be a string that will be appended to\n the value of the remaining duplicate arguments.\n\n Also see `rm_first_of_dup_args` function."}
{"_id": "q_6491", "text": "Set the value for `name` argument. Add it if it doesn't exist.\n\n - Use `positional`, `before` and `after` keyword arguments only when\n adding a new argument.\n - If `before` is given, ignore `after`.\n - If neither `before` nor `after` are given and it's needed to add a\n new argument, then append the new argument to the end.\n - If `positional` is True, try to add the given value as a positional\n argument. Ignore `preserve_spacing` if positional is True.\n If it's None, do what seems more appropriate."}
{"_id": "q_6492", "text": "Delete all arguments with the given name."}
{"_id": "q_6493", "text": "Add suggestion terms to the AutoCompleter engine. Each suggestion has a score and string.\n\n If kwargs['increment'] is true and the terms are already in the server's dictionary, we increment their scores"}
{"_id": "q_6494", "text": "Get a list of suggestions from the AutoCompleter, for a given prefix\n\n ### Parameters:\n - **prefix**: the prefix we are searching. **Must be valid ascii or utf-8**\n - **fuzzy**: If set to true, the prefix search is done in fuzzy mode. \n **NOTE**: Running fuzzy searches on short (<3 letters) prefixes can be very slow, and even scan the entire index.\n - **with_scores**: if set to true, we also return the (refactored) score of each suggestion. \n This is normally not needed, and is NOT the original score inserted into the index\n - **with_payloads**: Return suggestion payloads\n - **num**: The maximum number of results we return. Note that we might return less. The algorithm trims irrelevant suggestions.\n \n Returns a list of Suggestion objects. If with_scores was False, the score of all suggestions is 1."}
{"_id": "q_6495", "text": "Create the search index. The index must not already exist.\n\n ### Parameters:\n\n - **fields**: a list of TextField or NumericField objects\n - **no_term_offsets**: If true, we will not save term offsets in the index\n - **no_field_flags**: If true, we will not save field flags that allow searching in specific fields\n - **stopwords**: If not None, we create the index with this custom stopword list. The list can be empty"}
{"_id": "q_6496", "text": "Internal add_document used for both batch and single doc indexing"}
{"_id": "q_6497", "text": "Add a single document to the index.\n\n ### Parameters\n\n - **doc_id**: the id of the saved document.\n - **nosave**: if set to true, we just index the document, and don't save a copy of it. This means that searches will just return ids.\n - **score**: the document ranking, between 0.0 and 1.0 \n - **payload**: optional inner-index payload we can save for fast access in scoring functions\n - **replace**: if True, and the document already is in the index, we perform an update and reindex the document\n - **partial**: if True, the fields specified will be added to the existing document.\n This has the added benefit that any fields specified with `no_index`\n will not be reindexed again. Implies `replace`\n - **language**: Specify the language used for document tokenization.\n - **fields** kwargs dictionary of the document fields to be saved and/or indexed. \n NOTE: Geo points should be encoded as strings of \"lon,lat\""}
{"_id": "q_6498", "text": "Delete a document from index\n Returns 1 if the document was deleted, 0 if not"}
{"_id": "q_6499", "text": "Load a single document by id"}
{"_id": "q_6500", "text": "Get info and stats about the current index, including the number of documents, memory consumption, etc"}
{"_id": "q_6501", "text": "Search the index for a given query, and return a result of documents\n\n ### Parameters\n\n - **query**: the search query. Either a text for simple queries with default parameters, or a Query object for complex queries.\n See RediSearch's documentation on query format\n - **snippet_sizes**: A dictionary of {field: snippet_size} used to trim and format the result. e.g. {'body': 500}"}
{"_id": "q_6502", "text": "Issue an aggregation query\n\n ### Parameters\n\n **query**: This can be either an `AggregateRequest`, or a `Cursor`\n\n An `AggregateResult` object is returned. You can access the rows from its\n `rows` property, which will always yield the rows of the result"}
{"_id": "q_6503", "text": "Set the alias for this reducer.\n\n ### Parameters\n\n - **alias**: The value of the alias for this reducer. If this is the\n special value `aggregation.FIELDNAME` then this reducer will be\n aliased using the same name as the field upon which it operates.\n Note that using `FIELDNAME` is only possible on reducers which\n operate on a single field value.\n\n This method returns the `Reducer` object making it suitable for\n chaining."}
{"_id": "q_6504", "text": "Specify by which fields to group the aggregation.\n\n ### Parameters\n\n - **fields**: Fields to group by. This can either be a single string,\n or a list of strings. In both cases, the field should be specified as\n `@field`.\n - **reducers**: One or more reducers. Reducers may be found in the\n `aggregation` module."}
{"_id": "q_6505", "text": "Sets the limit for the most recent group or query.\n\n If no group has been defined yet (via `group_by()`) then this sets\n the limit for the initial pool of results from the query. Otherwise,\n this limits the number of items operated on from the previous group.\n\n Setting a limit on the initial search results may be useful when\n attempting to execute an aggregation on a sample of a large data set.\n\n ### Parameters\n\n - **offset**: Result offset from which to begin paging\n - **num**: Number of results to return\n\n\n Example of limiting the initial results:\n\n ```\n AggregateRequest('@sale_amount:[10000, inf]')\\\n .limit(0, 10)\\\n .group_by('@state', r.count())\n ```\n\n Will only group by the states found in the first 10 results of the\n query `@sale_amount:[10000, inf]`. On the other hand,\n\n ```\n AggregateRequest('@sale_amount:[10000, inf]')\\\n .limit(0, 1000)\\\n .group_by('@state', r.count()\\\n .limit(0, 10)\n ```\n\n Will group all the results matching the query, but only return the\n first 10 groups.\n\n If you only wish to return a *top-N* style query, consider using\n `sort_by()` instead."}
{"_id": "q_6506", "text": "Add a sortby field to the query\n\n - **field** - the name of the field to sort by\n - **asc** - when `True`, sorting will be done in ascending order"}
{"_id": "q_6507", "text": "Indicate that value is a numeric range"}
{"_id": "q_6508", "text": "Bypass transformations.\n\n Parameters\n ----------\n jam : pyjams.JAMS\n A muda-enabled JAMS object\n\n Yields\n ------\n jam_out : pyjams.JAMS iterator\n The first result is `jam` (unmodified), by reference\n All subsequent results are generated by `transformer`"}
{"_id": "q_6509", "text": "Transpose a chord label by some number of semitones\n\n Parameters\n ----------\n label : str\n A chord string\n\n n_semitones : float\n The number of semitones to move `label`\n\n Returns\n -------\n label_transpose : str\n The transposed chord label"}
{"_id": "q_6510", "text": "Pack data into a jams sandbox.\n\n If not already present, this creates a `muda` field within `jam.sandbox`,\n along with `history`, `state`, and version arrays which are populated by\n deformation objects.\n\n Any additional fields can be added to the `muda` sandbox by supplying\n keyword arguments.\n\n Parameters\n ----------\n jam : jams.JAMS\n A JAMS object\n\n Returns\n -------\n jam : jams.JAMS\n The updated JAMS object\n\n Examples\n --------\n >>> jam = jams.JAMS()\n >>> muda.jam_pack(jam, my_data=dict(foo=5, bar=None))\n >>> jam.sandbox\n <Sandbox: muda>\n >>> jam.sandbox.muda\n <Sandbox: state, version, my_data, history>\n >>> jam.sandbox.muda.my_data\n {'foo': 5, 'bar': None}"}
{"_id": "q_6511", "text": "Save a muda jam to disk\n\n Parameters\n ----------\n filename_audio: str\n The path to store the audio file\n\n filename_jam: str\n The path to store the jams object\n\n strict: bool\n Strict safety checking for jams output\n\n fmt : str\n Output format parameter for `jams.JAMS.save`\n\n kwargs\n Additional parameters to `soundfile.write`"}
{"_id": "q_6512", "text": "Reconstruct a transformation or pipeline given a parameter dump."}
{"_id": "q_6513", "text": "Serialize a transformation object or pipeline.\n\n Parameters\n ----------\n transform : BaseTransform or Pipeline\n The transformation object to be serialized\n\n kwargs\n Additional keyword arguments to `jsonpickle.encode()`\n\n Returns\n -------\n json_str : str\n A JSON encoding of the transformation\n\n See Also\n --------\n deserialize\n\n Examples\n --------\n >>> D = muda.deformers.TimeStretch(rate=1.5)\n >>> muda.serialize(D)\n '{\"params\": {\"rate\": 1.5},\n \"__class__\": {\"py/type\": \"muda.deformers.time.TimeStretch\"}}'"}
{"_id": "q_6514", "text": "Construct a muda transformation from a JSON encoded string.\n\n Parameters\n ----------\n encoded : str\n JSON encoding of the transformation or pipeline\n\n kwargs\n Additional keyword arguments to `jsonpickle.decode()`\n\n Returns\n -------\n obj\n The transformation\n\n See Also\n --------\n serialize\n\n Examples\n --------\n >>> D = muda.deformers.TimeStretch(rate=1.5)\n >>> D_serial = muda.serialize(D)\n >>> D2 = muda.deserialize(D_serial)\n >>> D2\n TimeStretch(rate=1.5)"}
{"_id": "q_6515", "text": "Pretty print the dictionary 'params'\n\n Parameters\n ----------\n params: dict\n The dictionary to pretty print\n\n offset: int\n The offset in characters to add at the begin of each line.\n\n printer:\n The function to convert entries to strings, typically\n the builtin str or repr"}
{"_id": "q_6516", "text": "Apply the transformation to audio and annotations.\n\n The input jam is copied and modified, and returned\n contained in a list.\n\n Parameters\n ----------\n jam : jams.JAMS\n A single jam object to modify\n\n Returns\n -------\n jam_list : list\n A length-1 list containing `jam` after transformation\n\n See also\n --------\n core.load_jam_audio"}
{"_id": "q_6517", "text": "Iterative transformation generator\n\n Applies the deformation to an input jams object.\n\n This generates a sequence of deformed output JAMS.\n\n Parameters\n ----------\n jam : jams.JAMS\n The jam to transform\n\n Examples\n --------\n >>> for jam_out in deformer.transform(jam_in):\n ... process(jam_out)"}
{"_id": "q_6518", "text": "A recursive transformation pipeline"}
{"_id": "q_6519", "text": "Calculate the indices at which to sample a fragment of audio from a file.\n\n Parameters\n ----------\n filename : str\n Path to the input file\n\n n_samples : int > 0\n The number of samples to load\n\n sr : int > 0\n The target sampling rate\n\n Returns\n -------\n start : int\n The sample index from `filename` at which the audio fragment starts\n stop : int\n The sample index from `filename` at which the audio fragment stops (e.g. y = audio[start:stop])"}
{"_id": "q_6520", "text": "Slice a fragment of audio from a file.\n\n This uses pysoundfile to efficiently seek without\n loading the entire stream.\n\n Parameters\n ----------\n filename : str\n Path to the input file\n\n start : int\n The sample index of `filename` at which the audio fragment should start\n\n stop : int\n The sample index of `filename` at which the audio fragment should stop (e.g. y = audio[start:stop])\n\n n_samples : int > 0\n The number of samples to load\n\n sr : int > 0\n The target sampling rate\n\n mono : bool\n Ensure monophonic audio\n\n Returns\n -------\n y : np.ndarray [shape=(n_samples,)]\n A fragment of audio sampled from `filename`\n\n Raises\n ------\n ValueError\n If the source file is shorter than the requested length"}
{"_id": "q_6521", "text": "Normalize `path`.\n\n All remote paths are absolute."}
{"_id": "q_6522", "text": "Returns either the md5 or sha256 hash of a file at `file_path`.\n \n md5 is the default hash_type as it is faster than sha256\n\n The default block size is 64 kb, which appears to be one of a few common\n choices according to https://stackoverflow.com/a/44873382/2680. The code\n below is an extension of the example presented in that post."}
{"_id": "q_6523", "text": "Iterate over all storages for this project."}
{"_id": "q_6524", "text": "Store a new file at `path` in this storage.\n\n The contents of the file descriptor `fp` (opened in 'rb' mode)\n will be uploaded to `path` which is the full path at\n which to store the file.\n\n To force overwrite of an existing file, set `force=True`.\n To overwrite an existing file only if the files differ, set `update=True`"}
{"_id": "q_6525", "text": "Copy data from file-like object fsrc to file-like object fdst\n\n This is like shutil.copyfileobj but with a progressbar."}
{"_id": "q_6526", "text": "Write contents of this file to a local file.\n\n Pass in a filepointer `fp` that has been opened for writing in\n binary mode."}
{"_id": "q_6527", "text": "Remove this file from the remote storage."}
{"_id": "q_6528", "text": "Update the remote file from a local file.\n\n Pass in a filepointer `fp` that has been opened for writing in\n binary mode."}
{"_id": "q_6529", "text": "Iterate over all children of `kind`\n\n Yield an instance of `klass` when a child is of type `kind`. Uses\n `recurse` as the path of attributes in the JSON returned from `url`\n to find more children."}
{"_id": "q_6530", "text": "Initialize or edit an existing .osfcli.config file."}
{"_id": "q_6531", "text": "Login user for protected API calls."}
{"_id": "q_6532", "text": "Fetch project `project_id`."}
{"_id": "q_6533", "text": "Extract JSON from response if `status_code` matches."}
{"_id": "q_6534", "text": "Follow the 'next' link on paginated results."}
{"_id": "q_6535", "text": "Lookup crscode on spatialreference.org and return in specified format.\n\n Arguments:\n\n - *codetype*: \"epsg\", \"esri\", or \"sr-org\".\n - *code*: The code.\n - *format*: The crs format of the returned string. One of \"ogcwkt\", \"esriwkt\", or \"proj4\", but also several others...\n\n Returns:\n\n - Crs string in the specified format."}
{"_id": "q_6536", "text": "Returns the crs object from a string interpreted as a specified format, located at a given url site.\n\n Arguments:\n\n - *url*: The url where the crs string is to be read from. \n - *format* (optional): Which format to parse the crs string as. One of \"ogc wkt\", \"esri wkt\", or \"proj4\".\n If None, tries to autodetect the format for you (default).\n\n Returns:\n\n - CRS object."}
{"_id": "q_6537", "text": "Returns the crs object from a file, with the format determined from the filename extension.\n\n Arguments:\n\n - *filepath*: filepath to be loaded, including extension."}
{"_id": "q_6538", "text": "Load crs object from epsg code, via spatialreference.org.\n Parses based on the proj4 representation.\n\n Arguments:\n\n - *code*: The EPSG code as an integer.\n\n Returns:\n\n - A CS instance of the indicated type."}
{"_id": "q_6539", "text": "Load crs object from sr-org code, via spatialreference.org.\n Parses based on the proj4 representation.\n\n Arguments:\n\n - *code*: The SR-ORG code as an integer.\n\n Returns:\n\n - A CS instance of the indicated type."}
{"_id": "q_6540", "text": "Write the raw header content to the out stream\n\n Parameters:\n ----------\n out : {file object}\n The output stream"}
{"_id": "q_6541", "text": "Instantiate a RawVLR by reading the content from the\n data stream\n\n Parameters:\n ----------\n data_stream : {file object}\n The input stream\n Returns\n -------\n RawVLR\n The RawVLR read"}
{"_id": "q_6542", "text": "Parses the GeoTiff VLRs information into nicer structs"}
{"_id": "q_6543", "text": "Returns the signedness for the given type index\n\n Parameters\n ----------\n type_index: int\n index of the type as defined in the LAS Specification\n\n Returns\n -------\n DimensionSignedness,\n the enum variant"}
{"_id": "q_6544", "text": "Construct a new PackedPointRecord from an existing one with the ability to change\n the point format while doing so"}
{"_id": "q_6545", "text": "Tries to copy the values of the current dimensions from other_record"}
{"_id": "q_6546", "text": "Appends zeros to the points stored if the value we are trying to\n fit is bigger"}
{"_id": "q_6547", "text": "Returns all the dimensions names, including the names of sub_fields\n and their corresponding packed fields"}
{"_id": "q_6548", "text": "Creates a new point record with all dimensions initialized to zero\n\n Parameters\n ----------\n point_format_id: int\n The point format id the point record should have\n point_count : int\n The number of points the point record should have\n\n Returns\n -------\n PackedPointRecord"}
{"_id": "q_6549", "text": "Construct the point record by reading and decompressing the points data from\n the input buffer"}
{"_id": "q_6550", "text": "Returns the scaled z positions of the points as doubles"}
{"_id": "q_6551", "text": "Adds a new extra dimension to the point record\n\n Parameters\n ----------\n name: str\n the name of the dimension\n type: str\n type of the dimension (eg 'uint8')\n description: str, optional\n a small description of the dimension"}
{"_id": "q_6552", "text": "writes the data to a stream\n\n Parameters\n ----------\n out_stream: file object\n the destination stream, implementing the write method\n do_compress: bool, optional, default False\n Flag to indicate if you want the data to be compressed"}
{"_id": "q_6553", "text": "Writes the las data into a file\n\n Parameters\n ----------\n filename : str\n The file where the data should be written.\n do_compress: bool, optional, default None\n if None the extension of the filename will be used\n to determine if the data should be compressed\n otherwise the do_compress flag indicates if the data should be compressed"}
{"_id": "q_6554", "text": "Writes to a stream or file\n\n When destination is a string, it will be interpreted as the path where the file should be written to,\n also if do_compress is None, the compression will be guessed from the file extension:\n\n - .laz -> compressed\n - .las -> uncompressed\n\n .. note::\n\n This means that you could do something like:\n # Create .laz but not compressed\n\n las.write('out.laz', do_compress=False)\n\n # Create .las but compressed\n\n las.write('out.las', do_compress=True)\n\n While it should not confuse Las/Laz readers, it will confuse humans so avoid doing it\n\n\n Parameters\n ----------\n destination: str or file object\n filename or stream to write to\n do_compress: bool, optional\n Flag to indicate if you want to compress the data"}
{"_id": "q_6555", "text": "Builds the dict mapping point format id to numpy.dtype\n In the dtypes, bit fields are still packed, and need to be unpacked each time\n you want to access them"}
{"_id": "q_6556", "text": "Tries to find a matching point format id for the input numpy dtype\n To match, the input dtype has to be 100% equal to a point format dtype\n so all names & dimensions types must match\n\n Parameters:\n ----------\n dtype : numpy.dtype\n The input dtype\n unpacked : bool, optional\n [description] (the default is False, which [default_description])\n\n Raises\n ------\n errors.IncompatibleDataFormat\n If No compatible point format was found\n\n Returns\n -------\n int\n The compatible point format found"}
{"_id": "q_6557", "text": "Returns the minimum file version that supports the given point_format_id"}
{"_id": "q_6558", "text": "Returns the list of vlrs of the requested type\n Always returns a list even if there is only one VLR of type vlr_type.\n\n >>> import pylas\n >>> las = pylas.read(\"pylastests/extrabytes.las\")\n >>> las.vlrs\n [<ExtraBytesVlr(extra bytes structs: 5)>]\n >>> las.vlrs.get(\"WktCoordinateSystemVlr\")\n []\n >>> las.vlrs.get(\"WktCoordinateSystemVlr\")[0]\n Traceback (most recent call last):\n IndexError: list index out of range\n >>> las.vlrs.get('ExtraBytesVlr')\n [<ExtraBytesVlr(extra bytes structs: 5)>]\n >>> las.vlrs.get('ExtraBytesVlr')[0]\n <ExtraBytesVlr(extra bytes structs: 5)>\n\n\n Parameters\n ----------\n vlr_type: str\n the class name of the vlr\n\n Returns\n -------\n :py:class:`list`\n a List of vlrs matching the user_id and records_ids"}
{"_id": "q_6559", "text": "Returns the list of vlrs of the requested type\n The difference with get is that the returned vlrs will be removed from the list\n\n Parameters\n ----------\n vlr_type: str\n the class name of the vlr\n\n Returns\n -------\n list\n a List of vlrs matching the user_id and records_ids"}
{"_id": "q_6560", "text": "Returns true if all the files have the same point format id"}
{"_id": "q_6561", "text": "Returns true if all the files have the same numpy datatype"}
{"_id": "q_6562", "text": "Reads the first 4 bytes of the stream to check that it is LASF"}
{"_id": "q_6563", "text": "Reads and returns the vlrs of the file"}
{"_id": "q_6564", "text": "reads the compressed point record"}
{"_id": "q_6565", "text": "Reads the EVLRs of the file, will fail if the file version\n does not support evlrs"}
{"_id": "q_6566", "text": "Helper function to warn about unknown bytes found in the file"}
{"_id": "q_6567", "text": "Opens and reads the header of the las content in the source\n\n >>> with open_las('pylastests/simple.las') as f:\n ... print(f.header.point_format_id)\n 3\n\n\n >>> f = open('pylastests/simple.las', mode='rb')\n >>> with open_las(f, closefd=False) as flas:\n ... print(flas.header)\n <LasHeader(1.2)>\n >>> f.closed\n False\n\n >>> f = open('pylastests/simple.las', mode='rb')\n >>> with open_las(f) as flas:\n ... las = flas.read()\n >>> f.closed\n True\n\n Parameters\n ----------\n source : str or io.BytesIO\n if source is a str it must be a filename,\n otherwise a stream (a file object with the methods read, seek, tell)\n\n closefd: bool\n Whether the stream/file object shall be closed, this only works\n when using open_las in a with statement. An exception is raised if\n closefd is specified and the source is a filename\n\n\n Returns\n -------\n pylas.lasreader.LasReader"}
{"_id": "q_6568", "text": "Entry point for reading las data in pylas\n\n Reads the whole file into memory.\n\n >>> las = read_las(\"pylastests/simple.las\")\n >>> las.classification\n array([1, 1, 1, ..., 1, 1, 1], dtype=uint8)\n\n Parameters\n ----------\n source : str or io.BytesIO\n The source to read data from\n\n closefd: bool\n if True and the source is a stream, the function will close it\n after it is done reading\n\n\n Returns\n -------\n pylas.lasdatas.base.LasBase\n The object you can interact with to get access to the LAS points & VLRs"}
{"_id": "q_6569", "text": "Creates a File from an existing header,\n allocating the array of point according to the provided header.\n The input header is copied.\n\n\n Parameters\n ----------\n header : existing header to be used to create the file\n\n Returns\n -------\n pylas.lasdatas.base.LasBase"}
{"_id": "q_6570", "text": "Merges multiple las files into one\n\n merged = merge_las(las_1, las_2)\n merged = merge_las([las_1, las_2, las_3])\n\n Parameters\n ----------\n las_files: Iterable of LasData or LasData\n\n Returns\n -------\n pylas.lasdatas.base.LasBase\n The result of the merging"}
{"_id": "q_6571", "text": "writes the given las into memory using BytesIO and \n reads it again, returning the newly read file.\n\n Mostly used for testing purposes, without having to write to disk"}
{"_id": "q_6572", "text": "Returns the creation date stored in the las file\n\n Returns\n -------\n datetime.date"}
{"_id": "q_6573", "text": "Sets the minimum values of x, y, z as a numpy array"}
{"_id": "q_6574", "text": "Sets the maximum values of x, y, z as a numpy array"}
{"_id": "q_6575", "text": "Returns the scaling values of x, y, z as a numpy array"}
{"_id": "q_6576", "text": "Returns the offsets values of x, y, z as a numpy array"}
{"_id": "q_6577", "text": "seeks to the position of the las version header fields\n in the stream and returns it as a str\n\n Parameters\n ----------\n stream io.BytesIO\n\n Returns\n -------\n str\n file version read from the stream"}
{"_id": "q_6578", "text": "Converts a header to another version\n\n Parameters\n ----------\n old_header: the old header instance\n new_version: float or str\n\n Returns\n -------\n The converted header\n\n\n >>> old_header = HeaderFactory.new(1.2)\n >>> HeaderFactory.convert_header(old_header, 1.4)\n <LasHeader(1.4)>\n\n >>> old_header = HeaderFactory.new('1.4')\n >>> HeaderFactory.convert_header(old_header, '1.2')\n <LasHeader(1.2)>"}
{"_id": "q_6579", "text": "Packs a sub field's array into another array using a mask\n\n Parameters:\n ----------\n array : numpy.ndarray\n The array in which the sub field array will be packed into\n array_in : numpy.ndarray\n sub field array to pack\n mask : mask (ie: 0b00001111)\n Mask of the sub field\n inplace : {bool}, optional\n If True a new array is returned (the default is False, which modifies the array in place)\n\n Raises\n ------\n OverflowError\n If the values contained in the sub field array are greater than its mask's number of bits\n allows"}
{"_id": "q_6580", "text": "Returns a dict of the sub fields for this point format\n\n Returns\n -------\n Dict[str, Tuple[str, SubField]]\n maps a sub field name to its composed dimension with additional information"}
{"_id": "q_6581", "text": "Returns the number of extra bytes"}
{"_id": "q_6582", "text": "Returns True if the point format has waveform packet dimensions"}
{"_id": "q_6583", "text": "Function to calculate checksum as per Satel manual."}
{"_id": "q_6584", "text": "Verify checksum and strip header and footer of received frame."}
{"_id": "q_6585", "text": "Add header, checksum and footer to command data."}
{"_id": "q_6586", "text": "Start monitoring for interesting events."}
{"_id": "q_6587", "text": "Send command to disarm."}
{"_id": "q_6588", "text": "Send command to clear the alarm."}
{"_id": "q_6589", "text": "Send output turn on command to the alarm."}
{"_id": "q_6590", "text": "A workaround for Satel Integra disconnecting after 25s.\n\n Every interval it sends some random question to the device, ignoring\n answer - just to keep connection alive."}
{"_id": "q_6591", "text": "Stop monitoring and close connection."}
{"_id": "q_6592", "text": "Wrapper function for using SPI device drivers on systems like the\n Raspberry Pi and BeagleBone. This allows using any of the SPI drivers\n from a single entry point instead of importing the driver for a specific\n LED type.\n\n Provides the same parameters as\n :py:class:`bibliopixel.drivers.SPI.SPIBase` as\n well as those below:\n\n :param ledtype: One of: LPD8806, WS2801, WS281X, or APA102"}
{"_id": "q_6593", "text": "Defer an edit to run on the EditQueue.\n\n :param callable f: The function to be called\n :param tuple args: Positional arguments to the function\n :param tuple kwds: Keyword arguments to the function\n :throws queue.Full: if the queue is full"}
{"_id": "q_6594", "text": "Get all the edits in the queue, then execute them.\n\n The algorithm gets all edits, and then executes all of them. It does\n *not* pull off one edit, execute, repeat until the queue is empty, and\n that means that the queue might not be empty at the end of\n ``run_edits``, because new edits might have entered the queue\n while the previous edits are being executed.\n\n This has the advantage that if edits enter the queue faster than they\n can be processed, ``get_and_run_edits`` won't go into an infinite loop,\n but rather the queue will grow unboundedly, which can be\n detected, mitigated and reported on - or if Queue.maxsize is\n set, ``bp`` will report a fairly clear error and just dump the edits\n on the ground."}
{"_id": "q_6595", "text": "Returns details of either the first or specified device\n\n :param int id: Identifier of desired device. If not given, first device\n found will be returned\n\n :returns tuple: Device ID, Device Address, Firmware Version"}
{"_id": "q_6596", "text": "SHOULD BE PRIVATE METHOD"}
{"_id": "q_6597", "text": "Set device ID to new value.\n\n :param str dev: Serial device address/path\n :param id: Device ID to set"}
{"_id": "q_6598", "text": "Return a named Palette, or None if no such name exists.\n\n If ``name`` is omitted, the default value is used."}
{"_id": "q_6599", "text": "Draw a circle in an RGB color, with center x0, y0 and radius r."}
{"_id": "q_6600", "text": "Draw a filled circle in an RGB color, with center x0, y0 and radius r."}
{"_id": "q_6601", "text": "Draw a line between x0, y0 and x1, y1 in an RGB color.\n\n :param colorFunc: a function that takes an integer from x0 to x1 and\n returns a color corresponding to that point\n :param aa: if True, use Bresenham's algorithm for line drawing;\n otherwise use Xiaolin Wu's algorithm"}
{"_id": "q_6602", "text": "Draw line from point x0, y0 to x1, y1 using Bresenham's algorithm.\n\n Will draw beyond matrix bounds."}
{"_id": "q_6603", "text": "Draw filled triangle with points x0,y0 - x1,y1 - x2,y2\n\n :param aa: if True, use Bresenham's algorithm for line drawing;\n otherwise use Xiaolin Wu's algorithm"}
{"_id": "q_6604", "text": "Set the base project for routing."}
{"_id": "q_6605", "text": "Set pixel to RGB color tuple"}
{"_id": "q_6606", "text": "Get RGB color tuple of color at index pixel"}
{"_id": "q_6607", "text": "Scale RGB tuple by level, 0 - 256"}
{"_id": "q_6608", "text": "Save the description as a YML file. Prompt if no file given."}
{"_id": "q_6609", "text": "Run a function, catch, report and discard exceptions"}
{"_id": "q_6610", "text": "Receive a message from the input source and perhaps raise an Exception."}
{"_id": "q_6611", "text": "APA102 & SK9822 support on-chip brightness control, allowing greater\n color depth.\n\n APA102 superimposes a 440Hz PWM on the 19kHz base PWM to control\n brightness. SK9822 uses a base 4.7kHz PWM but controls brightness with a\n variable current source.\n\n Because of this SK9822 will have much less flicker at lower levels.\n Either way, this option is better and faster than scaling in\n BiblioPixel."}
{"_id": "q_6612", "text": "Return an independent copy of this layout with a completely separate\n color_list and no drivers."}
{"_id": "q_6613", "text": "Set the internal colors starting at an optional offset.\n\n If `color_list` is a list or other 1-dimensional array, it is reshaped\n into an N x 3 list.\n\n If `color_list` is too long it is truncated; if it is too short then only\n the initial colors are set."}
{"_id": "q_6614", "text": "Fill the entire strip with HSV color tuple"}
{"_id": "q_6615", "text": "Decorator for RestServer methods that take a single address"}
{"_id": "q_6616", "text": "Decorator for RestServer methods that take multiple addresses"}
{"_id": "q_6617", "text": "Advance a list of unique, ordered elements in-place, lexicographically\n forward or backward, by rightmost or leftmost digit.\n\n Returns False if the permutation wrapped around - i.e. went from\n lexicographically greatest to least, and True in all other cases.\n\n If the length of the list is N, then this function will repeat values after\n N! steps, and will return False exactly once.\n\n See also https://stackoverflow.com/a/34325140/43839"}
{"_id": "q_6618", "text": "For each row or column in cuts, read a list of its colors,\n apply the function to that list of colors, then write it back\n to the layout."}
{"_id": "q_6619", "text": "Compose a sequence of events into one event.\n\n Arguments:\n events: a sequence of objects looking like threading.Event\n condition: a function taking a sequence of bools and returning a bool."}
{"_id": "q_6620", "text": "Draws a filled circle at point x0,y0 with radius r and specified color"}
{"_id": "q_6621", "text": "Draw rectangle with top-left corner at x,y, width w, height h,\n and corner radius r."}
{"_id": "q_6622", "text": "Draw solid rectangle with top-left corner at x,y, width w, height h,\n and corner radius r"}
{"_id": "q_6623", "text": "Draw triangle with points x0,y0 - x1,y1 - x2,y2"}
{"_id": "q_6624", "text": "Use with caution!\n\n Directly set the pixel buffers.\n\n :param colors: A list of color tuples\n :param int pos: Position in color list to begin set operation."}
{"_id": "q_6625", "text": "Return a list of Segments that evenly split the strip."}
{"_id": "q_6626", "text": "Return a new segment starting right after self in the same buffer."}
{"_id": "q_6627", "text": "Stop the builder if it's running."}
{"_id": "q_6628", "text": "Open an instance of simpixel in the browser"}
{"_id": "q_6629", "text": "Depth first recursion through a dictionary containing type constructors\n\n The arguments pre, post and children are independently either:\n\n * None, which means to do nothing\n * a string, which means to use the static class method of that name on the\n class being constructed, or\n * a callable, to be called at each recursion\n\n Arguments:\n\n dictionary -- a project dictionary or one of its subdictionaries\n pre -- called before children are visited in the recursion\n post -- called after children are visited in the recursion\n python_path -- relative path to start resolving typenames"}
{"_id": "q_6630", "text": "Tries to convert a value to a type constructor.\n\n If value is a string, then it is used as the \"typename\" field.\n\n If the \"typename\" field exists, the symbol for that name is imported and\n added to the type constructor as a field \"datatype\".\n\n Throws:\n ImportError -- if \"typename\" is set but cannot be imported\n ValueError -- if \"typename\" is malformed"}
{"_id": "q_6631", "text": "Fill a portion of a strip from start to stop by step with a given item.\n If stop is not given, it defaults to the length of the strip."}
{"_id": "q_6632", "text": "Older animations in BPA and other areas use all sorts of different names for\n what we are now representing with palettes.\n\n This function mutates a kwds dictionary to remove these legacy fields and\n extract a palette from it, which it returns."}
{"_id": "q_6633", "text": "Write a series of frames as a single animated GIF.\n\n :param str filename: the name of the GIF file to write\n\n :param list frames: a list of filenames, each of which represents a single\n frame of the animation. Each frame must have exactly the same\n dimensions, and the code has only been tested with .gif files.\n\n :param float fps:\n The number of frames per second.\n\n :param int loop:\n The number of iterations. Default 0 (meaning loop indefinitely).\n\n :param int palette:\n The number of colors to quantize the image to, rounded to\n the nearest power of two. Default 256."}
{"_id": "q_6634", "text": "Loads not only JSON files but also YAML files ending in .yml.\n\n :param file: a filename or file handle to read from\n :returns: the data loaded from the JSON or YAML file\n :rtype: dict"}
{"_id": "q_6635", "text": "Order colors by hue, saturation and value, in that order.\n\n Returns -1 if a < b, 0 if a == b and 1 if a > b."}
{"_id": "q_6636", "text": "Update sections in a Project description"}
{"_id": "q_6637", "text": "Construct an animation, set the runner, and add in the two\n \"reserved fields\" `name` and `data`."}
{"_id": "q_6638", "text": "Return an image in the given mode."}
{"_id": "q_6639", "text": "Given an animated GIF, return a list with a colorlist for each frame."}
{"_id": "q_6640", "text": "Parse a string representing a time interval or duration into seconds,\n or raise an exception\n\n :param str s: a string representation of a time interval\n :raises ValueError: if ``s`` can't be interpreted as a duration"}
{"_id": "q_6641", "text": "Stop the Runner if it's running.\n Called as a classmethod, stop the running instance if any."}
{"_id": "q_6642", "text": "Display an image on a matrix."}
{"_id": "q_6643", "text": "Every other column is indexed in reverse."}
{"_id": "q_6644", "text": "Return a Palette but don't take into account Palette Names."}
{"_id": "q_6645", "text": "Helper method to generate X,Y coordinate maps for strips"}
{"_id": "q_6646", "text": "Make an object from a symbol."}
{"_id": "q_6647", "text": "For the duration of this context manager, put the PID for this process into\n `pid_filename`, and then remove the file at the end."}
{"_id": "q_6648", "text": "Return an integer index or None"}
{"_id": "q_6649", "text": "Returns a generator with the elements \"data\" taken by offset, restricted\n by self.begin and self.end, and padded on either end by `pad` to get\n back to the original length of `data`"}
{"_id": "q_6650", "text": "Cleans up all sorts of special cases that humans want when entering\n an animation from a yaml file.\n\n 1. Loading it from a file\n 2. Using just a typename instead of a dict\n 3. A single dict representing an animation, with a run: section.\n 4. (Legacy) Having a dict with parallel elements run: and animation:\n 5. (Legacy) A tuple or list: (animation, run )"}
{"_id": "q_6651", "text": "Give each animation a unique, mutable layout so they can run\n independently."}
{"_id": "q_6652", "text": "If a project has a Curses driver, the section \"main\" in the section\n \"run\" must be \"bibliopixel.drivers.curses.Curses.main\"."}
{"_id": "q_6653", "text": "Merge zero or more dictionaries representing projects with the default\n project dictionary and return the result"}
{"_id": "q_6654", "text": "Guess the type of a file.\n\n If allow_directory is False, don't consider the possibility that the\n file is a directory."}
{"_id": "q_6655", "text": "Get a notebook from the database."}
{"_id": "q_6656", "text": "Apply _notebook_model_from_db or _file_model_from_db to each entry\n in file_records, depending on the result of `guess_type`."}
{"_id": "q_6657", "text": "Build a directory model from database directory record."}
{"_id": "q_6658", "text": "Save a notebook.\n\n Returns a validation message."}
{"_id": "q_6659", "text": "Save a non-notebook file."}
{"_id": "q_6660", "text": "Rename object from old_path to path.\n\n NOTE: This method is unfortunately named on the base class. It\n actually moves a file or a directory."}
{"_id": "q_6661", "text": "Delete object corresponding to path."}
{"_id": "q_6662", "text": "Add a new user if they don't already exist."}
{"_id": "q_6663", "text": "Delete a user and all of their resources."}
{"_id": "q_6664", "text": "Create a directory."}
{"_id": "q_6665", "text": "Return a WHERE clause that matches entries in a directory.\n\n Parameterized on table because this clause is re-used between files and\n directories."}
{"_id": "q_6666", "text": "Delete a directory."}
{"_id": "q_6667", "text": "Internal implementation of dir_exists.\n\n Expects a db-style path name."}
{"_id": "q_6668", "text": "Return files in a directory."}
{"_id": "q_6669", "text": "Return subdirectories of a directory."}
{"_id": "q_6670", "text": "Return a SELECT statement that returns the latest N versions of a file."}
{"_id": "q_6671", "text": "Default fields returned by a file query."}
{"_id": "q_6672", "text": "Get file data for the given user_id and path.\n\n Include content only if include_content=True."}
{"_id": "q_6673", "text": "Get the value in the 'id' column for the file with the given\n user_id and path."}
{"_id": "q_6674", "text": "Check if a file exists."}
{"_id": "q_6675", "text": "Rename a directory."}
{"_id": "q_6676", "text": "Save a file.\n\n TODO: Update-then-insert is probably cheaper than insert-then-update."}
{"_id": "q_6677", "text": "Create a generator of decrypted files.\n\n Files are yielded in ascending order of their timestamp.\n\n This function selects all current notebooks (optionally, falling within a\n datetime range), decrypts them, and returns a generator yielding dicts,\n each containing a decoded notebook and metadata including the user,\n filepath, and timestamp.\n\n Parameters\n ----------\n engine : SQLAlchemy.engine\n Engine encapsulating database connections.\n crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by PostgresContentsManager.crypto. Results of this will be used for\n decryption of the selected notebooks.\n min_dt : datetime.datetime, optional\n Minimum last modified datetime at which a file will be included.\n max_dt : datetime.datetime, optional\n Last modified datetime at and after which a file will be excluded.\n logger : Logger, optional"}
{"_id": "q_6678", "text": "Delete all database records for the given user_id."}
{"_id": "q_6679", "text": "Re-encrypt a row from ``table`` with ``id`` of ``row_id``."}
{"_id": "q_6680", "text": "Convert a secret key and a user ID into an encryption key to use with a\n ``cryptography.fernet.Fernet``.\n\n Taken from\n https://cryptography.io/en/latest/fernet/#using-passwords-with-fernet\n\n Parameters\n ----------\n password : unicode\n ascii-encodable key to derive\n user_id : unicode\n ascii-encodable user_id to use as salt"}
{"_id": "q_6681", "text": "Derive a list of per-user Fernet keys from a list of master keys and a\n username.\n\n If a None is encountered in ``passwords``, it is forwarded.\n\n Parameters\n ----------\n passwords : list[unicode]\n List of ascii-encodable keys to derive.\n user_id : unicode or None\n ascii-encodable user_id to use as salt"}
{"_id": "q_6682", "text": "Create and return a function suitable for passing as a crypto_factory to\n ``pgcontents.utils.sync.reencrypt_all_users``\n\n The factory here returns a ``FernetEncryption`` that uses a key derived\n from ``password`` and salted with the supplied user_id."}
{"_id": "q_6683", "text": "Decorator memoizing a single-argument function"}
{"_id": "q_6684", "text": "Get the name from a column-like SQLAlchemy expression.\n\n Works for Columns and Cast expressions."}
{"_id": "q_6685", "text": "Convert a SQLAlchemy row that does not contain a 'content' field to a dict.\n\n If row is None, return None.\n\n Raises AssertionError if there is a field named 'content' in ``fields``."}
{"_id": "q_6686", "text": "Create a checkpoint of the current state of a notebook\n\n Returns a checkpoint_id for the new checkpoint."}
{"_id": "q_6687", "text": "Create a checkpoint of the current state of a file\n\n Returns a checkpoint_id for the new checkpoint."}
{"_id": "q_6688", "text": "delete a checkpoint for a file"}
{"_id": "q_6689", "text": "Get the content of a checkpoint."}
{"_id": "q_6690", "text": "Return a list of checkpoints for a given file"}
{"_id": "q_6691", "text": "Rename all checkpoints for old_path to new_path."}
{"_id": "q_6692", "text": "Delete all checkpoints for the given path."}
{"_id": "q_6693", "text": "Resolve a path based on a dictionary of manager prefixes.\n\n Returns a triple of (prefix, manager, manager_relative_path)."}
{"_id": "q_6694", "text": "Prefix all path entries in model with the given prefix."}
{"_id": "q_6695", "text": "Decorator for methods that accept path as a first argument."}
{"_id": "q_6696", "text": "Parameterized decorator for methods that accept path as a second\n argument."}
{"_id": "q_6697", "text": "Strip slashes from directories before updating."}
{"_id": "q_6698", "text": "Resolve paths with '..' to normalized paths, raising an error if the final\n result is outside root."}
{"_id": "q_6699", "text": "Decode base64 data of unknown format.\n\n Attempts to interpret data as utf-8, falling back to ascii on failure."}
{"_id": "q_6700", "text": "Decode base64 content for a file.\n\n format:\n If 'text', the contents will be decoded as UTF-8.\n If 'base64', do nothing.\n If not specified, try to decode as UTF-8, and fall back to base64\n\n Returns a triple of decoded_content, format, and mimetype."}
{"_id": "q_6701", "text": "Return an iterable of all prefix directories of path, descending from root."}
{"_id": "q_6702", "text": "Create a user."}
{"_id": "q_6703", "text": "Split an iterable of models into a list of file paths and a list of\n directory paths."}
{"_id": "q_6704", "text": "Recursive helper for walk."}
{"_id": "q_6705", "text": "Iterate over all files visible to ``mgr``."}
{"_id": "q_6706", "text": "Iterate over the contents of all files visible to ``mgr``."}
{"_id": "q_6707", "text": "Re-encrypt data for all users.\n\n This function is idempotent, meaning that it should be possible to apply\n the same re-encryption process multiple times without having any effect on\n the database. Idempotency is achieved by first attempting to decrypt with\n the old crypto and falling back to the new crypto on failure.\n\n An important consequence of this strategy is that **decrypting** a database\n is not supported with this function, because ``NoEncryption.decrypt``\n always succeeds. To decrypt an already-encrypted database, use\n ``unencrypt_all_users`` instead.\n\n It is, however, possible to perform an initial encryption of a database by\n passing a function returning a ``NoEncryption`` as ``old_crypto_factory``.\n\n Parameters\n ----------\n engine : SQLAlchemy.engine\n Engine encapsulating database connections.\n old_crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by PostgresContentsManager.crypto. Results of this will be used for\n decryption of existing database content.\n new_crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by PostgresContentsManager.crypto. Results of this will be used for\n re-encryption of database content.\n\n This **must not** return instances of ``NoEncryption``. Use\n ``unencrypt_all_users`` if you want to unencrypt a database.\n logger : logging.Logger, optional\n A logger to user during re-encryption.\n\n See Also\n --------\n reencrypt_user\n unencrypt_all_users"}
{"_id": "q_6708", "text": "Re-encrypt all files and checkpoints for a single user."}
{"_id": "q_6709", "text": "Unencrypt all files and checkpoints for a single user."}
{"_id": "q_6710", "text": "Upgrade the given database to revision."}
{"_id": "q_6711", "text": "Sanitizes the data for the given block.\n If the block has a matching embed serializer, use the `to_internal_value` method."}
{"_id": "q_6712", "text": "Queue an instance to be fetched from the database."}
{"_id": "q_6713", "text": "Insert a fetched instance into embed block."}
{"_id": "q_6714", "text": "Load data in bulk for each embed block."}
{"_id": "q_6715", "text": "Perform validation of the widget data"}
{"_id": "q_6716", "text": "Render HTML entry point for manager app."}
{"_id": "q_6717", "text": "Excludes fields that are included in the query parameters"}
{"_id": "q_6718", "text": "Get the latest article with the given primary key."}
{"_id": "q_6719", "text": "Optionally restricts the returned articles by filtering against a `topic`\n query parameter in the URL."}
{"_id": "q_6720", "text": "Only display unpublished content to authenticated users, filter by\n query parameter if present."}
{"_id": "q_6721", "text": "Overrides the default get_attribute method to convert None values to False."}
{"_id": "q_6722", "text": "Checks that the given widget contains the required fields"}
{"_id": "q_6723", "text": "Return True if id is a valid UUID, False otherwise."}
{"_id": "q_6724", "text": "Raise a ValidationError if data does not match the author format."}
{"_id": "q_6725", "text": "Save widget data for this zone."}
{"_id": "q_6726", "text": "Renders the widget as HTML."}
{"_id": "q_6727", "text": "Retrieves the settings for this integration as a dictionary.\n\n Removes all hidden fields if show_hidden=False"}
{"_id": "q_6728", "text": "Receive OAuth callback request from Facebook."}
{"_id": "q_6729", "text": "Updates settings for given integration."}
{"_id": "q_6730", "text": "Handles requests to the user signup page."}
{"_id": "q_6731", "text": "Renders the contents of the zone with given zone_id."}
{"_id": "q_6732", "text": "Handles saving the featured image.\n\n If data is None, the featured image will be removed.\n\n `data` should be a dictionary with the following format:\n {\n 'image_id': int,\n 'caption': str,\n 'credit': str\n }"}
{"_id": "q_6733", "text": "Save the subsection to the parent article"}
{"_id": "q_6734", "text": "Returns the file extension."}
{"_id": "q_6735", "text": "Custom save method to process thumbnails and save image dimensions."}
{"_id": "q_6736", "text": "Attempts to connect to the MySQL server.\n\n :return: Bound MySQL connection object if successful or ``None`` if\n unsuccessful."}
{"_id": "q_6737", "text": "Copy the instance and make sure not to use a reference"}
{"_id": "q_6738", "text": "Returns a generator for individual account transactions. The\n latest operation will be first. This call can be used in a\n ``for`` loop.\n\n :param int first: sequence number of the first\n transaction to return (*optional*)\n :param int last: sequence number of the last\n transaction to return (*optional*)\n :param int limit: limit number of transactions to\n return (*optional*)\n :param array only_ops: Limit generator by these\n operations (*optional*)\n :param array exclude_ops: Exclude these operations from\n generator (*optional*).\n\n .. note::\n only_ops and exclude_ops take an array of strings:\n The full list of operation IDs can be found in\n operationids.py.\n Example: ['transfer', 'fill_order']"}
{"_id": "q_6739", "text": "Upgrade account to lifetime member"}
{"_id": "q_6740", "text": "Add another account to the whitelist of this account"}
{"_id": "q_6741", "text": "Remove another account from any list of this account"}
{"_id": "q_6742", "text": "Used to derive a number that allows the public key to be easily\n recovered from the signature"}
{"_id": "q_6743", "text": "Returns a datetime of the block with the given block\n number.\n\n :param int block_num: Block number"}
{"_id": "q_6744", "text": "Returns the timestamp of the block with the given block\n number.\n\n :param int block_num: Block number"}
{"_id": "q_6745", "text": "Yields account names between start and stop.\n\n :param str start: Start at this account name\n :param str stop: Stop at this account name\n :param int steps: Obtain ``steps`` results with a single call from RPC"}
{"_id": "q_6746", "text": "Refresh the data from the API server"}
{"_id": "q_6747", "text": "Is the store unlocked so that I can decrypt the content?"}
{"_id": "q_6748", "text": "The password is used to encrypt this masterpassword. To\n decrypt the keys stored in the keys database, one must use\n BIP38, decrypt the masterpassword from the configuration\n store with the user password, and use the decrypted\n masterpassword to decrypt the BIP38 encrypted private keys\n from the keys storage!\n\n :param str password: Password to use for en-/de-cryption"}
{"_id": "q_6749", "text": "Derive the checksum\n\n :param str s: Random string for which to derive the checksum"}
{"_id": "q_6750", "text": "Change the password that allows to decrypt the master key"}
{"_id": "q_6751", "text": "Decrypt the content according to BIP38\n\n :param str wif: Encrypted key"}
{"_id": "q_6752", "text": "Encrypt the content according to BIP38\n\n :param str wif: Unencrypted key"}
{"_id": "q_6753", "text": "Derive private key from the brain key and the current sequence\n number"}
{"_id": "q_6754", "text": "Derive y point from x point"}
{"_id": "q_6755", "text": "Return the point for the public key"}
{"_id": "q_6756", "text": "Derive new public key from this key and a sha256 \"offset\""}
{"_id": "q_6757", "text": "Derive uncompressed public key"}
{"_id": "q_6758", "text": "Derive new private key from this private key and an arbitrary\n sequence number"}
{"_id": "q_6759", "text": "Derive new private key from this key and a sha256 \"offset\""}
{"_id": "q_6760", "text": "Claim a balance from the genesis block\n\n :param str balance_id: The identifier that identifies the balance\n to claim (1.15.x)\n :param str account: (optional) the account that owns the bet\n (defaults to ``default_account``)"}
{"_id": "q_6761", "text": "This method will initialize ``SharedInstance.instance`` and return it.\n The purpose of this method is to offer a single default\n instance that can be reused by multiple classes."}
{"_id": "q_6762", "text": "This allows setting a config that will be used when calling\n ``shared_blockchain_instance`` and allows defining the configuration\n without requiring an instance to actually be created"}
{"_id": "q_6763", "text": "Find the next url in the list"}
{"_id": "q_6764", "text": "reset the failed connection counters"}
{"_id": "q_6765", "text": "Is the key `key` available?"}
{"_id": "q_6766", "text": "returns all items of the store as tuples"}
{"_id": "q_6767", "text": "Return the key if exists or a default value\n\n :param str value: Value\n :param str default: Default value if key not present"}
{"_id": "q_6768", "text": "Delete a key from the store\n\n :param str value: Value"}
{"_id": "q_6769", "text": "Check if the database table exists"}
{"_id": "q_6770", "text": "Create the new table in the SQLite database"}
{"_id": "q_6771", "text": "Returns an instance of base \"Operations\" for further processing"}
{"_id": "q_6772", "text": "Try to obtain the wif key from the wallet by telling which account\n and permission is supposed to sign the transaction"}
{"_id": "q_6773", "text": "Add a wif that should be used for signing of the transaction."}
{"_id": "q_6774", "text": "Auxiliary method to obtain the required fees for a set of\n operations. Requires a websocket connection to a witness node!"}
{"_id": "q_6775", "text": "Verify the authority of the signed transaction"}
{"_id": "q_6776", "text": "Broadcast a transaction to the blockchain network\n\n :param tx tx: Signed transaction to broadcast"}
{"_id": "q_6777", "text": "Clear the transaction builder and start from scratch"}
{"_id": "q_6778", "text": "Returns the price instance so that the base asset is ``base``.\n\n Note: This makes a copy of the object!"}
{"_id": "q_6779", "text": "Returns the price instance so that the quote asset is ``quote``.\n\n Note: This makes a copy of the object!"}
{"_id": "q_6780", "text": "This method obtains the required private keys if present in\n the wallet, finalizes the transaction, signs it and\n broadcasts it\n\n :param operation ops: The operation (or list of operations) to\n broadcast\n :param operation account: The account that authorizes the\n operation\n :param string permission: The required permission for\n signing (active, owner, posting)\n :param object append_to: This allows providing an instance of\n ProposalsBuilder (see :func:`new_proposal`) or\n TransactionBuilder (see :func:`new_tx()`) to specify\n where to put a specific operation.\n\n .. note:: ``append_to`` is exposed to every method used in\n this class\n\n .. note::\n\n If ``ops`` is a list of operations, they all need to be\n signable by the same key! Thus, you cannot combine ops\n that require active permission with ops that require\n posting permission. Neither can you use different\n accounts for different operations!\n\n .. note:: This uses ``txbuffer`` as an instance of\n :class:`transactionbuilder.TransactionBuilder`.\n You may want to use your own txbuffer"}
{"_id": "q_6781", "text": "Broadcast a transaction to the Blockchain\n\n :param tx tx: Signed transaction to broadcast"}
{"_id": "q_6782", "text": "Let's obtain a new txbuffer\n\n :returns int txid: id of the new txbuffer"}
{"_id": "q_6783", "text": "The transaction id of this transaction"}
{"_id": "q_6784", "text": "Sign the transaction with the provided private keys.\n\n :param array wifkeys: Array of wif keys\n :param str chain: identifier for the chain"}
{"_id": "q_6785", "text": "Unlock the wallet database"}
{"_id": "q_6786", "text": "Create a new wallet database"}
{"_id": "q_6787", "text": "Add a private key to the wallet database"}
{"_id": "q_6788", "text": "Remove all keys associated with a given account"}
{"_id": "q_6789", "text": "Obtain owner Memo Key for an account from the wallet database"}
{"_id": "q_6790", "text": "Obtain owner Active Key for an account from the wallet database"}
{"_id": "q_6791", "text": "Obtain the first account name from public key"}
{"_id": "q_6792", "text": "Get key type"}
{"_id": "q_6793", "text": "Return all accounts installed in the wallet database"}
{"_id": "q_6794", "text": "Encrypt a memo\n\n :param str message: clear text memo message\n :returns: encrypted message\n :rtype: str"}
{"_id": "q_6795", "text": "Decrypt a message\n\n :param dict message: encrypted memo message\n :returns: decrypted message\n :rtype: str"}
{"_id": "q_6796", "text": "Derive the shared secret between ``priv`` and ``pub``\n\n :param `Base58` priv: Private Key\n :param `Base58` pub: Public Key\n :return: Shared secret\n :rtype: hex\n\n The shared secret is generated such that::\n\n Pub(Alice) * Priv(Bob) = Pub(Bob) * Priv(Alice)"}
{"_id": "q_6797", "text": "Initialize AES instance\n\n :param hex shared_secret: Shared Secret to use as encryption key\n :param int nonce: Random nonce\n :return: AES instance\n :rtype: AES"}
{"_id": "q_6798", "text": "Encode a message with a shared secret between Alice and Bob\n\n :param PrivateKey priv: Private Key (of Alice)\n :param PublicKey pub: Public Key (of Bob)\n :param int nonce: Random nonce\n :param str message: Memo message\n :return: Encrypted message\n :rtype: hex"}
{"_id": "q_6799", "text": "Decode a message with a shared secret between Alice and Bob\n\n :param PrivateKey priv: Private Key (of Bob)\n :param PublicKey pub: Public Key (of Alice)\n :param int nonce: Nonce used for Encryption\n :param bytes message: Encrypted Memo message\n :return: Decrypted message\n :rtype: str\n :raise ValueError: if message cannot be decoded as valid UTF-8\n string"}
{"_id": "q_6800", "text": "Send IPMI 'command' via ipmitool"}
{"_id": "q_6801", "text": "Find the given 'pattern' in 'content'"}
{"_id": "q_6802", "text": "Cat file and return content"}
{"_id": "q_6803", "text": "Get chunk meta of NVMe device"}
{"_id": "q_6804", "text": "Get sizeof DescriptorTable"}
{"_id": "q_6805", "text": "Verify LNVM variables and construct exported variables"}
{"_id": "q_6806", "text": "Compare two Buffer items"}
{"_id": "q_6807", "text": "Copy stream to buffer"}
{"_id": "q_6808", "text": "Write buffer to file"}
{"_id": "q_6809", "text": "Read file to buffer"}
{"_id": "q_6810", "text": "230V power on"}
{"_id": "q_6811", "text": "Get chunk information"}
{"_id": "q_6812", "text": "Verify BLOCK variables and construct exported variables"}
{"_id": "q_6813", "text": "Execute a script or testcase"}
{"_id": "q_6814", "text": "Setup test-hooks\n @returns dict of hook filepaths {\"enter\": [], \"exit\": []}"}
{"_id": "q_6815", "text": "Dump the given trun to file"}
{"_id": "q_6816", "text": "Print essential info on"}
{"_id": "q_6817", "text": "Create and initialize a testcase"}
{"_id": "q_6818", "text": "Triggers when exiting the given testsuite"}
{"_id": "q_6819", "text": "Creates and initializes a TESTSUITE struct and side-effects such as creating\n output directories and forwarding initialization of testcases"}
{"_id": "q_6820", "text": "setup res_root and aux_root, log info and run tcase-enter-hooks\n\n @returns 0 when all hooks succeed, some value otherwise"}
{"_id": "q_6821", "text": "Triggers when exiting the given testrun"}
{"_id": "q_6822", "text": "Triggers when entering the given testrun"}
{"_id": "q_6823", "text": "Setup the testrunner data-structure, embedding the parsed environment\n variables and command-line arguments and continues with setup for testplans,\n testsuites, and testcases"}
{"_id": "q_6824", "text": "CIJ Test Runner main entry point"}
{"_id": "q_6825", "text": "Get chunk meta table"}
{"_id": "q_6826", "text": "Generic address to device address"}
{"_id": "q_6827", "text": "Start DMESG job in thread"}
{"_id": "q_6828", "text": "Terminate DMESG job"}
{"_id": "q_6829", "text": "generate rater pic"}
{"_id": "q_6830", "text": "round the data"}
{"_id": "q_6831", "text": "Verify PCI variables and construct exported variables"}
{"_id": "q_6832", "text": "Print, emphasized 'good', the given 'txt' message"}
{"_id": "q_6833", "text": "Define the list of 'exported' variables with 'prefix' with values from 'env'"}
{"_id": "q_6834", "text": "Get-log-page chunk information\n\n If the pugrp and punit are set, then provide report only for that pugrp/punit\n\n @returns the first chunk in the given state if one exists, None otherwise"}
{"_id": "q_6835", "text": "Get a chunk-descriptor for the first chunk in the given state.\n\n If the pugrp and punit are set, then search only that pugrp/punit\n\n @returns the first chunk in the given state if one exists, None otherwise"}
{"_id": "q_6836", "text": "Kill all FIO processes"}
{"_id": "q_6837", "text": "Get parameter of FIO"}
{"_id": "q_6838", "text": "Run FIO job in thread"}
{"_id": "q_6839", "text": "Run FIO job"}
{"_id": "q_6840", "text": "Parse descriptions from the given tcase"}
{"_id": "q_6841", "text": "Returns content of the given 'fpath' with HTML annotations for syntax\n highlighting"}
{"_id": "q_6842", "text": "Perform postprocessing of the given test run"}
{"_id": "q_6843", "text": "Replace all absolute paths to \"re-home\" them"}
{"_id": "q_6844", "text": "Main entry point"}
{"_id": "q_6845", "text": "Wait until target is connected"}
{"_id": "q_6846", "text": "Factory method for the assertion builder with value to be tested and optional description."}
{"_id": "q_6847", "text": "Asserts that val is equal to other."}
{"_id": "q_6848", "text": "Asserts that val is not equal to other."}
{"_id": "q_6849", "text": "Asserts that the val is not identical to other, via 'is' compare."}
{"_id": "q_6850", "text": "Asserts that val is of the given type."}
{"_id": "q_6851", "text": "Asserts that val is the given length."}
{"_id": "q_6852", "text": "Asserts that val does not contain the given item or items."}
{"_id": "q_6853", "text": "Asserts that val is iterable and does not contain any duplicate items."}
{"_id": "q_6854", "text": "Asserts that val is empty."}
{"_id": "q_6855", "text": "Asserts that val is not empty."}
{"_id": "q_6856", "text": "Asserts that val is numeric and is less than other."}
{"_id": "q_6857", "text": "Asserts that val is numeric and is between low and high."}
{"_id": "q_6858", "text": "Asserts that val is numeric and is close to other within tolerance."}
{"_id": "q_6859", "text": "Asserts that val is case-insensitive equal to other."}
{"_id": "q_6860", "text": "Asserts that val is string or iterable and ends with suffix."}
{"_id": "q_6861", "text": "Asserts that val is string and matches regex pattern."}
{"_id": "q_6862", "text": "Asserts that val is non-empty string and all characters are alphabetic."}
{"_id": "q_6863", "text": "Asserts that val is non-empty string and all characters are digits."}
{"_id": "q_6864", "text": "Asserts that val is non-empty string and all characters are lowercase."}
{"_id": "q_6865", "text": "Asserts that val is non-empty string and all characters are uppercase."}
{"_id": "q_6866", "text": "Asserts that val is a unicode string."}
{"_id": "q_6867", "text": "Asserts that val is iterable and a subset of the given superset or flattened superset if multiple supersets are given."}
{"_id": "q_6868", "text": "Asserts that val is a dict and contains the given value or values."}
{"_id": "q_6869", "text": "Asserts that val is a dict and contains the given entry or entries."}
{"_id": "q_6870", "text": "Asserts that val is a date and is before other date."}
{"_id": "q_6871", "text": "Asserts that val is a path and that it exists."}
{"_id": "q_6872", "text": "Asserts that val is an existing path to a file."}
{"_id": "q_6873", "text": "Asserts that val is an existing path to a directory."}
{"_id": "q_6874", "text": "Asserts that val is an existing path to a file and that file is named filename."}
{"_id": "q_6875", "text": "Asserts that val is an existing path to a file and that file is a child of parent."}
{"_id": "q_6876", "text": "Asserts that val is callable and that when called raises the given error."}
{"_id": "q_6877", "text": "Asserts the val callable when invoked with the given args and kwargs raises the expected exception."}
{"_id": "q_6878", "text": "Helper to convert the given args and kwargs into a string."}
{"_id": "q_6879", "text": "Generate CSV file for training and testing data\n\n Input\n =====\n best_path: str, path to BEST folder which contains unzipped subfolder\n 'article', 'encyclopedia', 'news', 'novel'\n\n cleaned_data: str, path to output folder, the cleaned data will be saved\n in the given folder name where training set will be stored in `train` folder\n and testing set will be stored in `test` folder\n\n create_val: boolean, True or False, if True, divide training set into training set and\n validation set in `val` folder"}
{"_id": "q_6880", "text": "Transform processed path into feature matrix and output array\n\n Input\n =====\n best_processed_path: str, path to processed BEST dataset\n\n option: str, 'train' or 'test'"}
{"_id": "q_6881", "text": "Given path to processed BEST dataset,\n train CNN model for words beginning alongside with\n character label encoder and character type label encoder\n\n Input\n =====\n best_processed_path: str, path to processed BEST dataset\n weight_path: str, path to weight path file\n verbose: int, verbose option for training Keras model\n\n Output\n ======\n model: keras model, keras model for tokenize prediction"}
{"_id": "q_6882", "text": "Tokenize given Thai text string\n\n Input\n =====\n text: str, Thai text string\n custom_dict: str (or list), path to customized dictionary file\n It allows the function not to tokenize given dictionary wrongly.\n The file should contain custom words separated by line.\n Alternatively, you can provide list of custom words too.\n\n Output\n ======\n tokens: list, list of tokenized words\n\n Example\n =======\n >> deepcut.tokenize('\u0e15\u0e31\u0e14\u0e04\u0e33\u0e44\u0e14\u0e49\u0e14\u0e35\u0e21\u0e32\u0e01')\n >> ['\u0e15\u0e31\u0e14\u0e04\u0e33','\u0e44\u0e14\u0e49','\u0e14\u0e35','\u0e21\u0e32\u0e01']"}
{"_id": "q_6883", "text": "Create feature array of character and surrounding characters"}
{"_id": "q_6884", "text": "Given input dataframe, create feature dataframe of shifted characters"}
{"_id": "q_6885", "text": "Wraps a fileobj in a bandwidth limited stream wrapper\n\n :type fileobj: file-like obj\n :param fileobj: The file-like obj to wrap\n\n :type transfer_coordinator: s3transfer.futures.TransferCoordinator\n :param transfer_coordinator: The coordinator for the general transfer\n that the wrapped stream is a part of\n\n :type enabled: boolean\n :param enabled: Whether bandwidth limiting should be enabled to start"}
{"_id": "q_6886", "text": "Read a specified amount\n\n Reads will only be throttled if bandwidth limiting is enabled."}
{"_id": "q_6887", "text": "Consume a requested amount\n\n :type amt: int\n :param amt: The amount of bytes to request to consume\n\n :type request_token: RequestToken\n :param request_token: The token associated to the consumption\n request that is used to identify the request. So if a\n RequestExceededException is raised the token should be used\n in subsequent retry consume() requests.\n\n :raises RequestExceededException: If the consumption amount would\n exceed the maximum allocated bandwidth\n\n :rtype: int\n :returns: The amount consumed"}
{"_id": "q_6888", "text": "Schedules a wait time to be able to consume an amount\n\n :type amt: int\n :param amt: The amount of bytes scheduled to be consumed\n\n :type token: RequestToken\n :param token: The token associated to the consumption\n request that is used to identify the request.\n\n :type time_to_consume: float\n :param time_to_consume: The desired time it should take for that\n specific request amount to be consumed regardless of previously\n scheduled consumption requests\n\n :rtype: float\n :returns: The amount of time to wait for the specific request before\n actually consuming the specified amount."}
{"_id": "q_6889", "text": "Get the projected rate using a provided amount and time\n\n :type amt: int\n :param amt: The proposed amount to consume\n\n :type time_at_consumption: float\n :param time_at_consumption: The proposed time to consume at\n\n :rtype: float\n :returns: The consumption rate if that amt and time were consumed"}
{"_id": "q_6890", "text": "Record the consumption rate based off amount and time point\n\n :type amt: int\n :param amt: The amount that got consumed\n\n :type time_at_consumption: float\n :param time_at_consumption: The time at which the amount was consumed"}
{"_id": "q_6891", "text": "Downloads the object's contents to a file\n\n :type bucket: str\n :param bucket: The name of the bucket to download from\n\n :type key: str\n :param key: The name of the key to download from\n\n :type filename: str\n :param filename: The name of a file to download to.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client operation\n\n :type expected_size: int\n :param expected_size: The expected size in bytes of the download. If\n provided, the downloader will not call HeadObject to determine the\n object's size and use the provided value instead. The size is\n needed to determine whether to do a multipart download.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the download"}
{"_id": "q_6892", "text": "Poll for the result of a transfer\n\n :param transfer_id: Unique identifier for the transfer\n :return: If the transfer succeeded, it will return the result. If the\n transfer failed, it will raise the exception associated to the\n failure."}
{"_id": "q_6893", "text": "Decrement the count by one"}
{"_id": "q_6894", "text": "Finalize the counter\n\n Once finalized, the counter can never be incremented and the callback\n can be invoked once the count reaches zero"}
{"_id": "q_6895", "text": "Checks to see if a file is a special UNIX file.\n\n It checks if the file is a character special device, block special\n device, FIFO, or socket.\n\n :param filename: Name of the file\n\n :returns: True if the file is a special file. False, if it is not."}
{"_id": "q_6896", "text": "Get a chunksize close to current that fits within all S3 limits.\n\n :type current_chunksize: int\n :param current_chunksize: The currently configured chunksize.\n\n :type file_size: int or None\n :param file_size: The size of the file to upload. This might be None\n if the object being transferred has an unknown size.\n\n :returns: A valid chunksize that fits within configured limits."}
{"_id": "q_6897", "text": "Queue IO write for submission to the IO executor.\n\n This method accepts an IO executor and information about the\n downloaded data, and handles submitting this to the IO executor.\n\n This method may defer submission to the IO executor if necessary."}
{"_id": "q_6898", "text": "Retrieves a class for managing output for a download\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future for the request\n\n :type osutil: s3transfer.utils.OSUtils\n :param osutil: The os utility associated to the transfer\n\n :rtype: class of DownloadOutputManager\n :returns: The appropriate class to use for managing a specific type of\n input for downloads."}
{"_id": "q_6899", "text": "Downloads an object and places content into io queue\n\n :param client: The client to use when calling GetObject\n :param bucket: The bucket to download from\n :param key: The key to download from\n :param fileobj: The file handle to write content to\n :param exta_args: Any extra arguments to include in GetObject request\n :param callbacks: List of progress callbacks to invoke on download\n :param max_attempts: The number of retries to do when downloading\n :param download_output_manager: The download output manager associated\n with the current download.\n :param io_chunksize: The size of each io chunk to read from the\n download stream and queue in the io queue.\n :param start_index: The location in the file to start writing the\n content of the key to.\n :param bandwidth_limiter: The bandwidth limiter to use when throttling\n the downloading of data in streams."}
{"_id": "q_6900", "text": "Pulls off an io queue to write contents to a file\n\n :param fileobj: The file handle to write content to\n :param data: The data to write\n :param offset: The offset to write the data to."}
{"_id": "q_6901", "text": "Request any available writes given new incoming data.\n\n You call this method by providing new data along with the\n offset associated with the data. If that new data unlocks\n any contiguous writes that can now be submitted, this\n method will return all applicable writes.\n\n This is done with one method call so you don't have to\n make two method calls (put(), get()), each of which acquires a lock."}
{"_id": "q_6902", "text": "Backwards compat function to determine if a fileobj is seekable\n\n :param fileobj: The file-like object to determine if seekable\n\n :returns: True, if seekable. False, otherwise."}
{"_id": "q_6903", "text": "Downloads a file from S3\n\n :type bucket: str\n :param bucket: The name of the bucket to download from\n\n :type key: str\n :param key: The name of the key to download from\n\n :type fileobj: str or seekable file-like object\n :param fileobj: The name of a file to download or a seekable file-like\n object to download. It is recommended to use a filename because\n file-like objects may result in higher memory usage.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client operation\n\n :type subscribers: list(s3transfer.subscribers.BaseSubscriber)\n :param subscribers: The list of subscribers to be invoked in the\n order provided based on the event emit during the process of\n the transfer request.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the download"}
{"_id": "q_6904", "text": "Copies a file in S3\n\n :type copy_source: dict\n :param copy_source: The name of the source bucket, key name of the\n source object, and optional version ID of the source object. The\n dictionary format is:\n ``{'Bucket': 'bucket', 'Key': 'key', 'VersionId': 'id'}``. Note\n that the ``VersionId`` key is optional and may be omitted.\n\n :type bucket: str\n :param bucket: The name of the bucket to copy to\n\n :type key: str\n :param key: The name of the key to copy to\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client operation\n\n :type subscribers: a list of subscribers\n :param subscribers: The list of subscribers to be invoked in the\n order provided based on the event emit during the process of\n the transfer request.\n\n :type source_client: botocore or boto3 Client\n :param source_client: The client to be used for operation that\n may happen at the source object. For example, this client is\n used for the head_object that determines the size of the copy.\n If no client is provided, the transfer manager's client is used\n as the client for the source object.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the copy"}
{"_id": "q_6905", "text": "Delete an S3 object.\n\n :type bucket: str\n :param bucket: The name of the bucket.\n\n :type key: str\n :param key: The name of the S3 object to delete.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n DeleteObject call.\n\n :type subscribers: list\n :param subscribers: A list of subscribers to be invoked during the\n process of the transfer request. Note that the ``on_progress``\n callback is not invoked during object deletion.\n\n :rtype: s3transfer.futures.TransferFuture\n :return: Transfer future representing the deletion."}
{"_id": "q_6906", "text": "Shutdown the TransferManager\n\n It will wait till all transfers complete before it completely shuts\n down.\n\n :type cancel: boolean\n :param cancel: If True, calls TransferFuture.cancel() for\n all in-progress transfers. This is useful if you want the\n shutdown to happen quicker.\n\n :type cancel_msg: str\n :param cancel_msg: The message to specify if canceling all in-progress\n transfers."}
{"_id": "q_6907", "text": "Cancels all in-progress transfers\n\n This cancels the in-progress transfers by calling cancel() on all\n tracked transfer coordinators.\n\n :param msg: The message to pass on to each transfer coordinator that\n gets cancelled.\n\n :param exc_type: The type of exception to set for the cancellation"}
{"_id": "q_6908", "text": "Wait until there are no more in-progress transfers\n\n This will not stop when failures are encountered and will not propagate any\n of these errors from failed transfers, but it can be interrupted with\n a KeyboardInterrupt."}
{"_id": "q_6909", "text": "Retrieves a class for managing input for an upload based on file type\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future for the request\n\n :rtype: class of UploadInputManager\n :returns: The appropriate class to use for managing a specific type of\n input for uploads."}
{"_id": "q_6910", "text": "Sets the exception on the future."}
{"_id": "q_6911", "text": "Set a result for the TransferFuture\n\n Implies that the TransferFuture succeeded. This will always set a\n result because it is invoked on the final task, where there is only\n ever one final task and it is run at the very end of a transfer\n process. So if a result is being set for this final task, the transfer\n succeeded even if something came along and canceled the transfer\n on the final task."}
{"_id": "q_6912", "text": "Set an exception for the TransferFuture\n\n Implies the TransferFuture failed.\n\n :param exception: The exception that caused the transfer to fail.\n :param override: If True, override any existing state."}
{"_id": "q_6913", "text": "Cancels the TransferFuture\n\n :param msg: The message to attach to the cancellation\n :param exc_type: The type of exception to set for the cancellation"}
{"_id": "q_6914", "text": "Submits a task to a provided executor\n\n :type executor: s3transfer.futures.BoundedExecutor\n :param executor: The executor to submit the callable to\n\n :type task: s3transfer.tasks.Task\n :param task: The task to submit to the executor\n\n :type tag: s3transfer.futures.TaskTag\n :param tag: A tag to associate to the submitted task\n\n :rtype: concurrent.futures.Future\n :returns: A future representing the submitted task"}
{"_id": "q_6915", "text": "Add a done callback to be invoked when transfer is done"}
{"_id": "q_6916", "text": "Adds a callback to call upon failure"}
{"_id": "q_6917", "text": "Announce that future is done running and run associated callbacks\n\n This will run any failure cleanups if the transfer failed and they\n have not been run, allows the result() to be unblocked, and will\n run any done callbacks associated to the TransferFuture if they have\n not already been run."}
{"_id": "q_6918", "text": "Submit a task to complete\n\n :type task: s3transfer.tasks.Task\n :param task: The task to run __call__ on\n\n\n :type tag: s3transfer.futures.TaskTag\n :param tag: An optional tag to associate to the task. This\n is used to override which semaphore to use.\n\n :type block: boolean\n :param block: True to wait until it is possible to submit a task.\n False to not wait and raise an error if not able to submit\n a task.\n\n :returns: The future associated to the submitted task"}
{"_id": "q_6919", "text": "Upload a file to an S3 object.\n\n Variants have also been injected into S3 client, Bucket and Object.\n You don't have to use S3Transfer.upload_file() directly."}
{"_id": "q_6920", "text": "Download an S3 object to a file.\n\n Variants have also been injected into S3 client, Bucket and Object.\n You don't have to use S3Transfer.download_file() directly."}
{"_id": "q_6921", "text": "Find functions with step decorator in parsed file"}
{"_id": "q_6922", "text": "Get the arguments passed to step decorators\n converted to python objects."}
{"_id": "q_6923", "text": "Find the step with old_text and change it to new_text. The step function\n parameters are also changed according to move_param_from_idx.\n Each entry in this list should specify parameter position from old."}
{"_id": "q_6924", "text": "Find functions with step decorator in parsed file."}
{"_id": "q_6925", "text": "Get arguments passed to step decorators converted to python objects."}
{"_id": "q_6926", "text": "Find the step with old_text and change it to new_text.\n The step function parameters are also changed according\n to move_param_from_idx. Each entry in this list should\n specify parameter position from old"}
{"_id": "q_6927", "text": "Select default parser for loading and refactoring steps. Passing `redbaron` as argument\n will select the old parsing engine from v0.3.3\n\n Replacing the redbaron parser was necessary to support Python 3 syntax. We have tried our\n best to make sure there is no impact on users. However, there may be regressions with\n the new parser backend.\n\n To revert to the old parser implementation, add the `GETGAUGE_USE_0_3_3_PARSER=true` property\n to the `python.properties` file in the `<PROJECT_DIR>/env/default` directory.\n\n This property along with the redbaron parser will be removed in future releases."}
{"_id": "q_6928", "text": "List team memberships for a team, by ID.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all team memberships returned by\n the query. The generator will automatically request additional 'pages'\n of responses from Webex as needed until all responses have been\n returned. The container makes the generator safe for reuse. A new API\n call will be made, using the same parameters that were specified when\n the generator was created, every time a new iterator is requested from\n the container.\n\n Args:\n teamId(basestring): List team memberships for a team, by ID.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the team memberships returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6929", "text": "Add someone to a team by Person ID or email address.\n\n Add someone to a team by Person ID or email address; optionally making\n them a moderator.\n\n Args:\n teamId(basestring): The team ID.\n personId(basestring): The person ID.\n personEmail(basestring): The email address of the person.\n isModerator(bool): Set to True to make the person a team moderator.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n TeamMembership: A TeamMembership object with the details of the\n created team membership.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6930", "text": "Update a team membership, by ID.\n\n Args:\n membershipId(basestring): The team membership ID.\n isModerator(bool): Set to True to make the person a team moderator.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n TeamMembership: A TeamMembership object with the updated Webex\n Teams team-membership details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6931", "text": "Delete a team membership, by ID.\n\n Args:\n membershipId(basestring): The team membership ID.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6932", "text": "Get a cat fact from catfact.ninja and return it as a string.\n\n Functions for Soundhound, Google, IBM Watson, or other APIs can be added\n to create the desired functionality into this bot."}
{"_id": "q_6933", "text": "Respond to inbound webhook JSON HTTP POSTs from Webex Teams."}
{"_id": "q_6934", "text": "List room memberships.\n\n By default, lists memberships for rooms to which the authenticated user\n belongs.\n\n Use query parameters to filter the response.\n\n Use `roomId` to list memberships for a room, by ID.\n\n Use either `personId` or `personEmail` to filter the results.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all memberships returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n roomId(basestring): Limit results to a specific room, by ID.\n personId(basestring): Limit results to a specific person, by ID.\n personEmail(basestring): Limit results to a specific person, by\n email address.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the memberships returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6935", "text": "Delete a membership, by ID.\n\n Args:\n membershipId(basestring): The membership ID.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6936", "text": "Check to see if string is a validly-formatted web url."}
{"_id": "q_6937", "text": "Open the file and return an EncodableFile tuple."}
{"_id": "q_6938", "text": "Object is an instance of one of the acceptable types or None.\n\n Args:\n o: The object to be inspected.\n acceptable_types: A type or tuple of acceptable types.\n may_be_none(bool): Whether or not the object may be None.\n\n Raises:\n TypeError: If the object is None and may_be_none=False, or if the\n object is not an instance of one of the acceptable types."}
{"_id": "q_6939", "text": "Check response code against the expected code; raise ApiError.\n\n Checks the requests.response.status_code against the provided expected\n response code (erc), and raises a ApiError if they do not match.\n\n Args:\n response(requests.response): The response object returned by a request\n using the requests package.\n expected_response_code(int): The expected response code (HTTP response\n code).\n\n Raises:\n ApiError: If the requests.response.status_code does not match the\n provided expected response code (erc)."}
{"_id": "q_6940", "text": "Given a dictionary or JSON string; return a dictionary.\n\n Args:\n json_data(dict, str): Input JSON object.\n\n Returns:\n A Python dictionary with the contents of the JSON object.\n\n Raises:\n TypeError: If the input object is not a dictionary or string."}
{"_id": "q_6941", "text": "strptime with the Webex Teams DateTime format as the default."}
{"_id": "q_6942", "text": "List rooms.\n\n By default, lists rooms to which the authenticated user belongs.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all rooms returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n teamId(basestring): Limit the rooms to those associated with a\n team, by ID.\n type(basestring): 'direct' returns all 1-to-1 rooms. `group`\n returns all group rooms. If not specified or values not\n matched, will return all room types.\n sortBy(basestring): Sort results by room ID (`id`), most recent\n activity (`lastactivity`), or most recently created\n (`created`).\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the rooms returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6943", "text": "Creation date and time in ISO8601 format."}
{"_id": "q_6944", "text": "Attempt to get the access token from the environment.\n\n Try using the current and legacy environment variables. If the access token\n is found in a legacy environment variable, raise a deprecation warning.\n\n Returns:\n The access token found in the environment (str), or None."}
{"_id": "q_6945", "text": "Create a webhook.\n\n Args:\n name(basestring): A user-friendly name for this webhook.\n targetUrl(basestring): The URL that receives POST requests for\n each event.\n resource(basestring): The resource type for the webhook.\n event(basestring): The event type for the webhook.\n filter(basestring): The filter that defines the webhook scope.\n secret(basestring): The secret used to generate payload signature.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Webhook: A Webhook object with the details of the created webhook.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6946", "text": "Update the HTTP headers used for requests in this session.\n\n Note: Updates provided by the dictionary passed as the `headers`\n parameter to this method are merged into the session headers by adding\n new key-value pairs and/or updating the values of existing keys. The\n session headers are not replaced by the provided dictionary.\n\n Args:\n headers(dict): Updates to the current session headers."}
{"_id": "q_6947", "text": "Given a relative or absolute URL; return an absolute URL.\n\n Args:\n url(basestring): A relative or absolute URL.\n\n Returns:\n str: An absolute URL."}
{"_id": "q_6948", "text": "Abstract base method for making requests to the Webex Teams APIs.\n\n This base method:\n * Expands the API endpoint URL to an absolute URL\n * Makes the actual HTTP request to the API endpoint\n * Provides support for Webex Teams rate-limiting\n * Inspects response codes and raises exceptions as appropriate\n\n Args:\n method(basestring): The request-method type ('GET', 'POST', etc.).\n url(basestring): The URL of the API endpoint to be called.\n erc(int): The expected response code that should be returned by the\n Webex Teams API endpoint to indicate success.\n **kwargs: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint."}
{"_id": "q_6949", "text": "Sends a GET request.\n\n Args:\n url(basestring): The URL of the API endpoint.\n params(dict): The parameters for the HTTP GET request.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint."}
{"_id": "q_6950", "text": "Return a generator that GETs and yields pages of data.\n\n Provides native support for RFC5988 Web Linking.\n\n Args:\n url(basestring): The URL of the API endpoint.\n params(dict): The parameters for the HTTP GET request.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint."}
{"_id": "q_6951", "text": "Sends a DELETE request.\n\n Args:\n url(basestring): The URL of the API endpoint.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint."}
{"_id": "q_6952", "text": "Create a new guest issuer using the provided issuer token.\n\n This function returns a guest issuer with an API access token.\n\n Args:\n subject(basestring): Unique and public identifier\n displayName(basestring): Display Name of the guest user\n issuerToken(basestring): Issuer token from developer hub\n expiration(basestring): Expiration time as a unix timestamp\n secret(basestring): The secret used to sign your guest issuers\n\n Returns:\n GuestIssuerToken: A Guest Issuer with a valid access token.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6953", "text": "Lists messages in a room.\n\n Each message will include content attachments if present.\n\n The list API sorts the messages in descending order by creation date.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all messages returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n roomId(basestring): List messages for a room, by ID.\n mentionedPeople(basestring): List messages where the caller is\n mentioned by specifying \"me\" or the caller `personId`.\n before(basestring): List messages sent before a date and time, in\n ISO8601 format.\n beforeMessage(basestring): List messages sent before a message,\n by ID.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the messages returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6954", "text": "Post a message, and optionally an attachment, to a room.\n\n The files parameter is a list, which accepts multiple values to allow\n for future expansion, but currently only one file may be included with\n the message.\n\n Args:\n roomId(basestring): The room ID.\n toPersonId(basestring): The ID of the recipient when sending a\n private 1:1 message.\n toPersonEmail(basestring): The email address of the recipient when\n sending a private 1:1 message.\n text(basestring): The message, in plain text. If `markdown` is\n specified this parameter may be optionally used to provide\n alternate text for UI clients that do not support rich text.\n markdown(basestring): The message, in markdown format.\n files(`list`): A list of public URL(s) or local path(s) to files to\n be posted into the room. Only one file is allowed per message.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Message: A Message object with the details of the created message.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n ValueError: If the files parameter is a list of length > 1, or if\n the string in the list (the only element in the list) does not\n contain a valid URL or path to a local file."}
{"_id": "q_6955", "text": "Delete a message.\n\n Args:\n messageId(basestring): The ID of the message to be deleted.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6956", "text": "Create a new user account for a given organization\n\n Only an admin can create a new user account.\n\n Args:\n emails(`list`): Email address(es) of the person (list of strings).\n displayName(basestring): Full name of the person.\n firstName(basestring): First name of the person.\n lastName(basestring): Last name of the person.\n avatar(basestring): URL to the person's avatar in PNG format.\n orgId(basestring): ID of the organization to which this\n person belongs.\n roles(`list`): Roles of the person (list of strings containing\n the role IDs to be assigned to the person).\n licenses(`list`): Licenses allocated to the person (list of\n strings - containing the license IDs to be allocated to the\n person).\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Person: A Person object with the details of the created person.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6957", "text": "Get a person's details, by ID.\n\n Args:\n personId(basestring): The ID of the person to be retrieved.\n\n Returns:\n Person: A Person object with the details of the requested person.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6958", "text": "Update details for a person, by ID.\n\n Only an admin can update a person's details.\n\n Email addresses for a person cannot be changed via the Webex Teams API.\n\n Include all details for the person. This action expects all user\n details to be present in the request. A common approach is to first GET\n the person's details, make changes, then PUT both the changed and\n unchanged values.\n\n Args:\n personId(basestring): The person ID.\n emails(`list`): Email address(es) of the person (list of strings).\n displayName(basestring): Full name of the person.\n firstName(basestring): First name of the person.\n lastName(basestring): Last name of the person.\n avatar(basestring): URL to the person's avatar in PNG format.\n orgId(basestring): ID of the organization to which this\n person belongs.\n roles(`list`): Roles of the person (list of strings containing\n the role IDs to be assigned to the person).\n licenses(`list`): Licenses allocated to the person (list of\n strings - containing the license IDs to be allocated to the\n person).\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Person: A Person object with the updated details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6959", "text": "Remove a person from the system.\n\n Only an admin can remove a person.\n\n Args:\n personId(basestring): The ID of the person to be deleted.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6960", "text": "List teams to which the authenticated user belongs.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all teams returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the teams returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6961", "text": "Update details for a team, by ID.\n\n Args:\n teamId(basestring): The team ID.\n name(basestring): A user-friendly name for the team.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Team: A Team object with the updated Webex Teams team details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6962", "text": "List events.\n\n List events in your organization. Several query parameters are\n available to filter the response.\n\n Note: `from` is a keyword in Python and may not be used as a variable\n name, so we had to use `_from` instead.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all events returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n resource(basestring): Limit results to a specific resource type.\n Possible values: \"messages\", \"memberships\".\n type(basestring): Limit results to a specific event type. Possible\n values: \"created\", \"updated\", \"deleted\".\n actorId(basestring): Limit results to events performed by this\n person, by ID.\n _from(basestring): Limit results to events which occurred after a\n date and time, in ISO8601 format (yyyy-MM-dd'T'HH:mm:ss.SSSZ).\n to(basestring): Limit results to events which occurred before a\n date and time, in ISO8601 format (yyyy-MM-dd'T'HH:mm:ss.SSSZ).\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the events returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."}
{"_id": "q_6963", "text": "Respond to inbound webhook JSON HTTP POST from Webex Teams."}
{"_id": "q_6964", "text": "Get the ngrok public HTTP URL from the local client API."}
{"_id": "q_6965", "text": "Create a Webex Teams webhook pointing to the public ngrok URL."}
{"_id": "q_6966", "text": "Return all rows from a cursor as a dict."}
{"_id": "q_6967", "text": "Parse a received datetime into a timezone-aware, Python datetime object.\n\n Arguments:\n datetime_string: A string to be parsed.\n datetime_format: A datetime format string to be used for parsing"}
{"_id": "q_6968", "text": "Connect to the REST API, authenticating with a JWT for the current user."}
{"_id": "q_6969", "text": "Return redirect to embargo error page if the given user is blocked."}
{"_id": "q_6970", "text": "Sort the course mode dictionaries by slug according to the COURSE_MODE_SORT_ORDER constant.\n\n Arguments:\n modes (list): A list of course mode dictionaries.\n Returns:\n list: A list with the course modes dictionaries sorted by slug."}
{"_id": "q_6971", "text": "Query the Enrollment API to see whether a course run has a given course mode available.\n\n Arguments:\n course_run_id (str): The string value of the course run's unique identifier\n\n Returns:\n bool: Whether the course run has the given mode available for enrollment."}
{"_id": "q_6972", "text": "Call the enrollment API to enroll the user in the course specified by course_id.\n\n Args:\n username (str): The username by which the user goes on the OpenEdX platform\n course_id (str): The string value of the course's unique identifier\n mode (str): The enrollment mode which should be used for the enrollment\n cohort (str): Add the user to this named cohort\n\n Returns:\n dict: A dictionary containing details of the enrollment, including course details, mode, username, etc."}
{"_id": "q_6973", "text": "Query the enrollment API to get information about a single course enrollment.\n\n Args:\n username (str): The username by which the user goes on the OpenEdX platform\n course_id (str): The string value of the course's unique identifier\n\n Returns:\n dict: A dictionary containing details of the enrollment, including course details, mode, username, etc."}
{"_id": "q_6974", "text": "Query the enrollment API and determine if a learner is enrolled in a course run.\n\n Args:\n username (str): The username by which the user goes on the OpenEdX platform\n course_run_id (str): The string value of the course's unique identifier\n\n Returns:\n bool: Indicating whether the user is enrolled in the course run. Returns False under any errors."}
{"_id": "q_6975", "text": "Return a Course Discovery API client setup with authentication for the specified user."}
{"_id": "q_6976", "text": "Return specified course catalog.\n\n Returns:\n dict: catalog details if it is available for the user."}
{"_id": "q_6977", "text": "Return paginated response for all catalog courses.\n\n Returns:\n dict: API response with links to next and previous pages."}
{"_id": "q_6978", "text": "Return a paginated list of course catalogs, including name and ID.\n\n Returns:\n dict: Paginated response containing catalogs available for the user."}
{"_id": "q_6979", "text": "Return the courses included in a single course catalog by ID.\n\n Args:\n catalog_id (int): The catalog ID we want to retrieve.\n\n Returns:\n list: Courses of the catalog in question"}
{"_id": "q_6980", "text": "Return single program by UUID, or None if not found.\n\n Arguments:\n program_uuid(string): Program UUID in string form\n\n Returns:\n dict: Program data provided by Course Catalog API"}
{"_id": "q_6981", "text": "Get a program type by its slug.\n\n Arguments:\n slug (str): The slug to identify the program type.\n\n Returns:\n dict: A program type object."}
{"_id": "q_6982", "text": "Find common course modes for a set of course runs.\n\n This function essentially returns an intersection of types of seats available\n for each course run.\n\n Arguments:\n course_run_ids(Iterable[str]): Target Course run IDs.\n\n Returns:\n set: course modes found in all given course runs\n\n Examples:\n # run1 has prof and audit, run 2 has the same\n get_common_course_modes(['course-v1:run1', 'course-v1:run2'])\n {'prof', 'audit'}\n\n # run1 has prof and audit, run 2 has only prof\n get_common_course_modes(['course-v1:run1', 'course-v1:run2'])\n {'prof'}\n\n # run1 has prof and audit, run 2 honor\n get_common_course_modes(['course-v1:run1', 'course-v1:run2'])\n {}\n\n # run1 has nothing, run2 has prof\n get_common_course_modes(['course-v1:run1', 'course-v1:run2'])\n {}\n\n # run1 has prof and audit, run 2 prof, run3 has audit\n get_common_course_modes(['course-v1:run1', 'course-v1:run2', 'course-v1:run3'])\n {}\n\n # run1 has nothing, run 2 prof, run3 has prof\n get_common_course_modes(['course-v1:run1', 'course-v1:run2', 'course-v1:run3'])\n {}"}
{"_id": "q_6983", "text": "Determine if the given course or course run ID is contained in the catalog with the given ID.\n\n Args:\n catalog_id (int): The ID of the catalog\n course_id (str): The ID of the course or course run\n\n Returns:\n bool: Whether the course or course run is contained in the given catalog"}
{"_id": "q_6984", "text": "Load data from API client.\n\n Arguments:\n resource(string): type of resource to load\n default(any): value to return if API query returned empty result. Sensible values: [], {}, None etc.\n\n Returns:\n dict: Deserialized response from Course Catalog API"}
{"_id": "q_6985", "text": "Return all content metadata contained in the catalogs associated with the EnterpriseCustomer.\n\n Arguments:\n enterprise_customer (EnterpriseCustomer): The EnterpriseCustomer to return content metadata for.\n\n Returns:\n list: List of dicts containing content metadata."}
{"_id": "q_6986", "text": "Return items that need to be created, updated, and deleted along with the\n current ContentMetadataItemTransmissions."}
{"_id": "q_6987", "text": "Serialize content metadata items for a create transmission to the integrated channel."}
{"_id": "q_6988", "text": "Transmit content metadata update to integrated channel."}
{"_id": "q_6989", "text": "Transmit content metadata deletion to integrated channel."}
{"_id": "q_6990", "text": "Return the ContentMetadataItemTransmision models for previously\n transmitted content metadata items."}
{"_id": "q_6991", "text": "Update ContentMetadataItemTransmision models for the given content metadata items."}
{"_id": "q_6992", "text": "Flag a method as deprecated.\n\n :param extra: Extra text you'd like to display after the default text."}
{"_id": "q_6993", "text": "View decorator for allowing authenticated user with valid enterprise UUID.\n\n This decorator requires enterprise identifier as a parameter\n `enterprise_uuid`.\n\n This decorator will throw 404 if no kwarg `enterprise_uuid` is provided to\n the decorated view.\n\n If there is no enterprise in database against the kwarg `enterprise_uuid`\n or if the user is not authenticated then it will redirect the user to the\n enterprise-linked SSO login page.\n\n Usage::\n @enterprise_login_required()\n def my_view(request, enterprise_uuid):\n # Some functionality ...\n\n OR\n\n class MyView(View):\n ...\n @method_decorator(enterprise_login_required)\n def get(self, request, enterprise_uuid):\n # Some functionality ..."}
{"_id": "q_6994", "text": "Verify that the username has a matching user, and that the user has an associated EnterpriseCustomerUser."}
{"_id": "q_6995", "text": "Save the model with the found EnterpriseCustomerUser."}
{"_id": "q_6996", "text": "Serialize the EnterpriseCustomerCatalog object.\n\n Arguments:\n instance (EnterpriseCustomerCatalog): The EnterpriseCustomerCatalog to serialize.\n\n Returns:\n dict: The EnterpriseCustomerCatalog converted to a dict."}
{"_id": "q_6997", "text": "Return the enterprise related django groups that this user is a part of."}
{"_id": "q_6998", "text": "Verify that the username has a matching user."}
{"_id": "q_6999", "text": "Save the EnterpriseCustomerUser."}
{"_id": "q_7000", "text": "Return the updated course data dictionary.\n\n Arguments:\n instance (dict): The course data.\n\n Returns:\n dict: The updated course data."}
{"_id": "q_7001", "text": "Return the updated course run data dictionary.\n\n Arguments:\n instance (dict): The course run data.\n\n Returns:\n dict: The updated course run data."}
{"_id": "q_7002", "text": "Return the updated program data dictionary.\n\n Arguments:\n instance (dict): The program data.\n\n Returns:\n dict: The updated program data."}
{"_id": "q_7003", "text": "This implements the same relevant logic as ListSerializer except that if one or more items fail validation,\n processing for other items that did not fail will continue."}
{"_id": "q_7004", "text": "This selectively calls the child create method based on whether or not validation failed for each payload."}
{"_id": "q_7005", "text": "This selectively calls to_representation on each result that was processed by create."}
{"_id": "q_7006", "text": "Perform the enrollment for existing enterprise customer users, or create the pending objects for new users."}
{"_id": "q_7007", "text": "Validates the lms_user_id, if it is given, to see if there is an existing EnterpriseCustomerUser for it."}
{"_id": "q_7008", "text": "Validates the tpa_user_id, if it is given, to see if there is an existing EnterpriseCustomerUser for it.\n\n It first uses the third-party auth API to find the associated username to do the lookup."}
{"_id": "q_7009", "text": "Validates the user_email, if given, to see if an existing EnterpriseCustomerUser exists for it.\n\n If it does not, it does not fail validation, unlike for the other field validation methods above."}
{"_id": "q_7010", "text": "Validates that the course run id is part of the Enterprise Customer's catalog."}
{"_id": "q_7011", "text": "Update pagination links in course catalog data and return DRF Response.\n\n Arguments:\n data (dict): Dictionary containing catalog courses.\n request (HttpRequest): Current request object.\n\n Returns:\n (Response): DRF response object containing pagination links."}
{"_id": "q_7012", "text": "Delete the `role_based_access_control` switch."}
{"_id": "q_7013", "text": "Send a completion status call to SAP SuccessFactors using the client.\n\n Args:\n payload: The learner completion data payload to send to SAP SuccessFactors"}
{"_id": "q_7014", "text": "Modify throttling for service users.\n\n Updates throttling rate if the request is coming from the service user, and\n defaults to UserRateThrottle's configured setting otherwise.\n\n Updated throttling rate comes from `DEFAULT_THROTTLE_RATES` key in `REST_FRAMEWORK`\n setting. service user throttling is specified in `DEFAULT_THROTTLE_RATES` by `service_user` key\n\n Example Setting:\n ```\n REST_FRAMEWORK = {\n ...\n 'DEFAULT_THROTTLE_RATES': {\n ...\n 'service_user': '50/day'\n }\n }\n ```"}
{"_id": "q_7015", "text": "This method adds enterprise-specific metadata for each course.\n\n We are adding the following fields in all the courses.\n tpa_hint: a string for identifying Identity Provider.\n enterprise_id: the UUID of the enterprise\n **kwargs: any additional data one would like to add on a per-use basis.\n\n Arguments:\n enterprise_customer: The customer whose data will be used to fill the enterprise context.\n course_container_key: The key used to find the container for courses in the serializer's data dictionary."}
{"_id": "q_7016", "text": "Update course metadata of the given course and return updated course.\n\n Arguments:\n course (dict): Course Metadata returned by course catalog API\n enterprise_customer (EnterpriseCustomer): enterprise customer instance.\n enterprise_context (dict): Enterprise context to be added to course runs and URLs.\n\n Returns:\n (dict): Updated course metadata"}
{"_id": "q_7017", "text": "Collect learner data for the ``EnterpriseCustomer`` where data sharing consent is granted.\n\n Yields a learner data object for each enrollment, containing:\n\n * ``enterprise_enrollment``: ``EnterpriseCourseEnrollment`` object.\n * ``completed_date``: datetime instance containing the course/enrollment completion date; None if not complete.\n \"Course completion\" occurs for instructor-paced courses when course certificates are issued, and\n for self-paced courses, when the course end date is passed, or when the learner achieves a passing grade.\n * ``grade``: string grade recorded for the learner in the course."}
{"_id": "q_7018", "text": "Generate a learner data transmission audit with fields properly filled in."}
{"_id": "q_7019", "text": "Get enterprise user id from user object.\n\n Arguments:\n obj (User): Django User object\n\n Returns:\n (int): Primary Key identifier for enterprise user object."}
{"_id": "q_7020", "text": "Get enterprise SSO UID.\n\n Arguments:\n obj (User): Django User object\n\n Returns:\n (str): string containing UUID for enterprise customer's Identity Provider."}
{"_id": "q_7021", "text": "Remove content metadata items from the `items_to_create`, `items_to_update`, `items_to_delete` dicts.\n\n Arguments:\n failed_items (list): Failed Items to be removed.\n items_to_create (dict): dict containing the items created successfully.\n items_to_update (dict): dict containing the items updated successfully.\n items_to_delete (dict): dict containing the items deleted successfully."}
{"_id": "q_7022", "text": "Parse and validate arguments for send_course_enrollments command.\n\n Arguments:\n *args: Positional arguments passed to the command\n **options: optional arguments passed to the command\n\n Returns:\n A tuple containing parsed values for\n 1. days (int): Integer showing number of days to look up enterprise enrollments,\n course completion etc and send to xAPI LRS\n 2. enterprise_customer_uuid (EnterpriseCustomer): Enterprise Customer if present then\n send xAPI statements just for this enterprise."}
{"_id": "q_7023", "text": "Send xAPI statements."}
{"_id": "q_7024", "text": "Django template tag that returns course information to display in a modal.\n\n You may pass in a particular course if you like. Otherwise, the modal will look for course context\n within the parent context.\n\n Usage:\n {% course_modal %}\n {% course_modal course %}"}
{"_id": "q_7025", "text": "Django template filter that returns an anchor with attributes useful for course modal selection.\n\n General Usage:\n {{ link_text|link_to_modal:index }}\n\n Examples:\n {{ course_title|link_to_modal:forloop.counter0 }}\n {{ course_title|link_to_modal:3 }}\n {{ view_details_text|link_to_modal:0 }}"}
{"_id": "q_7026", "text": "Populates the ``DataSharingConsent`` model with the ``enterprise`` application's consent data.\n\n Consent data from the ``enterprise`` application come from the ``EnterpriseCourseEnrollment`` model."}
{"_id": "q_7027", "text": "Send a completion status payload to the Degreed Completion Status endpoint\n\n Args:\n user_id: Unused.\n payload: JSON encoded object (serialized from DegreedLearnerDataTransmissionAudit)\n containing completion status fields per Degreed documentation.\n\n Returns:\n A tuple containing the status code and the body of the response.\n Raises:\n HTTPError: if we received a failure response code from Degreed"}
{"_id": "q_7028", "text": "Delete a completion status previously sent to the Degreed Completion Status endpoint\n\n Args:\n user_id: Unused.\n payload: JSON encoded object (serialized from DegreedLearnerDataTransmissionAudit)\n containing the required completion status fields for deletion per Degreed documentation.\n\n Returns:\n A tuple containing the status code and the body of the response.\n Raises:\n HTTPError: if we received a failure response code from Degreed"}
{"_id": "q_7029", "text": "Make a DELETE request using the session object to a Degreed endpoint.\n\n Args:\n url (str): The url to send a DELETE request to.\n data (str): The json encoded payload to DELETE.\n scope (str): Must be one of the scopes Degreed expects:\n - `CONTENT_PROVIDER_SCOPE`\n - `COMPLETION_PROVIDER_SCOPE`"}
{"_id": "q_7030", "text": "Instantiate a new session object for use in connecting with Degreed"}
{"_id": "q_7031", "text": "Return whether or not the specified content is available to the EnterpriseCustomer.\n\n Multiple course_run_ids and/or program_uuids query parameters can be sent to this view to check\n for their existence in the EnterpriseCustomerCatalogs associated with this EnterpriseCustomer.\n At least one course run key or program UUID value must be included in the request."}
{"_id": "q_7032", "text": "Returns the list of enterprise customers the user has a specified group permission access to."}
{"_id": "q_7033", "text": "Retrieve the list of entitlements available to this learner.\n\n Only those entitlements are returned that satisfy enterprise customer's data sharing setting.\n\n Arguments:\n request (HttpRequest): Reference to in-progress request instance.\n pk (Int): Primary key value of the selected enterprise learner.\n\n Returns:\n (HttpResponse): Response object containing a list of learner's entitlements."}
{"_id": "q_7034", "text": "Return whether or not the EnterpriseCustomerCatalog contains the specified content.\n\n Multiple course_run_ids and/or program_uuids query parameters can be sent to this view to check\n for their existence in the EnterpriseCustomerCatalog. At least one course run key\n or program UUID value must be included in the request."}
{"_id": "q_7035", "text": "Return the metadata for the specified course.\n\n The course needs to be included in the specified EnterpriseCustomerCatalog\n in order for metadata to be returned from this endpoint."}
{"_id": "q_7036", "text": "DRF view to get catalog details.\n\n Arguments:\n request (HttpRequest): Current request\n pk (int): Course catalog identifier\n\n Returns:\n (Response): DRF response object containing course catalogs."}
{"_id": "q_7037", "text": "Gets ``email``, ``enterprise_name``, and ``number_of_codes``,\n which are the relevant parameters for this API endpoint.\n\n :param request: The request to this endpoint.\n :return: The ``email``, ``enterprise_name``, and ``number_of_codes`` from the request."}
{"_id": "q_7038", "text": "Get a user-friendly message indicating a missing parameter for the API endpoint."}
{"_id": "q_7039", "text": "Return the title of the content item."}
{"_id": "q_7040", "text": "Return the description of the content item."}
{"_id": "q_7041", "text": "Return the image URI of the content item."}
{"_id": "q_7042", "text": "Return the content metadata item launch points.\n\n SAPSF allows you to transmit an array of content launch points which\n are meant to represent sections of a content item which a learner can\n launch into from SAPSF. Currently, we only provide a single launch\n point for a content item."}
{"_id": "q_7043", "text": "Return the title of the courserun content item."}
{"_id": "q_7044", "text": "Return the schedule of the courserun content item."}
{"_id": "q_7045", "text": "Return the id for the given content_metadata_item, `uuid` for programs or `key` for other content"}
{"_id": "q_7046", "text": "Convert an ISO-8601 datetime string to a Unix epoch timestamp in a given magnitude.\n\n By default, returns seconds."}
{"_id": "q_7047", "text": "Yield successive n-sized chunks from dictionary."}
{"_id": "q_7048", "text": "Convert a datetime.timedelta object or a regular number to a custom-formatted string.\n\n This function works like the strftime() method works for datetime.datetime\n objects.\n\n The fmt argument allows custom formatting to be specified. Fields can\n include seconds, minutes, hours, days, and weeks. Each field is optional.\n\n Arguments:\n tdelta (datetime.timedelta, int): time delta object containing the duration or an integer\n to go with the input_type.\n fmt (str): Expected format of the time delta. Placeholders can only be one of the following.\n 1. D to extract days from time delta\n 2. H to extract hours from time delta\n 3. M to extract minutes from time delta\n 4. S to extract seconds from time delta\n input_type (str): The input_type argument allows tdelta to be a regular number instead of the\n default, which is a datetime.timedelta object.\n Valid input_type strings:\n 1. 's', 'seconds',\n 2. 'm', 'minutes',\n 3. 'h', 'hours',\n 4. 'd', 'days',\n 5. 'w', 'weeks'\n Returns:\n (str): timedelta object interpolated into a string following the given format.\n\n Examples:\n '{D:02}d {H:02}h {M:02}m {S:02}s' --> '05d 08h 04m 02s' (default)\n '{W}w {D}d {H}:{M:02}:{S:02}' --> '4w 5d 8:04:02'\n '{D:2}d {H:2}:{M:02}:{S:02}' --> ' 5d 8:04:02'\n '{H}h {S}s' --> '72h 800s'"}
{"_id": "q_7049", "text": "Return the transformed version of the course description.\n\n We choose one value out of the course's full description, short description, and title\n depending on availability and length limits."}
{"_id": "q_7050", "text": "Delete the file if it already exist and returns the enterprise customer logo image path.\n\n Arguments:\n instance (:class:`.EnterpriseCustomerBrandingConfiguration`): EnterpriseCustomerBrandingConfiguration object\n filename (str): file to upload\n\n Returns:\n path: path of image file e.g. enterprise/branding/<model.id>/<model_id>_logo.<ext>.lower()"}
{"_id": "q_7051", "text": "Return link by email."}
{"_id": "q_7052", "text": "Unlink user email from Enterprise Customer.\n\n If :class:`django.contrib.auth.models.User` instance with specified email does not exist,\n :class:`.PendingEnterpriseCustomerUser` instance is deleted instead.\n\n Raises EnterpriseCustomerUser.DoesNotExist if instance of :class:`django.contrib.auth.models.User` with\n specified email exists and corresponding :class:`.EnterpriseCustomerUser` instance does not.\n\n Raises PendingEnterpriseCustomerUser.DoesNotExist exception if instance of\n :class:`django.contrib.auth.models.User` with specified email exists and corresponding\n :class:`.PendingEnterpriseCustomerUser` instance does not."}
{"_id": "q_7053", "text": "Get the data sharing consent object associated with a certain user, enterprise customer, and other scope.\n\n :param username: The user that grants consent\n :param enterprise_customer_uuid: The consent requester\n :param course_id (optional): A course ID to which consent may be related\n :param program_uuid (optional): A program to which consent may be related\n :return: The data sharing consent object, or None if the enterprise customer for the given UUID does not exist."}
{"_id": "q_7054", "text": "Get the data sharing consent object associated with a certain user of a customer for a course.\n\n :param username: The user that grants consent.\n :param course_id: The course for which consent is granted.\n :param enterprise_customer_uuid: The consent requester.\n :return: The data sharing consent object"}
{"_id": "q_7055", "text": "Send xAPI statement for course enrollment.\n\n Arguments:\n lrs_configuration (XAPILRSConfiguration): XAPILRSConfiguration instance where to send statements.\n course_enrollment (CourseEnrollment): Course enrollment object."}
{"_id": "q_7056", "text": "Send xAPI statement for course completion.\n\n Arguments:\n lrs_configuration (XAPILRSConfiguration): XAPILRSConfiguration instance where to send statements.\n user (User): Django User object.\n course_overview (CourseOverview): Course over view object containing course details.\n course_grade (CourseGrade): course grade object."}
{"_id": "q_7057", "text": "Return the exported and transformed content metadata as a dictionary."}
{"_id": "q_7058", "text": "Transform the provided content metadata item to the schema expected by the integrated channel."}
{"_id": "q_7059", "text": "Perform other one-time initialization steps."}
{"_id": "q_7060", "text": "Get actor for the statement."}
{"_id": "q_7061", "text": "Parse csv file and return a stream of dictionaries representing each row.\n\n First line of CSV file must contain column headers.\n\n Arguments:\n file_stream: input file\n expected_columns (set[unicode]): columns that are expected to be present\n\n Yields:\n dict: CSV line parsed into a dictionary."}
{"_id": "q_7062", "text": "Validate email to be linked to Enterprise Customer.\n\n Performs two checks:\n * Checks that email is valid\n * Checks that it is not already linked to any Enterprise Customer\n\n Arguments:\n email (str): user email to link\n raw_email (str): raw value as it was passed by user - used in error message.\n message_template (str): Validation error template string.\n ignore_existing (bool): If True, skip the check for an existing Enterprise Customer\n\n Raises:\n ValidationError: if email is invalid or already linked to Enterprise Customer.\n\n Returns:\n bool: Whether or not there is an existing record with the same email address."}
{"_id": "q_7063", "text": "Return course runs from program data.\n\n Arguments:\n program(dict): Program data from Course Catalog API\n\n Returns:\n set: course runs in given program"}
{"_id": "q_7064", "text": "Get the earliest date that one of the courses in the program was available.\n For the sake of emails to new learners, we treat this as the program start date.\n\n Arguments:\n program (dict): Program data from Course Catalog API\n\n Returns:\n datetime.datetime: The date and time at which the first course started"}
{"_id": "q_7065", "text": "Returns paginated list.\n\n Arguments:\n object_list (QuerySet): A list of records to be paginated.\n page (int): Current page number.\n page_size (int): Number of records displayed in each paginated set.\n show_all (bool): Whether to show all records.\n\n Adapted from django/contrib/admin/templatetags/admin_list.py\n https://github.com/django/django/blob/1.11.1/django/contrib/admin/templatetags/admin_list.py#L50"}
{"_id": "q_7066", "text": "Clean email form field\n\n Returns:\n str: the cleaned value, converted to an email address (or an empty string)"}
{"_id": "q_7067", "text": "Clean program.\n\n Try obtaining program treating form value as program UUID or title.\n\n Returns:\n dict: Program information if program found"}
{"_id": "q_7068", "text": "Clean the notify_on_enrollment field."}
{"_id": "q_7069", "text": "Verify that the selected mode is valid for the given course."}
{"_id": "q_7070", "text": "Verify that the selected mode is available for the program and all courses in the program."}
{"_id": "q_7071", "text": "Retrieve a list of catalog ID and name pairs.\n\n Once retrieved, these name pairs can be used directly as a value\n for the `choices` argument to a ChoiceField."}
{"_id": "q_7072", "text": "Clean form fields prior to database entry.\n\n In this case, the major cleaning operation is substituting a None value for a blank\n value in the Catalog field."}
{"_id": "q_7073", "text": "Final validations of model fields.\n\n 1. Validate that selected site for enterprise customer matches with the selected identity provider's site."}
{"_id": "q_7074", "text": "Ensure that all necessary resources to render the view are present."}
{"_id": "q_7075", "text": "Get the set of variables that are needed by default across views."}
{"_id": "q_7076", "text": "Return a dict having course or program specific keys for data sharing consent page."}
{"_id": "q_7077", "text": "Process the above form."}
{"_id": "q_7078", "text": "Handle the enrollment of enterprise learner in the provided course.\n\n Based on `enterprise_uuid` in URL, the view will decide which\n enterprise customer's course enrollment record should be created.\n\n Depending on the value of query parameter `course_mode` then learner\n will be either redirected to LMS dashboard for audit modes or\n redirected to ecommerce basket flow for payment of premium modes."}
{"_id": "q_7079", "text": "Set the final discounted price on each premium mode."}
{"_id": "q_7080", "text": "Return the available course modes for the course run.\n\n The provided EnterpriseCustomerCatalog is used to filter and order the\n course modes returned using the EnterpriseCustomerCatalog's\n field \"enabled_course_modes\"."}
{"_id": "q_7081", "text": "Extend a course with more details needed for the program landing page.\n\n In particular, we add the following:\n\n * `course_image_uri`\n * `course_title`\n * `course_level_type`\n * `course_short_description`\n * `course_full_description`\n * `course_effort`\n * `expected_learning_items`\n * `staff`"}
{"_id": "q_7082", "text": "The user is requesting a course; we need to translate that into the current course run.\n\n :param user:\n :param enterprise_customer:\n :param course_key:\n :return: course_run_id"}
{"_id": "q_7083", "text": "Return whether a request is eligible for direct audit enrollment for a particular enterprise customer.\n\n 'resource_id' can be either course_run_id or program_uuid.\n We check for the following criteria:\n - The `audit` query parameter.\n - The user's being routed to the course enrollment landing page.\n - The customer's catalog contains the course in question.\n - The audit track is an available mode for the course."}
{"_id": "q_7084", "text": "Redirects to the appropriate view depending on where the user came from."}
{"_id": "q_7085", "text": "Run some custom GET logic for Enterprise workflows before routing the user through existing views.\n\n In particular, before routing to existing views:\n - If the requested resource is a course, find the current course run for that course,\n and make that course run the requested resource instead.\n - Look to see whether a request is eligible for direct audit enrollment, and if so, directly enroll the user."}
{"_id": "q_7086", "text": "Run some custom POST logic for Enterprise workflows before routing the user through existing views."}
{"_id": "q_7087", "text": "Task to send learner data to each linked integrated channel.\n\n Arguments:\n username (str): The username of the User to be used for making API requests for learner data.\n channel_code (str): Capitalized identifier for the integrated channel\n channel_pk (str): Primary key for identifying integrated channel"}
{"_id": "q_7088", "text": "Task to unlink inactive learners of provided integrated channel.\n\n Arguments:\n channel_code (str): Capitalized identifier for the integrated channel\n channel_pk (str): Primary key for identifying integrated channel"}
{"_id": "q_7089", "text": "Handle User model changes - checks if pending enterprise customer user record exists and upgrades it to actual link.\n\n If there are pending enrollments attached to the PendingEnterpriseCustomerUser, then this signal also takes the\n newly-created users and enrolls them in the relevant courses."}
{"_id": "q_7090", "text": "Set default value for `EnterpriseCustomerCatalog.content_filter` if not already set."}
{"_id": "q_7091", "text": "Assign an enterprise learner role to EnterpriseCustomerUser whenever a new record is created."}
{"_id": "q_7092", "text": "Ensure at least one of the specified query parameters is included in the request.\n\n This decorator checks for the existence of at least one of the specified query\n parameters and passes the values as function parameters to the decorated view.\n If none of the specified query parameters are included in the request, a\n ValidationError is raised.\n\n Usage::\n @require_at_least_one_query_parameter('program_uuids', 'course_run_ids')\n def my_view(request, program_uuids, course_run_ids):\n # Some functionality ..."}
{"_id": "q_7093", "text": "Assigns enterprise role to users."}
{"_id": "q_7094", "text": "Entry point for management command execution."}
{"_id": "q_7095", "text": "Perform the linking of user in the process of logging to the Enterprise Customer.\n\n Args:\n backend: The class handling the SSO interaction (SAML, OAuth, etc)\n user: The user object in the process of being logged in with\n **kwargs: Any remaining pipeline variables"}
{"_id": "q_7096", "text": "Find the LMS user from the LMS model `UserSocialAuth`.\n\n Arguments:\n tpa_provider (third_party_auth.provider): third party auth provider object\n tpa_username (str): Username returned by the third party auth"}
{"_id": "q_7097", "text": "Instantiate a new session object for use in connecting with SAP SuccessFactors"}
{"_id": "q_7098", "text": "Make a post request using the session object to a SuccessFactors endpoint.\n\n Args:\n url (str): The url to post to.\n payload (str): The json encoded payload to post."}
{"_id": "q_7099", "text": "Make recursive GET calls to traverse the paginated API response for search students."}
{"_id": "q_7100", "text": "Filter only for the user's ID if non-staff."}
{"_id": "q_7101", "text": "Send a completion status call to the integrated channel using the client.\n\n Args:\n payload: The learner completion data payload to send to the integrated channel.\n kwargs: Contains integrated channel-specific information for customized transmission variables.\n - app_label: The app label of the integrated channel for whom to store learner data records for.\n - model_name: The name of the specific learner data record model to use.\n - remote_user_id: The remote ID field name of the learner on the audit model."}
{"_id": "q_7102", "text": "Validate a particular image extension."}
{"_id": "q_7103", "text": "Get the enterprise customer id given an enterprise customer catalog id."}
{"_id": "q_7104", "text": "Run sphinx-apidoc after Sphinx initialization.\n\n Read the Docs won't run tox or custom shell commands, so we need this to\n avoid checking in the generated reStructuredText files."}
{"_id": "q_7105", "text": "Return the enterprise customer requested for the given uuid, or None if not found.\n\n Raises CommandError if uuid is invalid."}
{"_id": "q_7106", "text": "Assemble a list of integrated channel classes to transmit to.\n\n If a valid channel type was provided, use it.\n\n Otherwise, use all the available channel types."}
{"_id": "q_7107", "text": "Get the contents of a file listing the requirements"}
{"_id": "q_7108", "text": "Iterate over each learner data record and transmit it to the integrated channel."}
{"_id": "q_7109", "text": "Transmit content metadata to integrated channel."}
{"_id": "q_7110", "text": "Return a DegreedLearnerDataTransmissionAudit with the given enrollment and course completion data.\n\n If completed_date is None, then course completion has not been met.\n\n If no remote ID can be found, return None."}
{"_id": "q_7111", "text": "Render the given template with the stock data."}
{"_id": "q_7112", "text": "Build common admin context."}
{"_id": "q_7113", "text": "Handle GET request - render \"Transmit courses metadata\" form.\n\n Arguments:\n request (django.http.request.HttpRequest): Request instance\n enterprise_customer_uuid (str): Enterprise Customer UUID\n\n Returns:\n django.http.response.HttpResponse: HttpResponse"}
{"_id": "q_7114", "text": "Get the list of PendingEnterpriseCustomerUsers we want to render.\n\n Args:\n search_keyword (str): The keyword to search for in pending users' email addresses.\n customer_uuid (str): A unique identifier to filter down to only pending users\n linked to a particular EnterpriseCustomer."}
{"_id": "q_7115", "text": "Link single user by email or username.\n\n Arguments:\n enterprise_customer (EnterpriseCustomer): learners will be linked to this Enterprise Customer instance\n manage_learners_form (ManageLearnersForm): bound ManageLearners form instance"}
{"_id": "q_7116", "text": "Bulk link users by email.\n\n Arguments:\n enterprise_customer (EnterpriseCustomer): learners will be linked to this Enterprise Customer instance\n manage_learners_form (ManageLearnersForm): bound ManageLearners form instance\n request (django.http.request.HttpRequest): HTTP Request instance\n email_list (iterable): A list of pre-processed email addresses to handle using the form"}
{"_id": "q_7117", "text": "Query the enrollment API and determine if a learner is enrolled in a given course run track.\n\n Args:\n user: The user whose enrollment needs to be checked\n course_mode: The mode with which the enrollment should be checked\n course_id: course id of the course where enrollment should be checked.\n\n Returns:\n Boolean: Whether or not enrollment exists"}
{"_id": "q_7118", "text": "Accept a list of emails, and separate them into users that exist on OpenEdX and users who don't.\n\n Args:\n emails: An iterable of email addresses to split between existing and nonexisting\n\n Returns:\n users: Queryset of users who exist in the OpenEdX platform and who were in the list of email addresses\n missing_emails: List of unique emails which were in the original list, but do not yet exist as users"}
{"_id": "q_7119", "text": "Enroll existing users in all courses in a program, and create pending enrollments for nonexisting users.\n\n Args:\n enterprise_customer: The EnterpriseCustomer which is sponsoring the enrollment\n program_details: The details of the program in which we're enrolling\n course_mode (str): The mode with which we're enrolling in the program\n emails: An iterable of email addresses which need to be enrolled\n\n Returns:\n successes: A list of users who were successfully enrolled in all courses of the program\n pending: A list of PendingEnterpriseCustomerUsers who were successfully linked and had\n pending enrollments created for them in the database\n failures: A list of users who could not be enrolled in the program"}
{"_id": "q_7120", "text": "Enroll existing users in a course, and create a pending enrollment for nonexisting users.\n\n Args:\n enterprise_customer: The EnterpriseCustomer which is sponsoring the enrollment\n course_id (str): The unique identifier of the course in which we're enrolling\n course_mode (str): The mode with which we're enrolling in the course\n emails: An iterable of email addresses which need to be enrolled\n\n Returns:\n successes: A list of users who were successfully enrolled in the course\n pending: A list of PendingEnterpriseCustomerUsers who were successfully linked and had\n pending enrollments created for them in the database\n failures: A list of users who could not be enrolled in the course"}
{"_id": "q_7121", "text": "Deduplicate any outgoing message requests, and send the remainder.\n\n Args:\n http_request: The HTTP request in whose response we want to embed the messages\n message_requests: A list of undeduplicated messages in the form of tuples of message type\n and text- for example, ('error', 'Something went wrong')"}
{"_id": "q_7122", "text": "Create message for the users who were not able to be enrolled in a course or program.\n\n Args:\n users: An iterable of users who were not successfully enrolled\n enrolled_in (str): A string identifier for the course or program with which enrollment was attempted\n\n Returns:\n tuple: A 2-tuple containing a message type and message text"}
{"_id": "q_7123", "text": "Enroll the users with the given email addresses to the courses specified, either specifically or by program.\n\n Args:\n cls (type): The EnterpriseCustomerManageLearnersView class itself\n request: The HTTP request the enrollment is being created by\n enterprise_customer: The instance of EnterpriseCustomer whose attached users we're enrolling\n emails: An iterable of strings containing email addresses to enroll in a course\n mode: The enrollment mode the users will be enrolled in the course with\n course_id: The ID of the course in which we want to enroll\n program_details: Details about a program in which we want to enroll\n notify: Whether to notify (by email) the users that have been enrolled"}
{"_id": "q_7124", "text": "Handle DELETE request - handle unlinking learner.\n\n Arguments:\n request (django.http.request.HttpRequest): Request instance\n customer_uuid (str): Enterprise Customer UUID\n\n Returns:\n django.http.response.HttpResponse: HttpResponse"}
{"_id": "q_7125", "text": "Build a ProxyDataSharingConsent using the details of the received consent records."}
{"_id": "q_7126", "text": "Commit a real ``DataSharingConsent`` object to the database, mirroring current field settings.\n\n :return: A ``DataSharingConsent`` object if validation is successful, otherwise ``None``."}
{"_id": "q_7127", "text": "Get course completions via PersistentCourseGrade for all the learners of given enterprise customer.\n\n Arguments:\n enterprise_customer (EnterpriseCustomer): Include Course enrollments for learners\n of this enterprise customer.\n days (int): Include course enrollment of this number of days.\n\n Returns:\n (list): A list of PersistentCourseGrade objects."}
{"_id": "q_7128", "text": "Prefetch Users from the list of user_ids present in the persistent_course_grades.\n\n Arguments:\n persistent_course_grades (list): A list of PersistentCourseGrade.\n\n Returns:\n (dict): A dictionary containing user_id to user mapping."}
{"_id": "q_7129", "text": "Get Identity Provider with given id.\n\n Return:\n Instance of ProviderConfig or None."}
{"_id": "q_7130", "text": "Get template of catalog admin url.\n\n URL template will contain a placeholder '{catalog_id}' for catalog id.\n Arguments:\n mode e.g. change/add.\n\n Returns:\n A string containing template for catalog url.\n\n Example:\n >>> get_catalog_admin_url_template('change')\n \"http://localhost:18381/admin/catalogs/catalog/{catalog_id}/change/\""}
{"_id": "q_7131", "text": "Create HTML and plaintext message bodies for a notification.\n\n We receive a context with data we can use to render, as well as an optional site\n template configuration - if we don't get a template configuration, we'll use the\n standard, built-in template.\n\n Arguments:\n template_context (dict): A set of data to render\n template_configuration: A database-backed object with templates\n stored that can be used to render a notification."}
{"_id": "q_7132", "text": "Get a subject line for a notification email.\n\n The method is designed to fail in a \"smart\" way; if we can't render a\n database-backed subject line template, then we'll fall back to a template\n saved in the Django settings; if we can't render _that_ one, then we'll\n fall through to a friendly string written into the code.\n\n One example of a failure case in which we want to fall back to a stock template\n would be if a site admin entered a subject line string that contained a template\n tag that wasn't available, causing a KeyError to be raised.\n\n Arguments:\n course_name (str): Course name to be rendered into the string\n template_configuration: A database-backed object with a stored subject line template"}
{"_id": "q_7133", "text": "Send an email notifying a user about their enrollment in a course.\n\n Arguments:\n user: Either a User object or a PendingEnterpriseCustomerUser that we can use\n to get details for the email\n enrolled_in (dict): The dictionary contains details of the enrollable object\n (either course or program) that the user enrolled in. This MUST contain\n a `name` key, and MAY contain the other following keys:\n - url: A human-friendly link to the enrollable's home page\n - type: Either `course` or `program` at present\n - branding: A special name for what the enrollable \"is\"; for example,\n \"MicroMasters\" would be the branding for a \"MicroMasters Program\"\n - start: A datetime object indicating when the enrollable will be available.\n enterprise_customer: The EnterpriseCustomer that the enrollment was created using.\n email_connection: An existing Django email connection that can be used without\n creating a new connection for each individual message"}
{"_id": "q_7134", "text": "Get the ``EnterpriseCustomer`` instance associated with ``uuid``.\n\n :param uuid: The universally unique ID of the enterprise customer.\n :return: The ``EnterpriseCustomer`` instance, or ``None`` if it doesn't exist."}
{"_id": "q_7135", "text": "Return track selection url for the given course.\n\n Arguments:\n course_run (dict): A dictionary containing course run metadata.\n query_parameters (dict): A dictionary containing query parameters to be added to course selection url.\n\n Raises:\n (KeyError): Raised when course run dict does not have 'key' key.\n\n Returns:\n (str): Course track selection url."}
{"_id": "q_7136", "text": "Given an EnterpriseCustomer UUID, return the corresponding EnterpriseCustomer or raise a 404.\n\n Arguments:\n enterprise_uuid (str): The UUID (in string form) of the EnterpriseCustomer to fetch.\n\n Returns:\n (EnterpriseCustomer): The EnterpriseCustomer given the UUID."}
{"_id": "q_7137", "text": "Get MD5 encoded cache key for given arguments.\n\n Here is the format of key before MD5 hashing.\n key1:value1__key2:value2 ...\n\n Example:\n >>> get_cache_key(site_domain=\"example.com\", resource=\"enterprise\")\n # Here is key format for above call\n # \"site_domain:example.com__resource:enterprise\"\n a54349175618ff1659dee0978e3149ca\n\n Arguments:\n **kwargs: Key word arguments that need to be present in cache key.\n\n Returns:\n An MD5 encoded key uniquely identified by the key word arguments."}
{"_id": "q_7138", "text": "Traverse a paginated API response.\n\n Extracts and concatenates \"results\" (list of dict) returned by DRF-powered\n APIs.\n\n Arguments:\n response (Dict): Current response dict from service API\n endpoint (slumber Resource object): slumber Resource object from edx-rest-api-client\n\n Returns:\n list of dict."}
{"_id": "q_7139", "text": "Return grammatically correct, translated text based off of a minimum and maximum value.\n\n Example:\n min = 1, max = 1, singular = '{} hour required for this course', plural = '{} hours required for this course'\n output = '1 hour required for this course'\n\n min = 2, max = 2, singular = '{} hour required for this course', plural = '{} hours required for this course'\n output = '2 hours required for this course'\n\n min = 2, max = 4, range_text = '{}-{} hours required for this course'\n output = '2-4 hours required for this course'\n\n min = None, max = 2, plural = '{} hours required for this course'\n output = '2 hours required for this course'\n\n Expects ``range_text`` to already have a translation function called on it.\n\n Returns:\n ``None`` if both of the input values are ``None``.\n ``singular`` formatted if both are equal or one of the inputs, but not both, are ``None``, and the value is 1.\n ``plural`` formatted if both are equal or one of its inputs, but not both, are ``None``, and the value is > 1.\n ``range_text`` formatted if min != max and both are valid values."}
{"_id": "q_7140", "text": "Format the price to have the appropriate currency and digits.\n\n :param price: The price amount.\n :param currency: The currency for the price.\n :return: A formatted price string, i.e. '$10', '$10.52'."}
{"_id": "q_7141", "text": "Get the site configuration value for a key, unless a site configuration does not exist for that site.\n\n Useful for testing when no Site Configuration exists in edx-enterprise or if a site in LMS doesn't have\n a configuration tied to it.\n\n :param site: A Site model object\n :param key: The name of the value to retrieve\n :param default: The default response if there's no key in site config or settings\n :return: The value located at that key in the site configuration or settings file."}
{"_id": "q_7142", "text": "Get a configuration value, or fall back to ``default`` if it doesn't exist.\n\n Also takes a `type` argument to guide which particular upstream method to use when trying to retrieve a value.\n Current types include:\n - `url` to specifically get a URL."}
{"_id": "q_7143", "text": "Emit a track event for enterprise course enrollment."}
{"_id": "q_7144", "text": "Return true if the course run is enrollable, false otherwise.\n\n We look for the following criteria:\n - end is greater than now OR null\n - enrollment_start is less than now OR null\n - enrollment_end is greater than now OR null"}
{"_id": "q_7145", "text": "Return true if the course run has a verified seat with an unexpired upgrade deadline, false otherwise."}
{"_id": "q_7146", "text": "Return course run with start date closest to now."}
{"_id": "q_7147", "text": "Return the current course run on the following conditions.\n\n - If user has active course runs (already enrolled) then return course run with closest start date\n Otherwise it will check the following logic:\n - Course run is enrollable (see is_course_run_enrollable)\n - Course run has a verified seat and the upgrade deadline has not expired.\n - Course run start date is closer to now than any other enrollable/upgradeable course runs.\n - If no enrollable/upgradeable course runs, return course run with most recent start date."}
{"_id": "q_7148", "text": "LRS client instance to be used for sending statements."}
{"_id": "q_7149", "text": "Save xAPI statement.\n\n Arguments:\n statement (EnterpriseStatement): xAPI Statement to send to the LRS.\n\n Raises:\n ClientError: If xAPI statement fails to save."}
{"_id": "q_7150", "text": "Check whether the request user has implicit access to the `ENTERPRISE_DASHBOARD_ADMIN_ROLE` feature role.\n\n Returns:\n boolean: whether the request user has access or not"}
{"_id": "q_7151", "text": "Check whether the request user has implicit access to the `ENTERPRISE_CATALOG_ADMIN_ROLE` feature role.\n\n Returns:\n boolean: whether the request user has access or not"}
{"_id": "q_7152", "text": "Check whether the request user has implicit access to the `ENTERPRISE_ENROLLMENT_API_ADMIN_ROLE` feature role.\n\n Returns:\n boolean: whether the request user has access or not"}
{"_id": "q_7153", "text": "The instance is an EnterpriseCustomer. Return e-commerce coupon URLs."}
{"_id": "q_7154", "text": "Return an export csv action.\n\n Arguments:\n description (string): action description\n fields ([string]): list of model fields to include\n header (bool): whether or not to output the column names as the first row"}
{"_id": "q_7155", "text": "Return the action method to clear the catalog ID for a EnterpriseCustomer."}
{"_id": "q_7156", "text": "Get information about maps of the robots.\n\n :return:"}
{"_id": "q_7157", "text": "Get information about robots connected to account.\n\n :return:"}
{"_id": "q_7158", "text": "Get information about persistent maps of the robots.\n\n :return:"}
{"_id": "q_7159", "text": "Calculates the distance between two points on earth."}
{"_id": "q_7160", "text": "Takes a graph and returns an adjacency list.\n\n Parameters\n ----------\n g : :any:`networkx.DiGraph`, :any:`networkx.Graph`, etc.\n Any object that networkx can turn into a\n :any:`DiGraph<networkx.DiGraph>`.\n return_dict_of_dict : bool (optional, default: ``True``)\n Specifies whether this function will return a dict of dicts\n or a dict of lists.\n\n Returns\n -------\n adj : dict\n An adjacency representation of graph as a dictionary of\n dictionaries, where a key is the vertex index for a vertex\n ``v`` and the values are :class:`dicts<.dict>` with keys for\n the vertex index and values as edge properties.\n\n Examples\n --------\n >>> import queueing_tool as qt\n >>> import networkx as nx\n >>> adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}\n >>> g = nx.DiGraph(adj)\n >>> qt.graph2dict(g, return_dict_of_dict=True)\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {1: {}, 2: {}},\n 1: {0: {}},\n 2: {0: {}, 3: {}},\n 3: {2: {}}}\n >>> qt.graph2dict(g, return_dict_of_dict=False)\n {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}"}
{"_id": "q_7161", "text": "Takes a dictionary based representation of an adjacency list\n and returns a dict of dicts based representation."}
{"_id": "q_7162", "text": "Takes an adjacency list, dict, or matrix and returns a graph.\n\n The purpose of this function is to take an adjacency list (or matrix)\n and return a :class:`.QueueNetworkDiGraph` that can be used with a\n :class:`.QueueNetwork` instance. The Graph returned has the\n ``edge_type`` edge property set for each edge. Note that the graph may\n be altered.\n\n Parameters\n ----------\n adjacency : dict or :class:`~numpy.ndarray`\n An adjacency list as either a dict, or an adjacency matrix.\n adjust : int ``{1, 2}`` (optional, default: 1)\n Specifies what to do when the graph has terminal vertices\n (nodes with no out-edges). Note that if ``adjust`` is not 2\n then it is assumed to be 1. There are two choices:\n\n * ``adjust = 1``: A loop is added to each terminal node in the\n graph, and the ``edge_type`` of that loop is set to 0.\n * ``adjust = 2``: All edges leading to terminal nodes have\n their ``edge_type`` set to 0.\n\n **kwargs :\n Unused.\n\n Returns\n -------\n out : :any:`networkx.DiGraph`\n A directed graph with the ``edge_type`` edge property.\n\n Raises\n ------\n TypeError\n Is raised if ``adjacency`` is not a dict or\n :class:`~numpy.ndarray`.\n\n Examples\n --------\n If terminal nodes are such that all in-edges have edge type ``0``\n then nothing is changed. However, if a node is a terminal node then\n a loop is added with edge type 0.\n\n >>> import queueing_tool as qt\n >>> adj = {\n ... 0: {1: {}},\n ... 1: {2: {},\n ... 3: {}},\n ... 3: {0: {}}}\n >>> eTy = {0: {1: 1}, 1: {2: 2, 3: 4}, 3: {0: 1}}\n >>> # A loop will be added to vertex 2\n >>> g = qt.adjacency2graph(adj, edge_type=eTy)\n >>> ans = qt.graph2dict(g)\n >>> sorted(ans.items()) # doctest: +NORMALIZE_WHITESPACE\n [(0, {1: {'edge_type': 1}}),\n (1, {2: {'edge_type': 2}, 3: {'edge_type': 4}}),\n (2, {2: {'edge_type': 0}}),\n (3, {0: {'edge_type': 1}})]\n\n You can use a dict of lists to represent the adjacency list.\n\n >>> adj = {0 : [1], 1: [2, 3], 3: [0]}\n >>> g = qt.adjacency2graph(adj, edge_type=eTy)\n >>> ans = qt.graph2dict(g)\n >>> sorted(ans.items()) # doctest: +NORMALIZE_WHITESPACE\n [(0, {1: {'edge_type': 1}}),\n (1, {2: {'edge_type': 2}, 3: {'edge_type': 4}}),\n (2, {2: {'edge_type': 0}}),\n (3, {0: {'edge_type': 1}})]\n\n Alternatively, you could have this function adjust the edges that\n lead to terminal vertices by changing their edge type to 0:\n\n >>> # The graph is unaltered\n >>> g = qt.adjacency2graph(adj, edge_type=eTy, adjust=2)\n >>> ans = qt.graph2dict(g)\n >>> sorted(ans.items()) # doctest: +NORMALIZE_WHITESPACE\n [(0, {1: {'edge_type': 1}}),\n (1, {2: {'edge_type': 0}, 3: {'edge_type': 4}}),\n (2, {}),\n (3, {0: {'edge_type': 1}})]"}
{"_id": "q_7163", "text": "Returns all edges with the specified edge type.\n\n Parameters\n ----------\n edge_type : int\n An integer specifying what type of edges to return.\n\n Returns\n -------\n out : list of 2-tuples\n A list of 2-tuples representing the edges in the graph\n with the specified edge type.\n\n Examples\n --------\n Let's get type 2 edges from the following graph\n\n >>> import queueing_tool as qt\n >>> adjacency = {\n ... 0: {1: {'edge_type': 2}},\n ... 1: {2: {'edge_type': 1},\n ... 3: {'edge_type': 4}},\n ... 2: {0: {'edge_type': 2}},\n ... 3: {3: {'edge_type': 0}}\n ... }\n >>> G = qt.QueueNetworkDiGraph(adjacency)\n >>> ans = G.get_edge_type(2)\n >>> ans.sort()\n >>> ans\n [(0, 1), (2, 0)]"}
{"_id": "q_7164", "text": "Returns the arguments used when plotting.\n\n Takes any keyword arguments for\n :class:`~matplotlib.collections.LineCollection` and\n :meth:`~matplotlib.axes.Axes.scatter` and returns two\n dictionaries with all the defaults set.\n\n Parameters\n ----------\n line_kwargs : dict (optional, default: ``None``)\n Any keyword arguments accepted by\n :class:`~matplotlib.collections.LineCollection`.\n scatter_kwargs : dict (optional, default: ``None``)\n Any keyword arguments accepted by\n :meth:`~matplotlib.axes.Axes.scatter`.\n\n Returns\n -------\n tuple\n A 2-tuple of dicts. The first entry is the keyword\n arguments for\n :class:`~matplotlib.collections.LineCollection` and the\n second is the keyword args for\n :meth:`~matplotlib.axes.Axes.scatter`.\n\n Notes\n -----\n If a specific keyword argument is not passed then the defaults\n are used."}
{"_id": "q_7165", "text": "A function that returns the arrival time of the next arrival for\n a Poisson random measure.\n\n Parameters\n ----------\n t : float\n The start time from which to simulate the next arrival time.\n rate : function\n The *intensity function* for the measure, where ``rate(t)`` is\n the expected arrival rate at time ``t``.\n rate_max : float\n The maximum value of the ``rate`` function.\n\n Returns\n -------\n out : float\n The time of the next arrival.\n\n Notes\n -----\n This function returns the time of the next arrival, where the\n distribution of the number of arrivals between times :math:`t` and\n :math:`t+s` is Poisson with mean\n\n .. math::\n\n \\int_{t}^{t+s} dx \\, r(x)\n\n where :math:`r(t)` is the supplied ``rate`` function. This function\n can only simulate processes that have bounded intensity functions.\n See chapter 6 of [3]_ for more on the mathematics behind Poisson\n random measures; the book's publisher, Springer, has that chapter\n available online for free at (`pdf`_\\).\n\n A Poisson random measure is sometimes called a non-homogeneous\n Poisson process. A Poisson process is a special type of Poisson\n random measure.\n\n .. _pdf: http://www.springer.com/cda/content/document/\\\n cda_downloaddocument/9780387878584-c1.pdf\n\n Examples\n --------\n Suppose you wanted to model the arrival process as a Poisson\n random measure with rate function :math:`r(t) = 2 + \\sin( 2\\pi t)`.\n Then you could do so as follows:\n\n >>> import queueing_tool as qt\n >>> import numpy as np\n >>> np.random.seed(10)\n >>> rate = lambda t: 2 + np.sin(2 * np.pi * t)\n >>> arr_f = lambda t: qt.poisson_random_measure(t, rate, 3)\n >>> arr_f(1) # doctest: +ELLIPSIS\n 1.491...\n\n References\n ----------\n .. [3] Cinlar, Erhan. *Probability and stochastics*. Graduate Texts in\\\n Mathematics. Vol. 261. Springer, New York, 2011.\\\n :doi:`10.1007/978-0-387-87859-1`"}
{"_id": "q_7166", "text": "Returns a color for the queue.\n\n Parameters\n ----------\n which : int (optional, default: ``0``)\n Specifies the type of color to return.\n\n Returns\n -------\n color : list\n Returns a RGBA color that is represented as a list with 4\n entries where each entry can be any floating point number\n between 0 and 1.\n\n * If ``which`` is 1 then it returns the color of the edge\n as if it were a self loop. This is specified in\n ``colors['edge_loop_color']``.\n * If ``which`` is 2 then it returns the color of the vertex\n pen color (defined as color/vertex_color in\n :meth:`.QueueNetworkDiGraph.graph_draw`). This is\n specified in ``colors['vertex_color']``.\n * If ``which`` is anything else, then it returns a\n shade of the edge that is proportional to the number of\n agents in the system -- which includes those being\n served and those waiting to be served. More agents\n correspond to darker edge colors. Uses\n ``colors['vertex_fill_color']`` if the queue sits on a\n loop, and ``colors['edge_color']`` otherwise."}
{"_id": "q_7167", "text": "Returns an integer representing whether the next event is\n an arrival, a departure, or nothing.\n\n Returns\n -------\n out : int\n An integer representing whether the next event is an\n arrival or a departure: ``1`` corresponds to an arrival,\n ``2`` corresponds to a departure, and ``0`` corresponds to\n nothing scheduled to occur."}
{"_id": "q_7168", "text": "Resets the queue to its initial state.\n\n The attributes ``t``, ``num_events``, ``num_agents`` are set to\n zero, :meth:`.reset_colors` is called, and the\n :meth:`.QueueServer.clear` method is called for each queue in\n the network.\n\n Notes\n -----\n ``QueueNetwork`` must be re-initialized before any simulations\n can run."}
{"_id": "q_7169", "text": "Clears data from all queues.\n\n If none of the parameters are given then every queue's data is\n cleared.\n\n Parameters\n ----------\n queues : int or an iterable of int (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` whose data will\n be cleared.\n edge : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues' data to clear. Must be\n either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types will have their data cleared."}
{"_id": "q_7170", "text": "Returns a deep copy of itself."}
{"_id": "q_7171", "text": "Draws the network. The coloring of the network corresponds\n to the number of agents at each queue.\n\n Parameters\n ----------\n update_colors : ``bool`` (optional, default: ``True``).\n Specifies whether all the colors are updated.\n line_kwargs : dict (optional, default: None)\n Any keyword arguments accepted by\n :class:`~matplotlib.collections.LineCollection`\n scatter_kwargs : dict (optional, default: None)\n Any keyword arguments accepted by\n :meth:`~matplotlib.axes.Axes.scatter`.\n bgcolor : list (optional, keyword only)\n A list with 4 floats representing a RGBA color. The\n default is defined in ``self.colors['bgcolor']``.\n figsize : tuple (optional, keyword only, default: ``(7, 7)``)\n The width and height of the canvas in inches.\n **kwargs\n Any parameters to pass to\n :meth:`.QueueNetworkDiGraph.draw_graph`.\n\n Notes\n -----\n This method relies heavily on\n :meth:`.QueueNetworkDiGraph.draw_graph`. Also, there is a\n parameter that sets the background color of the canvas, which\n is the ``bgcolor`` parameter.\n\n Examples\n --------\n To draw the current state of the network, call:\n\n >>> import queueing_tool as qt\n >>> g = qt.generate_pagerank_graph(100, seed=13)\n >>> net = qt.QueueNetwork(g, seed=13)\n >>> net.initialize(100)\n >>> net.simulate(1200)\n >>> net.draw() # doctest: +SKIP\n\n If you specify a file name and location, the drawing will be\n saved to disk. For example, to save the drawing to the current\n working directory do the following:\n\n >>> net.draw(fname=\"state.png\", scatter_kwargs={'s': 40}) # doctest: +SKIP\n\n .. figure:: current_state1.png\n :align: center\n\n The shade of each edge depicts how many agents are located at\n the corresponding queue. The shade of each vertex is determined\n by the total number of inbound agents. Although loops are not\n visible by default, the vertex that corresponds to a loop shows\n how many agents are in that loop.\n\n There are several additional parameters that can be passed --\n all :meth:`.QueueNetworkDiGraph.draw_graph` parameters are\n valid. For example, to show the edges as dashed lines do the\n following.\n\n >>> net.draw(line_kwargs={'linestyle': 'dashed'}) # doctest: +SKIP"}
{"_id": "q_7172", "text": "Gets data from queues and organizes it by agent.\n\n If none of the parameters are given then data from every\n :class:`.QueueServer` is retrieved.\n\n Parameters\n ----------\n queues : int or *array_like* (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` whose data will\n be retrieved.\n edge : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues to retrieve agent data\n from. Must be either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types to retrieve agent data from.\n return_header : bool (optional, default: False)\n Determines whether the column headers are returned.\n\n Returns\n -------\n dict\n Returns a ``dict`` where the keys are the\n :class:`Agent's<.Agent>` ``agent_id`` and the values are\n :class:`ndarrays<~numpy.ndarray>` for that\n :class:`Agent's<.Agent>` data. The columns of this array\n are as follows:\n\n * First: The arrival time of an agent.\n * Second: The service start time of an agent.\n * Third: The departure time of an agent.\n * Fourth: The length of the queue upon the agent's arrival.\n * Fifth: The total number of :class:`Agents<.Agent>` in the\n :class:`.QueueServer`.\n * Sixth: the :class:`QueueServer's<.QueueServer>` id\n (its edge index).\n\n headers : str (optional)\n A comma separated string of the column headers. Returns\n ``'arrival,service,departure,num_queued,num_total,q_id'``"}
{"_id": "q_7173", "text": "Prepares the ``QueueNetwork`` for simulation.\n\n Each :class:`.QueueServer` in the network starts inactive,\n which means they do not accept arrivals from outside the\n network, and they have no agents in their system. This method\n sets queues to active, which then allows agents to arrive from\n outside the network.\n\n Parameters\n ----------\n nActive : int (optional, default: ``1``)\n The number of queues to set as active. The queues are\n selected randomly.\n queues : int or *array_like* (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` to make active.\n edges : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues to make active. Must be\n either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types will be set active.\n\n Raises\n ------\n ValueError\n If ``queues``, ``edges``, and ``edge_type`` are all ``None``\n and ``nActive`` is an integer less than 1\n :exc:`~ValueError` is raised.\n TypeError\n If ``queues``, ``edges``, and ``edge_type`` are all ``None``\n and ``nActive`` is not an integer then a :exc:`~TypeError`\n is raised.\n QueueingToolError\n Raised if all the queues specified are\n :class:`NullQueues<.NullQueue>`.\n\n Notes\n -----\n :class:`NullQueues<.NullQueue>` cannot be activated, and are\n sifted out if they are specified. More specifically, every edge\n with edge type 0 is sifted out."}
{"_id": "q_7174", "text": "Returns whether the next event is an arrival or a departure\n and the queue the event is occurring at.\n\n Returns\n -------\n des : str\n Indicates whether the next event is an arrival, a\n departure, or nothing; returns ``'Arrival'``,\n ``'Departure'``, or ``'Nothing'``.\n edge : int or ``None``\n The edge index of the edge that this event will occur at.\n If there are no events then ``None`` is returned."}
{"_id": "q_7175", "text": "Change the routing transition probabilities for the\n network.\n\n Parameters\n ----------\n mat : dict or :class:`~numpy.ndarray`\n A transition routing matrix or transition dictionary. If\n passed a dictionary, the keys are source vertex indices and\n the values are dictionaries with target vertex indices\n as the keys and the probabilities of routing from the\n source to the target as the values.\n\n Raises\n ------\n ValueError\n A :exc:`.ValueError` is raised if: the keys in the dict\n don't match with a vertex index in the graph; or if the\n :class:`~numpy.ndarray` is passed with the wrong shape,\n must be (``num_vertices``, ``num_vertices``); or the values\n passed are not probabilities (for each vertex they are\n positive and sum to 1).\n TypeError\n A :exc:`.TypeError` is raised if mat is not a dict or\n :class:`~numpy.ndarray`.\n\n Examples\n --------\n The default transition matrix is every out edge being equally\n likely:\n\n >>> import queueing_tool as qt\n >>> adjacency = {\n ... 0: [2],\n ... 1: [2, 3],\n ... 2: [0, 1, 2, 4],\n ... 3: [1],\n ... 4: [2],\n ... }\n >>> g = qt.adjacency2graph(adjacency)\n >>> net = qt.QueueNetwork(g)\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {2: 1.0},\n 1: {2: 0.5, 3: 0.5},\n 2: {0: 0.25, 1: 0.25, 2: 0.25, 4: 0.25},\n 3: {1: 1.0},\n 4: {2: 1.0}}\n\n If you want to change only one vertex's transition\n probabilities, you can do so with the following:\n\n >>> net.set_transitions({1 : {2: 0.75, 3: 0.25}})\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {2: 1.0},\n 1: {2: 0.75, 3: 0.25},\n 2: {0: 0.25, 1: 0.25, 2: 0.25, 4: 0.25},\n 3: {1: 1.0},\n 4: {2: 1.0}}\n\n One can generate a transition matrix using\n :func:`.generate_transition_matrix`. You can change all\n transition probabilities with an :class:`~numpy.ndarray`:\n\n >>> mat = qt.generate_transition_matrix(g, seed=10)\n >>> net.set_transitions(mat)\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {2: 1.0},\n 1: {2: 0.962..., 3: 0.037...},\n 2: {0: 0.301..., 1: 0.353..., 2: 0.235..., 4: 0.108...},\n 3: {1: 1.0},\n 4: {2: 1.0}}\n\n See Also\n --------\n :meth:`.transitions` : Return the current routing\n probabilities.\n :func:`.generate_transition_matrix` : Generate a random routing\n matrix."}
{"_id": "q_7176", "text": "Draws the network, highlighting queues of a certain type.\n\n The colored vertices represent self loops of type ``edge_type``.\n Dark edges represent queues of type ``edge_type``.\n\n Parameters\n ----------\n edge_type : int\n The type of vertices and edges to be shown.\n **kwargs\n Any additional parameters to pass to :meth:`.draw`, and\n :meth:`.QueueNetworkDiGraph.draw_graph`\n\n Notes\n -----\n The colors are defined by the class attribute ``colors``. The\n relevant colors are ``vertex_active``, ``vertex_inactive``,\n ``vertex_highlight``, ``edge_active``, and ``edge_inactive``.\n\n Examples\n --------\n The following code highlights all edges with edge type ``2``.\n If the edge is a loop then the vertex is highlighted as well.\n In this case all edges with edge type ``2`` happen to be loops.\n\n >>> import queueing_tool as qt\n >>> g = qt.generate_pagerank_graph(100, seed=13)\n >>> net = qt.QueueNetwork(g, seed=13)\n >>> fname = 'edge_type_2.png'\n >>> net.show_type(2, fname=fname) # doctest: +SKIP\n\n .. figure:: edge_type_2-1.png\n :align: center"}
{"_id": "q_7177", "text": "Simulates the network forward.\n\n Simulates either a specific number of events or for a specified\n amount of simulation time.\n\n Parameters\n ----------\n n : int (optional, default: 1)\n The number of events to simulate. If ``t`` is not given\n then this parameter is used.\n t : float (optional)\n The amount of simulation time to simulate forward. If\n given, ``t`` is used instead of ``n``.\n\n Raises\n ------\n QueueingToolError\n Will raise a :exc:`.QueueingToolError` if the\n ``QueueNetwork`` has not been initialized. Call\n :meth:`.initialize` before calling this method.\n\n Examples\n --------\n Let ``net`` denote your instance of a ``QueueNetwork``. Before\n you simulate, you need to initialize the network, which allows\n arrivals from outside the network. To initialize with 2 (randomly\n chosen) edges accepting arrivals run:\n\n >>> import queueing_tool as qt\n >>> g = qt.generate_pagerank_graph(100, seed=50)\n >>> net = qt.QueueNetwork(g, seed=50)\n >>> net.initialize(2)\n\n To simulate the network for 50000 events run:\n\n >>> net.num_events\n 0\n >>> net.simulate(50000)\n >>> net.num_events\n 50000\n\n To simulate the network for at least 75 simulation time units\n run:\n\n >>> t0 = net.current_time\n >>> net.simulate(t=75)\n >>> t1 = net.current_time\n >>> t1 - t0 # doctest: +ELLIPSIS\n 75..."}
{"_id": "q_7178", "text": "Tells the queues to collect data on agents' arrival, service\n start, and departure times.\n\n If none of the parameters are given then every\n :class:`.QueueServer` will start collecting data.\n\n Parameters\n ----------\n queues : :any:`int`, *array_like* (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` that will start\n collecting data.\n edge : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues will collect data. Must be\n either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types will start collecting data."}
{"_id": "q_7179", "text": "Tells the queues to stop collecting data on agents.\n\n If none of the parameters are given then every\n :class:`.QueueServer` will stop collecting data.\n\n Parameters\n ----------\n queues : int, *array_like* (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` that will stop\n collecting data.\n edge : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues will stop collecting data.\n Must be either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types will stop collecting data."}
{"_id": "q_7180", "text": "Returns the routing probabilities for each vertex in the\n graph.\n\n Parameters\n ----------\n return_matrix : bool (optional, the default is ``True``)\n Specifies whether an :class:`~numpy.ndarray` is returned.\n If ``False``, a dict is returned instead.\n\n Returns\n -------\n out : a dict or :class:`~numpy.ndarray`\n The transition probabilities for each vertex in the graph.\n If ``out`` is an :class:`~numpy.ndarray`, then\n ``out[v, u]`` returns the probability of a transition from\n vertex ``v`` to vertex ``u``. If ``out`` is a dict\n then ``out_edge[v][u]`` is the probability of moving from\n vertex ``v`` to the vertex ``u``.\n\n Examples\n --------\n Let's change the routing probabilities:\n\n >>> import queueing_tool as qt\n >>> import networkx as nx\n >>> g = nx.sedgewick_maze_graph()\n >>> net = qt.QueueNetwork(g)\n\n Below is an adjacency list for the graph ``g``.\n\n >>> ans = qt.graph2dict(g, False)\n >>> {k: sorted(v) for k, v in ans.items()}\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: [2, 5, 7],\n 1: [7],\n 2: [0, 6],\n 3: [4, 5],\n 4: [3, 5, 6, 7],\n 5: [0, 3, 4],\n 6: [2, 4],\n 7: [0, 1, 4]}\n\n The default transition matrix is every out edge being equally\n likely:\n\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {2: 0.333..., 5: 0.333..., 7: 0.333...},\n 1: {7: 1.0},\n 2: {0: 0.5, 6: 0.5},\n 3: {4: 0.5, 5: 0.5},\n 4: {3: 0.25, 5: 0.25, 6: 0.25, 7: 0.25},\n 5: {0: 0.333..., 3: 0.333..., 4: 0.333...},\n 6: {2: 0.5, 4: 0.5},\n 7: {0: 0.333..., 1: 0.333..., 4: 0.333...}}\n\n Now we will generate a random routing matrix:\n\n >>> mat = qt.generate_transition_matrix(g, seed=96)\n >>> net.set_transitions(mat)\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {2: 0.112..., 5: 0.466..., 7: 0.420...},\n 1: {7: 1.0},\n 2: {0: 0.561..., 6: 0.438...},\n 3: {4: 0.545..., 5: 0.454...},\n 4: {3: 0.374..., 5: 0.381..., 6: 0.026..., 7: 0.217...},\n 5: {0: 0.265..., 3: 0.460..., 4: 0.274...},\n 6: {2: 0.673..., 4: 0.326...},\n 7: {0: 0.033..., 1: 0.336..., 4: 0.630...}}\n\n What this shows is the following: when an :class:`.Agent` is at\n vertex ``2`` they will transition to vertex ``0`` with\n probability ``0.561`` and route to vertex ``6`` with\n probability ``0.438``; when at vertex ``6`` they will transition back to\n vertex ``2`` with probability ``0.673`` and route to vertex ``4`` with\n probability ``0.326``, etc."}
{"_id": "q_7181", "text": "Returns the number of elements in the set that ``s`` belongs to.\n\n Parameters\n ----------\n s : object\n An object\n\n Returns\n -------\n out : int\n The number of elements in the set that ``s`` belongs to."}
{"_id": "q_7182", "text": "Locates the leader of the set to which the element ``s`` belongs.\n\n Parameters\n ----------\n s : object\n An object that the ``UnionFind`` contains.\n\n Returns\n -------\n object\n The leader of the set that contains ``s``."}
{"_id": "q_7183", "text": "Merges the set that contains ``a`` with the set that contains ``b``.\n\n Parameters\n ----------\n a, b : objects\n Two objects whose sets are to be merged."}
{"_id": "q_7184", "text": "Generates a random transition matrix for the graph ``g``.\n\n Parameters\n ----------\n g : :any:`networkx.DiGraph`, :class:`numpy.ndarray`, dict, etc.\n Any object that :any:`DiGraph<networkx.DiGraph>` accepts.\n seed : int (optional)\n An integer used to initialize numpy's pseudo-random number\n generator.\n\n Returns\n -------\n mat : :class:`~numpy.ndarray`\n Returns a transition matrix where ``mat[i, j]`` is the\n probability of transitioning from vertex ``i`` to vertex ``j``.\n If there is no edge connecting vertex ``i`` to vertex ``j``\n then ``mat[i, j] = 0``."}
{"_id": "q_7185", "text": "Creates a random graph where the vertex types are\n selected using their pagerank.\n\n Calls :func:`.minimal_random_graph` and then\n :func:`.set_types_rank` where the ``rank`` keyword argument\n is given by :func:`networkx.pagerank`.\n\n Parameters\n ----------\n num_vertices : int (optional, the default is 250)\n The number of vertices in the graph.\n **kwargs :\n Any parameters to send to :func:`.minimal_random_graph` or\n :func:`.set_types_rank`.\n\n Returns\n -------\n :class:`.QueueNetworkDiGraph`\n A graph with a ``pos`` vertex property and the ``edge_type``\n edge property.\n\n Notes\n -----\n This function sets the edge types of a graph to be either 1, 2, or\n 3. It sets the vertices to type 2 by selecting the top\n ``pType2 * g.number_of_nodes()`` vertices given by the\n :func:`~networkx.pagerank` of the graph. A loop is added\n to all vertices identified this way (if one does not exist\n already). It then randomly sets vertices close to the type 2\n vertices as type 3, and adds loops to these vertices as well. These\n loops then have edge types that correspond to the vertices type.\n The rest of the edges are set to type 1."}
{"_id": "q_7186", "text": "Yield all of the documentation for trait definitions on a class object."}
{"_id": "q_7187", "text": "Add lines to the block."}
{"_id": "q_7188", "text": "We are transitioning from a noncomment to a comment."}
{"_id": "q_7189", "text": "Possibly add a new comment.\n\n Only adds a new comment if this comment is the only thing on the line.\n Otherwise, it extends the noncomment block."}
{"_id": "q_7190", "text": "Make the index mapping lines of actual code to their associated\n prefix comments."}
{"_id": "q_7191", "text": "Read complete DSMR telegram's from the serial interface and parse it\n into CosemObject's and MbusObject's\n\n :rtype: generator"}
{"_id": "q_7192", "text": "Creates a DSMR asyncio protocol."}
{"_id": "q_7193", "text": "Creates a DSMR asyncio protocol coroutine using serial port."}
{"_id": "q_7194", "text": "Add incoming data to buffer."}
{"_id": "q_7195", "text": "Send off parsed telegram to handling callback."}
{"_id": "q_7196", "text": "Parse telegram from string to dict.\n\n The telegram str type makes python 2.x integration easier.\n\n :param str telegram_data: full telegram from start ('/') to checksum\n ('!ABCD') including line endings in between the telegram's lines\n :rtype: dict\n :returns: Shortened example:\n {\n ..\n r'\\d-\\d:96\\.1\\.1.+?\\r\\n': <CosemObject>, # EQUIPMENT_IDENTIFIER\n r'\\d-\\d:1\\.8\\.1.+?\\r\\n': <CosemObject>, # ELECTRICITY_USED_TARIFF_1\n r'\\d-\\d:24\\.3\\.0.+?\\r\\n.+?\\r\\n': <MBusObject>, # GAS_METER_READING\n ..\n }\n :raises ParseError:\n :raises InvalidChecksumError:"}
{"_id": "q_7197", "text": "Loads config from string or dict"}
{"_id": "q_7198", "text": "Converts the configuration dictionary into the corresponding configuration format\n\n :param files: whether to include \"additional files\" in the output or not;\n defaults to ``True``\n :returns: string with output"}
{"_id": "q_7199", "text": "Returns a ``BytesIO`` instance representing an in-memory tar.gz archive\n containing the native router configuration.\n\n :returns: in-memory tar.gz archive, instance of ``BytesIO``"}
{"_id": "q_7200", "text": "Adds a single file in tarfile instance.\n\n :param tar: tarfile instance\n :param name: string representing filename or path\n :param contents: string representing file contents\n :param mode: string representing file mode, defaults to 644\n :returns: None"}
{"_id": "q_7201", "text": "Parses a native configuration and converts\n it to a NetJSON configuration dictionary"}
{"_id": "q_7202", "text": "Merges ``list2`` on top of ``list1``.\n\n If both lists contain dictionaries which have keys specified\n in ``identifiers`` which have equal values, those dicts will\n be merged (dicts in ``list2`` will override dicts in ``list1``).\n The remaining elements will be summed in order to create a list\n which contains elements of both lists.\n\n :param list1: ``list`` from template\n :param list2: ``list`` from config\n :param identifiers: ``list`` or ``None``\n :returns: merged ``list``"}
{"_id": "q_7203", "text": "Evaluates variables in ``data``\n\n :param data: data structure containing variables, may be\n ``str``, ``dict`` or ``list``\n :param context: ``dict`` containing variables\n :returns: modified data structure"}
{"_id": "q_7204", "text": "Looks for a key in a dictionary, if found returns\n a deepcopied value, otherwise returns default value"}
{"_id": "q_7205", "text": "Loops over item and performs type casting\n according to supplied schema fragment"}
{"_id": "q_7206", "text": "generates install.sh and adds it to included files"}
{"_id": "q_7207", "text": "generates tc_script.sh and adds it to included files"}
{"_id": "q_7208", "text": "Renders configuration by using the jinja2 templating engine"}
{"_id": "q_7209", "text": "converts NetJSON address to\n UCI intermediate data structure"}
{"_id": "q_7210", "text": "converts NetJSON interface to\n UCI intermediate data structure"}
{"_id": "q_7211", "text": "deletes NetJSON address keys"}
{"_id": "q_7212", "text": "converts NetJSON bridge to\n UCI intermediate data structure"}
{"_id": "q_7213", "text": "determines UCI interface \"proto\" option"}
{"_id": "q_7214", "text": "determines UCI interface \"dns\" option"}
{"_id": "q_7215", "text": "only for mac80211 driver"}
{"_id": "q_7216", "text": "determines NetJSON protocol radio attribute"}
{"_id": "q_7217", "text": "Returns a configuration dictionary representing an OpenVPN client configuration\n that is compatible with the passed server configuration.\n\n :param host: remote VPN server\n :param server: dictionary representing a single OpenVPN server configuration\n :param ca_path: optional string representing path to CA, will consequently add\n a file in the resulting configuration dictionary\n :param ca_contents: optional string representing contents of CA file\n :param cert_path: optional string representing path to certificate, will consequently add\n a file in the resulting configuration dictionary\n :param cert_contents: optional string representing contents of cert file\n :param key_path: optional string representing path to key, will consequently add\n a file in the resulting configuration dictionary\n :param key_contents: optional string representing contents of key file\n :returns: dictionary representing a single OpenVPN client configuration"}
{"_id": "q_7218", "text": "returns a list of NetJSON extra files for automatically generated clients\n produces side effects in ``client`` dictionary"}
{"_id": "q_7219", "text": "parse requirements.txt, ignore links, exclude comments"}
{"_id": "q_7220", "text": "Get all facts of this node. Additional arguments may also be\n specified that will be passed to the query function."}
{"_id": "q_7221", "text": "Get a single fact from this node."}
{"_id": "q_7222", "text": "Get all resources of this node or all resources of the specified\n type. Additional arguments may also be specified that will be passed\n to the query function."}
{"_id": "q_7223", "text": "Get all reports for this node. Additional arguments may also be\n specified that will be passed to the query function."}
{"_id": "q_7224", "text": "A base_url that will be used to construct the final\n URL we're going to query against.\n\n :returns: A URL of the form: ``proto://host:port``.\n :rtype: :obj:`string`"}
{"_id": "q_7225", "text": "The complete URL we will end up querying. Depending on the\n endpoint we pass in this will result in different URL's with\n different prefixes.\n\n :param endpoint: The PuppetDB API endpoint we want to query.\n :type endpoint: :obj:`string`\n :param path: An additional path if we don't wish to query the\\\n bare endpoint.\n :type path: :obj:`string`\n\n :returns: A URL constructed from :func:`base_url` with the\\\n apropraite API version/prefix and the rest of the path added\\\n to it.\n :rtype: :obj:`string`"}
{"_id": "q_7226", "text": "Query for nodes by either name or query. If both aren't\n provided this will return a list of all nodes. This method\n also fetches the nodes status and event counts of the latest\n report from puppetdb.\n\n :param with_status: (optional) include the node status in the\\\n returned nodes\n :type with_status: :bool:\n :param unreported: (optional) amount of hours when a node gets\n marked as unreported\n :type unreported: :obj:`None` or integer\n :param \\*\\*kwargs: The rest of the keyword arguments are passed\n to the _query function\n\n :returns: A generator yieling Nodes.\n :rtype: :class:`pypuppetdb.types.Node`"}
{"_id": "q_7227", "text": "Gets a single node from PuppetDB.\n\n :param name: The name of the node search.\n :type name: :obj:`string`\n\n :return: An instance of Node\n :rtype: :class:`pypuppetdb.types.Node`"}
{"_id": "q_7228", "text": "Get the available catalog for a given node.\n\n :param node: (Required) The name of the PuppetDB node.\n :type: :obj:`string`\n\n :returns: An instance of Catalog\n :rtype: :class:`pypuppetdb.types.Catalog`"}
{"_id": "q_7229", "text": "Connect with PuppetDB. This will return an object allowing you\n to query the API through its methods.\n\n :param host: (Default: 'localhost;) Hostname or IP of PuppetDB.\n :type host: :obj:`string`\n\n :param port: (Default: '8080') Port on which to talk to PuppetDB.\n :type port: :obj:`int`\n\n :param ssl_verify: (optional) Verify PuppetDB server certificate.\n :type ssl_verify: :obj:`bool` or :obj:`string` True, False or filesystem \\\n path to CA certificate.\n\n :param ssl_key: (optional) Path to our client secret key.\n :type ssl_key: :obj:`None` or :obj:`string` representing a filesystem\\\n path.\n\n :param ssl_cert: (optional) Path to our client certificate.\n :type ssl_cert: :obj:`None` or :obj:`string` representing a filesystem\\\n path.\n\n :param timeout: (Default: 10) Number of seconds to wait for a response.\n :type timeout: :obj:`int`\n\n :param protocol: (optional) Explicitly specify the protocol to be used\n (especially handy when using HTTPS with ssl_verify=False and\n without certs)\n :type protocol: :obj:`None` or :obj:`string`\n\n :param url_path: (Default: '/') The URL path where PuppetDB is served\n :type url_path: :obj:`None` or :obj:`string`\n\n :param username: (optional) The username to use for HTTP basic\n authentication\n :type username: :obj:`None` or :obj:`string`\n\n :param password: (optional) The password to use for HTTP basic\n authentication\n :type password: :obj:`None` or :obj:`string`\n\n :param token: (optional) The x-auth token to use for X-Authentication\n :type token: :obj:`None` or :obj:`string`"}
{"_id": "q_7230", "text": "The Master has been started from the command line. Execute ad-hoc tests if desired."}
{"_id": "q_7231", "text": "Direct operate a set of commands\n\n :param command_set: set of command headers\n :param callback: callback that will be invoked upon completion or failure\n :param config: optional configuration that controls normal callbacks and allows the user to be specified for SA"}
{"_id": "q_7232", "text": "Select and operate a single command\n\n :param command: command to operate\n :param index: index of the command\n :param callback: callback that will be invoked upon completion or failure\n :param config: optional configuration that controls normal callbacks and allows the user to be specified for SA"}
{"_id": "q_7233", "text": "Select and operate a set of commands\n\n :param command_set: set of command headers\n :param callback: callback that will be invoked upon completion or failure\n :param config: optional configuration that controls normal callbacks and allows the user to be specified for SA"}
{"_id": "q_7234", "text": "The Outstation has been started from the command line. Execute ad-hoc tests if desired."}
{"_id": "q_7235", "text": "The Master sent an Operate command to the Outstation. Handle it.\n\n :param command: ControlRelayOutputBlock,\n AnalogOutputInt16, AnalogOutputInt32, AnalogOutputFloat32, or AnalogOutputDouble64.\n :param index: int\n :param op_type: OperateType\n :return: CommandStatus"}
{"_id": "q_7236", "text": "Create Bloomberg connection\n\n Returns:\n (Bloomberg connection, if connection is new)"}
{"_id": "q_7237", "text": "Stop and destroy Bloomberg connection"}
{"_id": "q_7238", "text": "Parse markdown as description"}
{"_id": "q_7239", "text": "Standardized earning outputs and add percentage by each blocks\n\n Args:\n data: earning data block\n header: earning headers\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> format_earning(\n ... data=pd.read_pickle('xbbg/tests/data/sample_earning.pkl'),\n ... header=pd.read_pickle('xbbg/tests/data/sample_earning_header.pkl')\n ... ).round(2)\n level fy2017 fy2017_pct\n Asia-Pacific 1.0 3540.0 66.43\n \u00a0\u00a0\u00a0China 2.0 1747.0 49.35\n \u00a0\u00a0\u00a0Japan 2.0 1242.0 35.08\n \u00a0\u00a0\u00a0Singapore 2.0 551.0 15.56\n United States 1.0 1364.0 25.60\n Europe 1.0 263.0 4.94\n Other Countries 1.0 162.0 3.04"}
{"_id": "q_7240", "text": "Format `pdblp` outputs to column-based results\n\n Args:\n data: `pdblp` result\n source: `bdp` or `bds`\n col_maps: rename columns with these mappings\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> format_output(\n ... data=pd.read_pickle('xbbg/tests/data/sample_bdp.pkl'),\n ... source='bdp'\n ... ).reset_index()\n ticker name\n 0 QQQ US Equity INVESCO QQQ TRUST SERIES 1\n 1 SPY US Equity SPDR S&P 500 ETF TRUST\n >>> format_output(\n ... data=pd.read_pickle('xbbg/tests/data/sample_dvd.pkl'),\n ... source='bds', col_maps={'Dividend Frequency': 'dvd_freq'}\n ... ).loc[:, ['ex_date', 'dividend_amount', 'dvd_freq']].reset_index()\n ticker ex_date dividend_amount dvd_freq\n 0 C US Equity 2018-02-02 0.32 Quarter"}
{"_id": "q_7241", "text": "Format intraday data\n\n Args:\n data: pd.DataFrame from bdib\n ticker: ticker\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> format_intraday(\n ... data=pd.read_parquet('xbbg/tests/data/sample_bdib.parq'),\n ... ticker='SPY US Equity',\n ... ).xs('close', axis=1, level=1, drop_level=False)\n ticker SPY US Equity\n field close\n 2018-12-28 09:30:00-05:00 249.67\n 2018-12-28 09:31:00-05:00 249.54\n 2018-12-28 09:32:00-05:00 249.22\n 2018-12-28 09:33:00-05:00 249.01\n 2018-12-28 09:34:00-05:00 248.86\n >>> format_intraday(\n ... data=pd.read_parquet('xbbg/tests/data/sample_bdib.parq'),\n ... ticker='SPY US Equity', price_only=True\n ... )\n ticker SPY US Equity\n 2018-12-28 09:30:00-05:00 249.67\n 2018-12-28 09:31:00-05:00 249.54\n 2018-12-28 09:32:00-05:00 249.22\n 2018-12-28 09:33:00-05:00 249.01\n 2018-12-28 09:34:00-05:00 248.86"}
{"_id": "q_7242", "text": "Logging info for given tickers and fields\n\n Args:\n tickers: tickers\n flds: fields\n\n Returns:\n str\n\n Examples:\n >>> print(info_qry(\n ... tickers=['NVDA US Equity'], flds=['Name', 'Security_Name']\n ... ))\n tickers: ['NVDA US Equity']\n fields: ['Name', 'Security_Name']"}
{"_id": "q_7243", "text": "Bloomberg historical data\n\n Args:\n tickers: ticker(s)\n flds: field(s)\n start_date: start date\n end_date: end date - default today\n adjust: `all`, `dvd`, `normal`, `abn` (=abnormal), `split`, `-` or None\n exact match of above words will adjust for corresponding events\n Case 0: `-` no adjustment for dividend or split\n Case 1: `dvd` or `normal|abn` will adjust for all dividends except splits\n Case 2: `adjust` will adjust for splits and ignore all dividends\n Case 3: `all` == `dvd|split` == adjust for all\n Case 4: None == Bloomberg default OR use kwargs\n **kwargs: overrides\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> res = bdh(\n ... tickers='VIX Index', flds=['High', 'Low', 'Last_Price'],\n ... start_date='2018-02-05', end_date='2018-02-07',\n ... ).round(2).transpose()\n >>> res.index.name = None\n >>> res.columns.name = None\n >>> res\n 2018-02-05 2018-02-06 2018-02-07\n VIX Index High 38.80 50.30 31.64\n Low 16.80 22.42 21.17\n Last_Price 37.32 29.98 27.73\n >>> bdh(\n ... tickers='AAPL US Equity', flds='Px_Last',\n ... start_date='20140605', end_date='20140610', adjust='-'\n ... ).round(2)\n ticker AAPL US Equity\n field Px_Last\n 2014-06-05 647.35\n 2014-06-06 645.57\n 2014-06-09 93.70\n 2014-06-10 94.25\n >>> bdh(\n ... tickers='AAPL US Equity', flds='Px_Last',\n ... start_date='20140606', end_date='20140609',\n ... CshAdjNormal=False, CshAdjAbnormal=False, CapChg=False,\n ... ).round(2)\n ticker AAPL US Equity\n field Px_Last\n 2014-06-06 645.57\n 2014-06-09 93.70"}
{"_id": "q_7244", "text": "Bloomberg intraday bar data within market session\n\n Args:\n ticker: ticker\n dt: date\n session: examples include\n day_open_30, am_normal_30_30, day_close_30, allday_exact_0930_1000\n **kwargs:\n ref: reference ticker or exchange for timezone\n keep_tz: if keep tz if reference ticker / exchange is given\n start_time: start time\n end_time: end time\n typ: [TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID, BEST_ASK]\n\n Returns:\n pd.DataFrame"}
{"_id": "q_7245", "text": "Earning exposures by Geo or Products\n\n Args:\n ticker: ticker name\n by: [G(eo), P(roduct)]\n typ: type of earning, start with `PG_` in Bloomberg FLDS - default `Revenue`\n ccy: currency of earnings\n level: hierarchy level of earnings\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> data = earning('AMD US Equity', Eqy_Fund_Year=2017, Number_Of_Periods=1)\n >>> data.round(2)\n level fy2017 fy2017_pct\n Asia-Pacific 1.0 3540.0 66.43\n \u00a0\u00a0\u00a0China 2.0 1747.0 49.35\n \u00a0\u00a0\u00a0Japan 2.0 1242.0 35.08\n \u00a0\u00a0\u00a0Singapore 2.0 551.0 15.56\n United States 1.0 1364.0 25.60\n Europe 1.0 263.0 4.94\n Other Countries 1.0 162.0 3.04"}
{"_id": "q_7246", "text": "Active futures contract\n\n Args:\n ticker: futures ticker, i.e., ESA Index, Z A Index, CLA Comdty, etc.\n dt: date\n\n Returns:\n str: ticker name"}
{"_id": "q_7247", "text": "Get proper ticker from generic ticker\n\n Args:\n gen_ticker: generic ticker\n dt: date\n freq: futures contract frequency\n log: level of logs\n\n Returns:\n str: exact futures ticker"}
{"_id": "q_7248", "text": "Check exchange hours vs local hours\n\n Args:\n tickers: list of tickers\n tz_exch: exchange timezone\n tz_loc: local timezone\n\n Returns:\n Local and exchange hours"}
{"_id": "q_7249", "text": "Data file location for Bloomberg historical data\n\n Args:\n ticker: ticker name\n dt: date\n typ: [TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID, BEST_ASK]\n\n Returns:\n file location\n\n Examples:\n >>> os.environ['BBG_ROOT'] = ''\n >>> hist_file(ticker='ES1 Index', dt='2018-08-01') == ''\n True\n >>> os.environ['BBG_ROOT'] = '/data/bbg'\n >>> hist_file(ticker='ES1 Index', dt='2018-08-01')\n '/data/bbg/Index/ES1 Index/TRADE/2018-08-01.parq'"}
{"_id": "q_7250", "text": "Data file location for Bloomberg reference data\n\n Args:\n ticker: ticker name\n fld: field\n has_date: whether add current date to data file\n cache: if has_date is True, whether to load file from latest cached\n ext: file extension\n **kwargs: other overrides passed to ref function\n\n Returns:\n file location\n\n Examples:\n >>> import shutil\n >>>\n >>> os.environ['BBG_ROOT'] = ''\n >>> ref_file('BLT LN Equity', fld='Crncy') == ''\n True\n >>> os.environ['BBG_ROOT'] = '/data/bbg'\n >>> ref_file('BLT LN Equity', fld='Crncy', cache=True)\n '/data/bbg/Equity/BLT LN Equity/Crncy/ovrd=None.parq'\n >>> ref_file('BLT LN Equity', fld='Crncy')\n ''\n >>> cur_dt = utils.cur_time(tz=utils.DEFAULT_TZ)\n >>> ref_file(\n ... 'BLT LN Equity', fld='DVD_Hist_All', has_date=True, cache=True,\n ... ).replace(cur_dt, '[cur_date]')\n '/data/bbg/Equity/BLT LN Equity/DVD_Hist_All/asof=[cur_date], ovrd=None.parq'\n >>> ref_file(\n ... 'BLT LN Equity', fld='DVD_Hist_All', has_date=True,\n ... cache=True, DVD_Start_Dt='20180101',\n ... ).replace(cur_dt, '[cur_date]')[:-5]\n '/data/bbg/Equity/BLT LN Equity/DVD_Hist_All/asof=[cur_date], DVD_Start_Dt=20180101'\n >>> sample = 'asof=2018-11-02, DVD_Start_Dt=20180101, DVD_End_Dt=20180501.pkl'\n >>> root_path = 'xbbg/tests/data'\n >>> sub_path = f'{root_path}/Equity/AAPL US Equity/DVD_Hist_All'\n >>> os.environ['BBG_ROOT'] = root_path\n >>> for tmp_file in files.all_files(sub_path): os.remove(tmp_file)\n >>> files.create_folder(sub_path)\n >>> sample in shutil.copy(f'{root_path}/{sample}', sub_path)\n True\n >>> new_file = ref_file(\n ... 'AAPL US Equity', 'DVD_Hist_All', DVD_Start_Dt='20180101',\n ... has_date=True, cache=True, ext='pkl'\n ... 
)\n >>> new_file.split('/')[-1] == f'asof={cur_dt}, DVD_Start_Dt=20180101.pkl'\n True\n >>> old_file = 'asof=2018-11-02, DVD_Start_Dt=20180101, DVD_End_Dt=20180501.pkl'\n >>> old_full = '/'.join(new_file.split('/')[:-1] + [old_file])\n >>> updated_file = old_full.replace('2018-11-02', cur_dt)\n >>> updated_file in shutil.copy(old_full, updated_file)\n True\n >>> exist_file = ref_file(\n ... 'AAPL US Equity', 'DVD_Hist_All', DVD_Start_Dt='20180101',\n ... has_date=True, cache=True, ext='pkl'\n ... )\n >>> exist_file == updated_file\n False\n >>> exist_file = ref_file(\n ... 'AAPL US Equity', 'DVD_Hist_All', DVD_Start_Dt='20180101',\n ... DVD_End_Dt='20180501', has_date=True, cache=True, ext='pkl'\n ... )\n >>> exist_file == updated_file\n True"}
{"_id": "q_7251", "text": "Check whether data is done for the day and save\n\n Args:\n data: data\n ticker: ticker\n dt: date\n typ: [TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID, BEST_ASK]\n\n Examples:\n >>> os.environ['BBG_ROOT'] = 'xbbg/tests/data'\n >>> sample = pd.read_parquet('xbbg/tests/data/aapl.parq')\n >>> save_intraday(sample, 'AAPL US Equity', '2018-11-02')\n >>> # Invalid exchange\n >>> save_intraday(sample, 'AAPL XX Equity', '2018-11-02')\n >>> # Invalid empty data\n >>> save_intraday(pd.DataFrame(), 'AAPL US Equity', '2018-11-02')\n >>> # Invalid date - too close\n >>> cur_dt = utils.cur_time()\n >>> save_intraday(sample, 'AAPL US Equity', cur_dt)"}
{"_id": "q_7252", "text": "Exchange info for given ticker\n\n Args:\n ticker: ticker or exchange\n\n Returns:\n pd.Series\n\n Examples:\n >>> exch_info('SPY US Equity')\n tz America/New_York\n allday [04:00, 20:00]\n day [09:30, 16:00]\n pre [04:00, 09:30]\n post [16:01, 20:00]\n dtype: object\n >>> exch_info('ES1 Index')\n tz America/New_York\n allday [18:00, 17:00]\n day [08:00, 17:00]\n dtype: object\n >>> exch_info('Z 1 Index')\n tz Europe/London\n allday [01:00, 21:00]\n day [01:00, 21:00]\n dtype: object\n >>> exch_info('TESTTICKER Corp').empty\n True\n >>> exch_info('US')\n tz America/New_York\n allday [04:00, 20:00]\n day [09:30, 16:00]\n pre [04:00, 09:30]\n post [16:01, 20:00]\n dtype: object"}
{"_id": "q_7253", "text": "Get info for given market\n\n Args:\n ticker: Bloomberg full ticker\n\n Returns:\n dict\n\n Examples:\n >>> info = market_info('SHCOMP Index')\n >>> info['exch']\n 'EquityChina'\n >>> info = market_info('ICICIC=1 IS Equity')\n >>> info['freq'], info['is_fut']\n ('M', True)\n >>> info = market_info('INT1 Curncy')\n >>> info['freq'], info['is_fut']\n ('M', True)\n >>> info = market_info('CL1 Comdty')\n >>> info['freq'], info['is_fut']\n ('M', True)\n >>> # Wrong tickers\n >>> market_info('C XX Equity')\n {}\n >>> market_info('XXX Comdty')\n {}\n >>> market_info('Bond_ISIN Corp')\n {}\n >>> market_info('XYZ Index')\n {}\n >>> market_info('XYZ Curncy')\n {}"}
{"_id": "q_7254", "text": "Currency pair info\n\n Args:\n local: local currency\n base: base currency\n\n Returns:\n CurrencyPair\n\n Examples:\n >>> ccy_pair(local='HKD', base='USD')\n CurrencyPair(ticker='HKD Curncy', factor=1.0, power=1)\n >>> ccy_pair(local='GBp')\n CurrencyPair(ticker='GBP Curncy', factor=100, power=-1)\n >>> ccy_pair(local='USD', base='GBp')\n CurrencyPair(ticker='GBP Curncy', factor=0.01, power=1)\n >>> ccy_pair(local='XYZ', base='USD')\n CurrencyPair(ticker='', factor=1.0, power=1)\n >>> ccy_pair(local='GBP', base='GBp')\n CurrencyPair(ticker='', factor=0.01, power=1)\n >>> ccy_pair(local='GBp', base='GBP')\n CurrencyPair(ticker='', factor=100.0, power=1)"}
{"_id": "q_7255", "text": "Market close time for ticker\n\n Args:\n ticker: ticker name\n dt: date\n timing: [EOD (default), BOD]\n tz: conversion to timezone\n\n Returns:\n str: date & time\n\n Examples:\n >>> market_timing('7267 JT Equity', dt='2018-09-10')\n '2018-09-10 14:58'\n >>> market_timing('7267 JT Equity', dt='2018-09-10', tz=timezone.TimeZone.NY)\n '2018-09-10 01:58:00-04:00'\n >>> market_timing('7267 JT Equity', dt='2018-01-10', tz='NY')\n '2018-01-10 00:58:00-05:00'\n >>> market_timing('7267 JT Equity', dt='2018-09-10', tz='SPX Index')\n '2018-09-10 01:58:00-04:00'\n >>> market_timing('8035 JT Equity', dt='2018-09-10', timing='BOD')\n '2018-09-10 09:01'\n >>> market_timing('Z 1 Index', dt='2018-09-10', timing='FINISHED')\n '2018-09-10 21:00'\n >>> market_timing('TESTTICKER Corp', dt='2018-09-10')\n ''"}
{"_id": "q_7256", "text": "Load parameters for assets\n\n Args:\n cat: category\n\n Returns:\n dict\n\n Examples:\n >>> import pandas as pd\n >>>\n >>> assets = load_info(cat='assets')\n >>> all(cat in assets for cat in ['Equity', 'Index', 'Curncy', 'Corp'])\n True\n >>> os.environ['BBG_PATH'] = ''\n >>> exch = load_info(cat='exch')\n >>> pd.Series(exch['EquityUS']).allday\n [400, 2000]\n >>> test_root = f'{PKG_PATH}/tests'\n >>> os.environ['BBG_PATH'] = test_root\n >>> ovrd_exch = load_info(cat='exch')\n >>> # Somehow os.environ is not set properly in doctest environment\n >>> ovrd_exch.update(_load_yaml_(f'{test_root}/markets/exch.yml'))\n >>> pd.Series(ovrd_exch['EquityUS']).allday\n [300, 2100]"}
{"_id": "q_7257", "text": "Convert YAML input to hours\n\n Args:\n num: number in YMAL file, e.g., 900, 1700, etc.\n\n Returns:\n str\n\n Examples:\n >>> to_hour(900)\n '09:00'\n >>> to_hour(1700)\n '17:00'"}
{"_id": "q_7258", "text": "Make folder as well as all parent folders if not exists\n\n Args:\n path_name: full path name\n is_file: whether input is name of file"}
{"_id": "q_7259", "text": "Search all files with criteria\n Returned list will be sorted by last modified\n\n Args:\n path_name: full path name\n keyword: keyword to search\n ext: file extensions, split by ','\n full_path: whether return full path (default True)\n has_date: whether has date in file name (default False)\n date_fmt: date format to check for has_date parameter\n\n Returns:\n list: all file names with criteria fulfilled"}
{"_id": "q_7260", "text": "Search all folders with criteria\n Returned list will be sorted by last modified\n\n Args:\n path_name: full path name\n keyword: keyword to search\n has_date: whether has date in file name (default False)\n date_fmt: date format to check for has_date parameter\n\n Returns:\n list: all folder names fulfilled criteria"}
{"_id": "q_7261", "text": "Sort files or folders by modified time\n\n Args:\n files_or_folders: list of files or folders\n\n Returns:\n list"}
{"_id": "q_7262", "text": "Filter files or dates by date patterns\n\n Args:\n files_or_folders: list of files or folders\n date_fmt: date format\n\n Returns:\n list"}
{"_id": "q_7263", "text": "File modified time in python\n\n Args:\n file_name: file name\n\n Returns:\n pd.Timestamp"}
{"_id": "q_7264", "text": "Get interval from defined session\n\n Args:\n ticker: ticker\n session: session\n\n Returns:\n Session of start_time and end_time\n\n Examples:\n >>> get_interval('005490 KS Equity', 'day_open_30')\n Session(start_time='09:00', end_time='09:30')\n >>> get_interval('005490 KS Equity', 'day_normal_30_20')\n Session(start_time='09:31', end_time='15:00')\n >>> get_interval('005490 KS Equity', 'day_close_20')\n Session(start_time='15:01', end_time='15:20')\n >>> get_interval('700 HK Equity', 'am_open_30')\n Session(start_time='09:30', end_time='10:00')\n >>> get_interval('700 HK Equity', 'am_normal_30_30')\n Session(start_time='10:01', end_time='11:30')\n >>> get_interval('700 HK Equity', 'am_close_30')\n Session(start_time='11:31', end_time='12:00')\n >>> get_interval('ES1 Index', 'day_exact_2130_2230')\n Session(start_time=None, end_time=None)\n >>> get_interval('ES1 Index', 'allday_exact_2130_2230')\n Session(start_time='21:30', end_time='22:30')\n >>> get_interval('ES1 Index', 'allday_exact_2130_0230')\n Session(start_time='21:30', end_time='02:30')\n >>> get_interval('AMLP US', 'day_open_30')\n Session(start_time=None, end_time=None)\n >>> get_interval('7974 JP Equity', 'day_normal_180_300') is SessNA\n True\n >>> get_interval('Z 1 Index', 'allday_normal_30_30')\n Session(start_time='01:31', end_time='20:30')\n >>> get_interval('GBP Curncy', 'day')\n Session(start_time='17:02', end_time='17:00')"}
{"_id": "q_7265", "text": "Shift start time by mins\n\n Args:\n start_time: start time in terms of HH:MM string\n mins: number of minutes (+ / -)\n\n Returns:\n end time in terms of HH:MM string"}
{"_id": "q_7266", "text": "Time intervals for market open\n\n Args:\n session: [allday, day, am, pm, night]\n mins: mintues after open\n\n Returns:\n Session of start_time and end_time"}
{"_id": "q_7267", "text": "Time intervals for market close\n\n Args:\n session: [allday, day, am, pm, night]\n mins: mintues before close\n\n Returns:\n Session of start_time and end_time"}
{"_id": "q_7268", "text": "Explicitly specify start time and end time\n\n Args:\n session: predefined session\n start_time: start time in terms of HHMM string\n end_time: end time in terms of HHMM string\n\n Returns:\n Session of start_time and end_time"}
{"_id": "q_7269", "text": "Convert to tz\n\n Args:\n dt: date time\n to_tz: to tz\n from_tz: from tz - will be ignored if tz from dt is given\n\n Returns:\n str: date & time\n\n Examples:\n >>> dt_1 = pd.Timestamp('2018-09-10 16:00', tz='Asia/Hong_Kong')\n >>> tz_convert(dt_1, to_tz='NY')\n '2018-09-10 04:00:00-04:00'\n >>> dt_2 = pd.Timestamp('2018-01-10 16:00')\n >>> tz_convert(dt_2, to_tz='HK', from_tz='NY')\n '2018-01-11 05:00:00+08:00'\n >>> dt_3 = '2018-09-10 15:00'\n >>> tz_convert(dt_3, to_tz='NY', from_tz='JP')\n '2018-09-10 02:00:00-04:00'"}
{"_id": "q_7270", "text": "Full infomation for missing query"}
{"_id": "q_7271", "text": "Check number of trials for missing values\n\n Returns:\n int: number of trials already tried"}
{"_id": "q_7272", "text": "Decorator for public views that do not require authentication\n Sets an attribute in the fuction STRONGHOLD_IS_PUBLIC to True"}
{"_id": "q_7273", "text": "Get the version of the package from the given file by\n executing it and extracting the given `name`."}
{"_id": "q_7274", "text": "Find all of the packages."}
{"_id": "q_7275", "text": "Echo a command before running it. Defaults to repo as cwd"}
{"_id": "q_7276", "text": "Return a Command that checks that certain files exist.\n\n Raises a ValueError if any of the files are missing.\n\n Note: The check is skipped if the `--skip-npm` flag is used."}
{"_id": "q_7277", "text": "Wrap a setup command\n\n Parameters\n ----------\n cmds: list(str)\n The names of the other commands to run prior to the command.\n strict: boolean, optional\n Whether to raise errors when a pre-command fails."}
{"_id": "q_7278", "text": "Expand data file specs into valid data files metadata.\n\n Parameters\n ----------\n data_specs: list of tuples\n See [createcmdclass] for description.\n existing: list of tuples\n The existing distribution data_files metadata.\n\n Returns\n -------\n A valid list of data_files items."}
{"_id": "q_7279", "text": "Translate and compile a glob pattern to a regular expression matcher."}
{"_id": "q_7280", "text": "Iterate over all the parts of a path.\n\n Splits path recursively with os.path.split()."}
{"_id": "q_7281", "text": "Join translated glob pattern parts.\n\n This is different from a simple join, as care need to be taken\n to allow ** to match ZERO or more directories."}
{"_id": "q_7282", "text": "Translate a glob PATTERN PART to a regular expression."}
{"_id": "q_7283", "text": "Send DDL to create the specified `table`\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None"}
{"_id": "q_7284", "text": "Send DDL to create the specified `table` indexes\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None"}
{"_id": "q_7285", "text": "Send DDL to create the specified `table` triggers\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None"}
{"_id": "q_7286", "text": "Write the contents of `table`\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n - `reader`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader` object that allows reading from the data source.\n\n Returns None"}
{"_id": "q_7287", "text": "Write TRIGGERs existing on `table` to the output file\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None"}
{"_id": "q_7288", "text": "Utility for sending a predefined request and printing response as well\n as storing messages in a list, useful for testing\n\n Parameters\n ----------\n session: blpapi.session.Session\n request: blpapi.request.Request\n Request to be sent\n\n Returns\n -------\n List of all messages received"}
{"_id": "q_7289", "text": "Initialize blpapi.Session services"}
{"_id": "q_7290", "text": "Get Open, High, Low, Close, Volume, and numEvents for a ticker.\n Return pandas DataFrame\n\n Parameters\n ----------\n ticker: string\n String corresponding to ticker\n start_datetime: string\n UTC datetime in format YYYY-mm-ddTHH:MM:SS\n end_datetime: string\n UTC datetime in format YYYY-mm-ddTHH:MM:SS\n event_type: string {TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID,\n BEST_ASK}\n Requested data event type\n interval: int {1... 1440}\n Length of time bars\n elms: list of tuples\n List of tuples where each tuple corresponds to the other elements\n to be set. Refer to the IntradayBarRequest section in the\n 'Services & schemas reference guide' for more info on these values"}
{"_id": "q_7291", "text": "Enqueue task with specified data."}
{"_id": "q_7292", "text": "This method is a good one to extend if you want to create a queue which always applies an extra predicate."}
{"_id": "q_7293", "text": "Designed to be passed as the default kwarg in simplejson.dumps. Serializes dates and datetimes to ISO strings."}
{"_id": "q_7294", "text": "Returns a new connection to the database."}
{"_id": "q_7295", "text": "Run a set of InsertWorkers and record their performance."}
{"_id": "q_7296", "text": "Used for development only"}
{"_id": "q_7297", "text": "Returns the number of connections cached by the pool."}
{"_id": "q_7298", "text": "OperationalError's are emitted by the _mysql library for\n almost every error code emitted by MySQL. Because of this we\n verify that the error is actually a connection error before\n terminating the connection and firing off a PoolConnectionException"}
{"_id": "q_7299", "text": "Build a simple expression ready to be added onto another query.\n\n >>> simple_expression(joiner=' AND ', name='bob', role='admin')\n \"`name`=%(_QB_name)s AND `name`=%(_QB_role)s\", { '_QB_name': 'bob', '_QB_role': 'admin' }"}
{"_id": "q_7300", "text": "Build a update query.\n\n >>> update('foo_table', a=5, b=2)\n \"UPDATE `foo_table` SET `a`=%(_QB_a)s, `b`=%(_QB_b)s\", { '_QB_a': 5, '_QB_b': 2 }"}
{"_id": "q_7301", "text": "Connect to the database specified"}
{"_id": "q_7302", "text": "Start a step."}
{"_id": "q_7303", "text": "Stop a step."}
{"_id": "q_7304", "text": "load steps -> basically load all the datetime isoformats into datetimes"}
{"_id": "q_7305", "text": "Assemble one EVM instruction from its textual representation.\n\n :param asmcode: assembly code for one instruction\n :type asmcode: str\n :param pc: program counter of the instruction(optional)\n :type pc: int\n :param fork: fork name (optional)\n :type fork: str\n :return: An Instruction object\n :rtype: Instruction\n\n Example use::\n\n >>> print assemble_one('LT')"}
{"_id": "q_7306", "text": "Assemble a sequence of textual representation of EVM instructions\n\n :param asmcode: assembly code for any number of instructions\n :type asmcode: str\n :param pc: program counter of the first instruction(optional)\n :type pc: int\n :param fork: fork name (optional)\n :type fork: str\n :return: An generator of Instruction objects\n :rtype: generator[Instructions]\n\n Example use::\n\n >>> assemble_one('''PUSH1 0x60\\n \\\n PUSH1 0x40\\n \\\n MSTORE\\n \\\n PUSH1 0x2\\n \\\n PUSH2 0x108\\n \\\n PUSH1 0x0\\n \\\n POP\\n \\\n SSTORE\\n \\\n PUSH1 0x40\\n \\\n MLOAD\\n \\\n ''')"}
{"_id": "q_7307", "text": "Disassemble a single instruction from a bytecode\n\n :param bytecode: the bytecode stream\n :type bytecode: str | bytes | bytearray | iterator\n :param pc: program counter of the instruction(optional)\n :type pc: int\n :param fork: fork name (optional)\n :type fork: str\n :return: an Instruction object\n :rtype: Instruction\n\n Example use::\n\n >>> print disassemble_one('\\x60\\x10')"}
{"_id": "q_7308", "text": "Disassemble all instructions in bytecode\n\n :param bytecode: an evm bytecode (binary)\n :type bytecode: str | bytes | bytearray | iterator\n :param pc: program counter of the first instruction(optional)\n :type pc: int\n :param fork: fork name (optional)\n :type fork: str\n :return: An generator of Instruction objects\n :rtype: list[Instruction]\n\n Example use::\n\n >>> for inst in disassemble_all(bytecode):\n ... print(instr)\n\n ...\n PUSH1 0x60\n PUSH1 0x40\n MSTORE\n PUSH1 0x2\n PUSH2 0x108\n PUSH1 0x0\n POP\n SSTORE\n PUSH1 0x40\n MLOAD"}
{"_id": "q_7309", "text": "Convert block number to fork name.\n\n :param block_number: block number\n :type block_number: int\n :return: fork name\n :rtype: str\n\n Example use::\n\n >>> block_to_fork(0)\n ...\n \"frontier\"\n >>> block_to_fork(4370000)\n ...\n \"byzantium\"\n >>> block_to_fork(4370001)\n ...\n \"byzantium\""}
{"_id": "q_7310", "text": "Disconnects from the websocket connection and joins the Thread.\n\n :return:"}
{"_id": "q_7311", "text": "Issues a reconnection by setting the reconnect_required event.\n\n :return:"}
{"_id": "q_7312", "text": "Creates a websocket connection.\n\n :return:"}
{"_id": "q_7313", "text": "Handles and passes received data to the appropriate handlers.\n\n :return:"}
{"_id": "q_7314", "text": "Stops ping, pong and connection timers.\n\n :return:"}
{"_id": "q_7315", "text": "Sends a ping message to the API and starts pong timers.\n\n :return:"}
{"_id": "q_7316", "text": "Sends the given Payload to the API via the websocket connection.\n\n :param kwargs: payload paarameters as key=value pairs\n :return:"}
{"_id": "q_7317", "text": "Unpauses the connection.\n\n Send a message up to client that he should re-subscribe to all\n channels.\n\n :return:"}
{"_id": "q_7318", "text": "Distributes system messages to the appropriate handler.\n\n System messages include everything that arrives as a dict,\n or a list containing a heartbeat.\n\n :param data:\n :param ts:\n :return:"}
{"_id": "q_7319", "text": "Handle INFO messages from the API and issues relevant actions.\n\n :param data:\n :param ts:"}
{"_id": "q_7320", "text": "Handles data messages by passing them up to the client.\n\n :param data:\n :param ts:\n :return:"}
{"_id": "q_7321", "text": "Resubscribes to all channels found in self.channel_configs.\n\n :param soft: if True, unsubscribes first.\n :return: None"}
{"_id": "q_7322", "text": "Handles authentication responses.\n\n :param dtype:\n :param data:\n :param ts:\n :return:"}
{"_id": "q_7323", "text": "Handles configuration messages.\n\n :param dtype:\n :param data:\n :param ts:\n :return:"}
{"_id": "q_7324", "text": "Reset the client.\n\n :return:"}
{"_id": "q_7325", "text": "Return a queue containing all received candles data.\n\n :param pair: str, Symbol pair to request data for\n :param timeframe: str\n :return: Queue()"}
{"_id": "q_7326", "text": "Send configuration to websocket server\n\n :param decimals_as_strings: bool, turn on/off decimals as strings\n :param ts_as_dates: bool, decide to request timestamps as dates instead\n :param sequencing: bool, turn on sequencing\n\t:param ts: bool, request the timestamp to be appended to every array\n sent by the server\n :param kwargs:\n :return:"}
{"_id": "q_7327", "text": "Unsubscribe to the passed pair's ticker channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"}
{"_id": "q_7328", "text": "Subscribe to the passed pair's order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"}
{"_id": "q_7329", "text": "Unsubscribe to the passed pair's order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"}
{"_id": "q_7330", "text": "Subscribe to the passed pair's raw order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param prec:\n :param kwargs:\n :return:"}
{"_id": "q_7331", "text": "Unsubscribe to the passed pair's raw order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param prec:\n :param kwargs:\n :return:"}
{"_id": "q_7332", "text": "Subscribe to the passed pair's trades channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"}
{"_id": "q_7333", "text": "Unsubscribe to the passed pair's trades channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"}
{"_id": "q_7334", "text": "Authenticate with the Bitfinex API.\n\n :return:"}
{"_id": "q_7335", "text": "Internal callback for device command messages, parses source device from topic string and\n passes the information on to the registered device command callback"}
{"_id": "q_7336", "text": "Internal callback for gateway command messages, parses source device from topic string and\n passes the information on to the registered device command callback"}
{"_id": "q_7337", "text": "Internal callback for gateway notification messages, parses source device from topic string and\n passes the information on to the registered device command callback"}
{"_id": "q_7338", "text": "Register one or more new device types, each request can contain a maximum of 512KB."}
{"_id": "q_7339", "text": "Publish an event to Watson IoT Platform.\n\n # Parameters\n event (string): Name of this event\n msgFormat (string): Format of the data for this event\n data (dict): Data for this event\n qos (int): MQTT quality of service level to use (`0`, `1`, or `2`)\n on_publish(function): A function that will be called when receipt \n of the publication is confirmed. \n \n # Callback and QoS\n The use of the optional #on_publish function has different implications depending \n on the level of qos used to publish the event: \n \n - qos 0: the client has asynchronously begun to send the event\n - qos 1 and 2: the client has confirmation of delivery from the platform"}
{"_id": "q_7340", "text": "Update an existing device"}
{"_id": "q_7341", "text": "Iterate through all Connectors"}
{"_id": "q_7342", "text": "List all device management extension packages"}
{"_id": "q_7343", "text": "Create a new device management extension package\n In case of failure it throws APIException"}
{"_id": "q_7344", "text": "Update a schema. Throws APIException on failure."}
{"_id": "q_7345", "text": "Disconnect the client from IBM Watson IoT Platform"}
{"_id": "q_7346", "text": "Called when the broker responds to our connection request.\n\n The value of rc determines success or not:\n 0: Connection successful\n 1: Connection refused - incorrect protocol version\n 2: Connection refused - invalid client identifier\n 3: Connection refused - server unavailable\n 4: Connection refused - bad username or password\n 5: Connection refused - not authorised\n 6-255: Currently unused."}
{"_id": "q_7347", "text": "Subscribe to device event messages\n\n # Parameters\n typeId (string): typeId for the subscription, optional. Defaults to all device types (MQTT `+` wildcard)\n deviceId (string): deviceId for the subscription, optional. Defaults to all devices (MQTT `+` wildcard)\n eventId (string): eventId for the subscription, optional. Defaults to all events (MQTT `+` wildcard)\n msgFormat (string): msgFormat for the subscription, optional. Defaults to all formats (MQTT `+` wildcard)\n qos (int): MQTT quality of service level to use (`0`, `1`, or `2`)\n\n # Returns\n int: If the subscription was successful then the return Message ID (mid) for the subscribe request\n will be returned. The mid value can be used to track the subscribe request by checking against\n the mid argument if you register a subscriptionCallback method.\n If the subscription fails then the return value will be `0`"}
{"_id": "q_7348", "text": "Publish a command to a device\n\n # Parameters\n typeId (string) : The type of the device this command is to be published to\n deviceId (string): The id of the device this command is to be published to\n command (string) : The name of the command\n msgFormat (string) : The format of the command payload\n data (dict) : The command data\n qos (int) : The equivalent MQTT semantics of quality of service using the same constants (optional, defaults to `0`)\n on_publish (function) : A function that will be called when receipt of the publication is confirmed. This has\n different implications depending on the qos:\n - qos 0 : the client has asynchronously begun to send the event\n - qos 1 and 2 : the client has confirmation of delivery from WIoTP"}
{"_id": "q_7349", "text": "Internal callback for messages that have not been handled by any of the specific internal callbacks, these\n messages are not passed on to any user provided callback"}
{"_id": "q_7350", "text": "Internal callback for device event messages, parses source device from topic string and\n passes the information on to the registerd device event callback"}
{"_id": "q_7351", "text": "Internal callback for device status messages, parses source device from topic string and\n passes the information on to the registerd device status callback"}
{"_id": "q_7352", "text": "Internal callback for application command messages, parses source application from topic string and\n passes the information on to the registerd applicaion status callback"}
{"_id": "q_7353", "text": "Retrieves the last cached message for specified event from a specific device."}
{"_id": "q_7354", "text": "Retrieves a list of the last cached message for all events from a specific device."}
{"_id": "q_7355", "text": "Initiates a device management request, such as reboot.\n In case of failure it throws APIException"}
{"_id": "q_7356", "text": "Force a flush of the index to storage. Renders index\n inaccessible."}
{"_id": "q_7357", "text": "Returns the ``k``-nearest objects to the given coordinates.\n\n :param coordinates: sequence or array\n This may be an object that satisfies the numpy array\n protocol, providing the index's dimension * 2 coordinate\n pairs representing the `mink` and `maxk` coordinates in\n each dimension defining the bounds of the query window.\n\n :param num_results: integer\n The number of results to return nearest to the given coordinates.\n If two index entries are equidistant, *both* are returned.\n This property means that :attr:`num_results` may return more\n items than specified\n\n :param objects: True / False / 'raw'\n If True, the nearest method will return index objects that\n were pickled when they were stored with each index entry, as\n well as the id and bounds of the index entries.\n If 'raw', it will return the object as entered into the database\n without the :class:`rtree.index.Item` wrapper.\n\n Example of finding the three items nearest to this one::\n\n >>> from rtree import index\n >>> idx = index.Index()\n >>> idx.insert(4321, (34.37, 26.73, 49.37, 41.73), obj=42)\n >>> hits = idx.nearest((0, 0, 10, 10), 3, objects=True)"}
{"_id": "q_7358", "text": "Deletes items from the index with the given ``'id'`` within the\n specified coordinates.\n\n :param id: long integer\n A long integer that is the identifier for this index entry. IDs\n need not be unique to be inserted into the index, and it is up\n to the user to ensure they are unique if this is a requirement.\n\n :param coordinates: sequence or array\n Dimension * 2 coordinate pairs, representing the min\n and max coordinates in each dimension of the item to be\n deleted from the index. Their ordering will depend on the\n index's :attr:`interleaved` data member.\n These are not the coordinates of a space containing the\n item, but those of the item itself. Together with the\n id parameter, they determine which item will be deleted.\n This may be an object that satisfies the numpy array protocol.\n\n Example::\n\n >>> from rtree import index\n >>> idx = index.Index()\n >>> idx.delete(4321,\n ... (34.3776829412, 26.7375853734, 49.3776829412,\n ... 41.7375853734))"}
{"_id": "q_7359", "text": "Must be overridden. Must return a string with the loaded data."}
{"_id": "q_7360", "text": "Deletes the item from the container within the specified\n coordinates.\n\n :param obj: object\n Any object.\n\n :param coordinates: sequence or array\n Dimension * 2 coordinate pairs, representing the min\n and max coordinates in each dimension of the item to be\n deleted from the index. Their ordering will depend on the\n index's :attr:`interleaved` data member.\n These are not the coordinates of a space containing the\n item, but those of the item itself. Together with the\n id parameter, they determine which item will be deleted.\n This may be an object that satisfies the numpy array protocol.\n\n Example::\n\n >>> from rtree import index\n >>> idx = index.RtreeContainer()\n >>> idx.delete(object(),\n ... (34.3776829412, 26.7375853734, 49.3776829412,\n ... 41.7375853734))\n Traceback (most recent call last):\n ...\n IndexError: object is not in the index"}
{"_id": "q_7361", "text": "Define delay adjustment policy"}
{"_id": "q_7362", "text": "Convert string into camel case.\n\n Args:\n string: String to convert.\n\n Returns:\n string: Camel case string."}
{"_id": "q_7363", "text": "Convert string into capital case.\n First letters will be uppercase.\n\n Args:\n string: String to convert.\n\n Returns:\n string: Capital case string."}
{"_id": "q_7364", "text": "Convert string into spinal case.\n Join punctuation with backslash.\n\n Args:\n string: String to convert.\n\n Returns:\n string: Spinal cased string."}
{"_id": "q_7365", "text": "Convert string into sentence case.\n First letter capped and each punctuations are joined with space.\n\n Args:\n string: String to convert.\n\n Returns:\n string: Sentence cased string."}
{"_id": "q_7366", "text": "Convert string into snake case.\n Join punctuation with underscore\n\n Args:\n string: String to convert.\n\n Returns:\n string: Snake cased string."}
{"_id": "q_7367", "text": "Attempt an import of the specified application"}
{"_id": "q_7368", "text": "Initializes the Flask application with Common."}
{"_id": "q_7369", "text": "Return a PIL Image instance cropped from `image`.\n\n Image has an aspect ratio provided by dividing `width` / `height`),\n sized down to `width`x`height`. Any 'excess pixels' are trimmed away\n in respect to the pixel of `image` that corresponds to `ppoi` (Primary\n Point of Interest).\n\n `image`: A PIL Image instance\n `width`: Integer, width of the image to return (in pixels)\n `height`: Integer, height of the image to return (in pixels)\n `ppoi`: A 2-tuple of floats with values greater than 0 and less than 1\n These values are converted into a cartesian coordinate that\n signifies the 'center pixel' which the crop will center on\n (to trim the excess from the 'long side').\n\n Determines whether to trim away pixels from either the left/right or\n top/bottom sides by comparing the aspect ratio of `image` vs the\n aspect ratio of `width`x`height`.\n\n Will trim from the left/right sides if the aspect ratio of `image`\n is greater-than-or-equal-to the aspect ratio of `width`x`height`.\n\n Will trim from the top/bottom sides if the aspect ration of `image`\n is less-than the aspect ratio or `width`x`height`.\n\n Similar to Kevin Cazabon's ImageOps.fit method but uses the\n ppoi value as an absolute centerpoint (as opposed as a\n percentage to trim off the 'long sides')."}
{"_id": "q_7370", "text": "Return a BytesIO instance of `image` that fits in a bounding box.\n\n Bounding box dimensions are `width`x`height`."}
{"_id": "q_7371", "text": "Return a BytesIO instance of `image` with inverted colors."}
{"_id": "q_7372", "text": "Ensure data is prepped properly before handing off to ImageField."}
{"_id": "q_7373", "text": "Process the field's placeholder image.\n\n Ensures the placeholder image has been saved to the same storage class\n as the field in a top level folder with a name specified by\n settings.VERSATILEIMAGEFIELD_SETTINGS['placeholder_directory_name']\n\n This should be called by the VersatileImageFileDescriptor __get__.\n If self.placeholder_image_name is already set it just returns right away."}
{"_id": "q_7374", "text": "Return field's value just before saving."}
{"_id": "q_7375", "text": "Update field's ppoi field, if defined.\n\n This method is hooked up this field's pre_save method to update\n the ppoi immediately before the model instance (`instance`)\n it is associated with is saved.\n\n This field's ppoi can be forced to update with force=True,\n which is how VersatileImageField.pre_save calls this method."}
{"_id": "q_7376", "text": "Handle data sent from MultiValueField forms that set ppoi values.\n\n `instance`: The model instance that is being altered via a form\n `data`: The data sent from the form to this field which can be either:\n * `None`: This is unset data from an optional field\n * A two-position tuple: (image_form_data, ppoi_data)\n * `image_form-data` options:\n * `None` the file for this field is unchanged\n * `False` unassign the file form the field\n * `ppoi_data` data structure:\n * `%(x_coordinate)sx%(y_coordinate)s': The ppoi data to\n assign to the unchanged file"}
{"_id": "q_7377", "text": "Unregister the FilteredImage subclass currently assigned to attr_name.\n\n If a FilteredImage subclass isn't already registered to filters.\n `attr_name` NotRegistered will raise."}
{"_id": "q_7378", "text": "Return the appropriate URL.\n\n URL is constructed based on these field conditions:\n * If empty (not `self.name`) and a placeholder is defined, the\n URL to the placeholder is returned.\n * Otherwise, defaults to vanilla ImageFieldFile behavior."}
{"_id": "q_7379", "text": "Return the location where filtered images are stored."}
{"_id": "q_7380", "text": "Return the location where sized images are stored."}
{"_id": "q_7381", "text": "Return the location where filtered + sized images are stored."}
{"_id": "q_7382", "text": "Preprocess an image.\n\n An API hook for image pre-processing. Calls any image format specific\n pre-processors (if defined). I.E. If `image_format` is 'JPEG', this\n method will look for a method named `preprocess_JPEG`, if found\n `image` will be passed to it.\n\n Arguments:\n * `image`: a PIL Image instance\n * `image_format`: str, a valid PIL format (i.e. 'JPEG' or 'GIF')\n\n Subclasses should return a 2-tuple:\n * [0]: A PIL Image instance.\n * [1]: A dictionary of additional keyword arguments to be used\n when the instance is saved. If no additional keyword\n arguments, return an empty dict ({})."}
{"_id": "q_7383", "text": "Receive a PIL Image instance of a JPEG and returns 2-tuple.\n\n Args:\n * [0]: Image instance, converted to RGB\n * [1]: Dict with a quality key (mapped to the value of `QUAL` as\n defined by the `VERSATILEIMAGEFIELD_JPEG_RESIZE_QUALITY`\n setting)"}
{"_id": "q_7384", "text": "Return a PIL Image instance stored at `path_to_image`."}
{"_id": "q_7385", "text": "Return PPOI value as a string."}
{"_id": "q_7386", "text": "Create a resized image.\n\n `path_to_image`: The path to the image with the media directory to\n resize. If `None`, the\n VERSATILEIMAGEFIELD_PLACEHOLDER_IMAGE will be used.\n `save_path_on_storage`: Where on self.storage to save the resized image\n `width`: Width of resized image (int)\n `height`: Desired height of resized image (int)\n `filename_key`: A string that will be used in the sized image filename\n to signify what operation was done to it.\n Examples: 'crop' or 'scale'"}
{"_id": "q_7387", "text": "Return a `path_to_image` location on `storage` as dictated by `width`, `height`\n and `filename_key`"}
{"_id": "q_7388", "text": "Return the 'filtered path'"}
{"_id": "q_7389", "text": "Validate a list of size keys.\n\n `sizes`: An iterable of 2-tuples, both strings. Example:\n [\n ('large', 'url'),\n ('medium', 'crop__400x400'),\n ('small', 'thumbnail__100x100')\n ]"}
{"_id": "q_7390", "text": "Build a URL from `image_key`."}
{"_id": "q_7391", "text": "Takes a raw `Instruction` and translates it into a human readable text\n representation. As of writing, the text representation for WASM is not yet\n standardized, so we just emit some generic format."}
{"_id": "q_7392", "text": "Takes a `FunctionBody` and optionally a `FunctionType`, yielding the string \n representation of the function line by line. The function type is required\n for formatting function parameter and return value information."}
{"_id": "q_7393", "text": "Decodes raw bytecode, yielding `Instruction`s."}
{"_id": "q_7394", "text": "Deprecates a function, printing a warning on the first usage."}
{"_id": "q_7395", "text": "Checks the validity of the input.\n\n In case of an invalid input throws ValueError."}
{"_id": "q_7396", "text": "Helper method that returns the index of the string based on node's\n starting index"}
{"_id": "q_7397", "text": "Returns the Largest Common Substring of Strings provided in stringIdxs.\n If stringIdxs is not provided, the LCS of all strings is returned.\n\n ::param stringIdxs: Optional: List of indexes of strings."}
{"_id": "q_7398", "text": "Helper method returns the starting indexes of strings in GST"}
{"_id": "q_7399", "text": "Helper method, returns the edge label between a node and it's parent"}
{"_id": "q_7400", "text": "Generator of unique terminal symbols used for building the Generalized Suffix Tree.\n Unicode Private Use Area U+E000..U+F8FF is used to ensure that terminal symbols\n are not part of the input string."}
{"_id": "q_7401", "text": "connect to the server"}
{"_id": "q_7402", "text": "Parse read a response from the AGI and parse it.\n\n :return dict: The AGI response parsed into a dict."}
{"_id": "q_7403", "text": "Parse AGI results using Regular expression.\n\n AGI Result examples::\n\n 100 result=0 Trying...\n\n 200 result=0\n\n 200 result=-1\n\n 200 result=132456\n\n 200 result= (timeout)\n\n 510 Invalid or unknown command\n\n 520-Invalid command syntax. Proper usage follows:\n int() argument must be a string, a bytes-like object or a number, not\n 'NoneType'\n\n HANGUP"}
{"_id": "q_7404", "text": "Mostly used for unit testing. Allow to use a static uuid and reset\n all counter"}
{"_id": "q_7405", "text": "Mostly used for debugging"}
{"_id": "q_7406", "text": "Returns data from a package directory.\n 'path' should be an absolute path."}
{"_id": "q_7407", "text": "Create a graph of constraints for both must- and cannot-links"}
{"_id": "q_7408", "text": "Translates a regular Scikit-Learn estimator or pipeline to a PMML pipeline.\n\n\tParameters:\n\t----------\n\tobj: BaseEstimator\n\t\tThe object.\n\n\tactive_fields: list of strings, optional\n\t\tFeature names. If missing, \"x1\", \"x2\", .., \"xn\" are assumed.\n\n\ttarget_fields: list of strings, optional\n\t\tLabel name(s). If missing, \"y\" is assumed."}
{"_id": "q_7409", "text": "Converts a fitted Scikit-Learn pipeline to PMML.\n\n\tParameters:\n\t----------\n\tpipeline: PMMLPipeline\n\t\tThe pipeline.\n\n\tpmml: string\n\t\tThe path to where the PMML document should be stored.\n\n\tuser_classpath: list of strings, optional\n\t\tThe paths to JAR files that provide custom Transformer, Selector and/or Estimator converter classes.\n\t\tThe JPMML-SkLearn classpath is constructed by appending user JAR files to package JAR files.\n\n\twith_repr: boolean, optional\n\t\tIf true, insert the string representation of pipeline into the PMML document.\n\n\tdebug: boolean, optional\n\t\tIf true, print information about the conversion process.\n\n\tjava_encoding: string, optional\n\t\tThe character encoding to use for decoding Java output and error byte streams."}
{"_id": "q_7410", "text": "Returns an instance of the formset"}
{"_id": "q_7411", "text": "If the formset is valid, save the associated models."}
{"_id": "q_7412", "text": "Handles POST requests, instantiating a formset instance with the passed\n POST variables and then checked for validity."}
{"_id": "q_7413", "text": "Overrides construct_formset to attach the model class as\n an attribute of the returned formset instance."}
{"_id": "q_7414", "text": "Returns the inline formset instances"}
{"_id": "q_7415", "text": "Handles GET requests and instantiates a blank version of the form and formsets."}
{"_id": "q_7416", "text": "Handles POST requests, instantiating a form and formset instances with the passed\n POST variables and then checked for validity."}
{"_id": "q_7417", "text": "If `inlines_names` has been defined, add each formset to the context under\n its corresponding entry in `inlines_names`"}
{"_id": "q_7418", "text": "Returns the start date for a model instance"}
{"_id": "q_7419", "text": "Returns an integer representing the first day of the week.\n\n 0 represents Monday, 6 represents Sunday."}
{"_id": "q_7420", "text": "Returns a queryset of models for the month requested"}
{"_id": "q_7421", "text": "Injects variables necessary for rendering the calendar into the context.\n\n Variables added are: `calendar`, `weekdays`, `month`, `next_month` and `previous_month`."}
{"_id": "q_7422", "text": "Get primary key properties for a SQLAlchemy model.\n\n :param model: SQLAlchemy model class"}
{"_id": "q_7423", "text": "Deserialize a serialized value to a model instance.\n\n If the parent schema is transient, create a new (transient) instance.\n Otherwise, attempt to find an existing instance in the database.\n :param value: The value to deserialize."}
{"_id": "q_7424", "text": "Deserialize data to internal representation.\n\n :param session: Optional SQLAlchemy session.\n :param instance: Optional existing instance to modify.\n :param transient: Optional switch to allow transient instantiation."}
{"_id": "q_7425", "text": "Deletes old stellar tables that are not used anymore"}
{"_id": "q_7426", "text": "Takes a snapshot of the database"}
{"_id": "q_7427", "text": "Returns a list of snapshots"}
{"_id": "q_7428", "text": "Removes a snapshot"}
{"_id": "q_7429", "text": "Renames a snapshot"}
{"_id": "q_7430", "text": "Replaces a snapshot"}
{"_id": "q_7431", "text": "Updates indexes after each epoch for shuffling"}
{"_id": "q_7432", "text": "Defines the default function for cleaning text.\n\n This function operates over a list."}
{"_id": "q_7433", "text": "Apply function to list of elements.\n\n Automatically determines the chunk size."}
{"_id": "q_7434", "text": "Analyze document length statistics for padding strategy"}
{"_id": "q_7435", "text": "Return a new Colorful object with the given color config."}
{"_id": "q_7436", "text": "Parse the given rgb.txt file into a Python dict.\n\n See https://en.wikipedia.org/wiki/X11_color_names for more information\n\n :param str path: the path to the X11 rgb.txt file"}
{"_id": "q_7437", "text": "Sanitze the given color palette so it can\n be safely used by Colorful.\n\n It will convert colors specified in hex RGB to\n a RGB channel triplet."}
{"_id": "q_7438", "text": "Detect what color palettes are supported.\n It'll return a valid color mode to use\n with colorful.\n\n :param dict env: the environment dict like returned by ``os.envion``"}
{"_id": "q_7439", "text": "Convert the given hex string to a\n valid RGB channel triplet."}
{"_id": "q_7440", "text": "Check if the given hex value is a valid RGB color\n\n It should match the format: [0-9a-fA-F]\n and be of length 3 or 6."}
{"_id": "q_7441", "text": "Translate the given color name to a valid\n ANSI escape code.\n\n :parma str colorname: the name of the color to resolve\n :parma str offset: the offset for the color code\n :param int colormode: the color mode to use. See ``translate_rgb_to_ansi_code``\n :parma dict colorpalette: the color palette to use for the color name mapping\n\n :returns str: the color as ANSI escape code\n\n :raises ColorfulError: if the given color name is invalid"}
{"_id": "q_7442", "text": "Resolve the given modifier name to a valid\n ANSI escape code.\n\n :param str modifiername: the name of the modifier to resolve\n :param int colormode: the color mode to use. See ``translate_rgb_to_ansi_code``\n\n :returns str: the ANSI escape code for the modifier\n\n :raises ColorfulError: if the given modifier name is invalid"}
{"_id": "q_7443", "text": "Translate the given style to an ANSI escape code\n sequence.\n\n ``style`` examples are:\n\n * green\n * bold\n * red_on_black\n * bold_green\n * italic_yellow_on_cyan\n\n :param str style: the style to translate\n :param int colormode: the color mode to use. See ``translate_rgb_to_ansi_code``\n :parma dict colorpalette: the color palette to use for the color name mapping"}
{"_id": "q_7444", "text": "Style the given string according to the given\n ANSI style string.\n\n :param str string: the string to style\n :param tuple ansi_style: the styling string returned by ``translate_style``\n :param int colormode: the color mode to use. See ``translate_rgb_to_ansi_code``\n\n :returns: a string containing proper ANSI sequence"}
{"_id": "q_7445", "text": "Use a predefined style as color palette\n\n :param str style_name: the name of the style"}
{"_id": "q_7446", "text": "Format the given string with the given ``args`` and ``kwargs``.\n The string can contain references to ``c`` which is provided by\n this colorful object.\n\n :param str string: the string to format"}
{"_id": "q_7447", "text": "Get data from the USB device."}
{"_id": "q_7448", "text": "Get device humidity reading.\n\n Params:\n - sensors: optional list of sensors to get a reading for, examples:\n [0,] - get reading for sensor 0\n [0, 1,] - get reading for sensors 0 and 1\n None - get readings for all sensors"}
{"_id": "q_7449", "text": "Read data from device."}
{"_id": "q_7450", "text": "Update, rolling back on failure."}
{"_id": "q_7451", "text": "Create a new temporary file and write some initial text to it.\n\n :param text: the text to write to the temp file\n :type text: str\n :returns: the file name of the newly created temp file\n :rtype: str"}
{"_id": "q_7452", "text": "Get a list of contacts from one or more address books.\n\n :param address_books: the address books to search\n :type address_books: list(address_book.AddressBook)\n :param query: a search query to select contacts\n :type quer: str\n :param method: the search method, one of \"all\", \"name\" or \"uid\"\n :type method: str\n :param reverse: reverse the order of the returned contacts\n :type reverse: bool\n :param group: group results by address book\n :type group: bool\n :param sort: the field to use for sorting, one of \"first_name\", \"last_name\"\n :type sort: str\n :returns: contacts from the address_books that match the query\n :rtype: list(CarddavObject)"}
{"_id": "q_7453", "text": "Merge the parsed arguments from argparse into the config object.\n\n :param args: the parsed command line arguments\n :type args: argparse.Namespace\n :param config: the parsed config file\n :type config: config.Config\n :returns: the merged config object\n :rtype: config.Config"}
{"_id": "q_7454", "text": "Load all address books with the given names from the config.\n\n :param names: the address books to load\n :type names: list(str)\n :param config: the config instance to use when looking up address books\n :type config: config.Config\n :param search_queries: a mapping of address book names to search queries\n :type search_queries: dict\n :yields: the loaded address books\n :ytype: addressbook.AddressBook"}
{"_id": "q_7455", "text": "Prepare the search query string from the given command line args.\n\n Each address book can get a search query string to filter vcards befor\n loading them. Depending on the question if the address book is used for\n source or target searches different regexes have to be combined into one\n search string.\n\n :param args: the parsed command line\n :type args: argparse.Namespace\n :returns: a dict mapping abook names to their loading queries, if the query\n is None it means that all cards should be loaded\n :rtype: dict(str:str or None)"}
{"_id": "q_7456", "text": "Print a phone application friendly contact table.\n\n :param search_terms: used as search term to filter the contacts before\n printing\n :type search_terms: str\n :param vcard_list: the vcards to search for matching entries which should\n be printed\n :type vcard_list: list of carddav_object.CarddavObject\n :param parsable: machine readable output: columns devided by tabulator (\\t)\n :type parsable: bool\n :returns: None\n :rtype: None"}
{"_id": "q_7457", "text": "Print a user friendly contacts table.\n\n :param vcard_list: the vcards to print\n :type vcard_list: list of carddav_object.CarddavObject\n :param parsable: machine readable output: columns devided by tabulator (\\t)\n :type parsable: bool\n :returns: None\n :rtype: None"}
{"_id": "q_7458", "text": "Modify a contact in an external editor.\n\n :param selected_vcard: the contact to modify\n :type selected_vcard: carddav_object.CarddavObject\n :param input_from_stdin_or_file: new data from stdin (or a file) that\n should be incorperated into the contact, this should be a yaml\n formatted string\n :type input_from_stdin_or_file: str\n :param open_editor: whether to open the new contact in the edior after\n creation\n :type open_editor: bool\n :returns: None\n :rtype: None"}
{"_id": "q_7459", "text": "Remove a contact from the addressbook.\n\n :param selected_vcard: the contact to delete\n :type selected_vcard: carddav_object.CarddavObject\n :param force: delete without confirmation\n :type force: bool\n :returns: None\n :rtype: None"}
{"_id": "q_7460", "text": "Open the vcard file for a contact in an external editor.\n\n :param selected_vcard: the contact to edit\n :type selected_vcard: carddav_object.CarddavObject\n :param editor: the eitor command to use\n :type editor: str\n :returns: None\n :rtype: None"}
{"_id": "q_7461", "text": "Merge two contacts into one.\n\n :param vcard_list: the vcards from which to choose contacts for merging\n :type vcard_list: list of carddav_object.CarddavObject\n :param selected_address_books: the addressbooks to use to find the target\n contact\n :type selected_address_books: list(addressbook.AddressBook)\n :param search_terms: the search terms to find the target contact\n :type search_terms: str\n :param target_uid: the uid of the target contact or empty\n :type target_uid: str\n :returns: None\n :rtype: None"}
{"_id": "q_7462", "text": "Find the name of the action for the supplied alias. If no action is\n associated with the given alias, None is returned.\n\n :param alias: the alias to look up\n :type alias: str\n :returns: the name of the corresponding action or None\n :rtype: str or NoneType"}
{"_id": "q_7463", "text": "Convert the named field to bool.\n\n The current value should be one of the strings \"yes\" or \"no\". It will\n be replaced with its boolean counterpart. If the field is not present\n in the config object, the default value is used.\n\n :param config: the config section where to set the option\n :type config: configobj.ConfigObj\n :param name: the name of the option to convert\n :type name: str\n :param default: the default value to use if the option was not\n previously set\n :type default: bool\n :returns: None"}
{"_id": "q_7464", "text": "Use this if you want to create a new contact from user input."}
{"_id": "q_7465", "text": "Get some part of the \"N\" entry in the vCard as a list\n\n :param part: the name to get e.g. \"prefix\" or \"given\"\n :type part: str\n :returns: a list of entries for this name part\n :rtype: list(str)"}
{"_id": "q_7466", "text": "categories variable must be a list"}
{"_id": "q_7467", "text": "Parse type value of phone numbers, email and post addresses.\n\n :param types: list of type values\n :type types: list(str)\n :param value: the corresponding label, required for more verbose\n exceptions\n :type value: str\n :param supported_types: all allowed standard types\n :type supported_types: list(str)\n :returns: tuple of standard and custom types and pref integer\n :rtype: tuple(list(str), list(str), int)"}
{"_id": "q_7468", "text": "Converts a list to a string recursively so that nested lists are supported\n\n :param input: a list of strings and lists of strings (and so on recursive)\n :type input: list\n :param delimiter: the delimiter to use when joining the items\n :type delimiter: str\n :returns: the recursively joined list\n :rtype: str"}
{"_id": "q_7469", "text": "Convert string to date object.\n\n :param input: the date string to parse\n :type input: str\n :returns: the parsed datetime object\n :rtype: datetime.datetime"}
{"_id": "q_7470", "text": "Calculate the minimum length of initial substrings of uid1 and uid2\n for them to be different.\n\n :param uid1: first uid to compare\n :type uid1: str\n :param uid2: second uid to compare\n :type uid2: str\n :returns: the length of the shortest unequal initial substrings\n :rtype: int"}
{"_id": "q_7471", "text": "Search in all fields for contacts matching query.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)"}
{"_id": "q_7472", "text": "Search in the name field for contacts matching query.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)"}
{"_id": "q_7473", "text": "Search for contacts with a matching uid.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)"}
{"_id": "q_7474", "text": "Search this address book for contacts matching the query.\n\n The method can be one of \"all\", \"name\" and \"uid\". The backend for this\n address book might be load()ed if needed.\n\n :param query: the query to search for\n :type query: str\n :param method: the type of fields to use when searching\n :type method: str\n :returns: all found contacts\n :rtype: list(carddav_object.CarddavObject)"}
{"_id": "q_7475", "text": "Create a dictionary of shortened UIDs for all contacts.\n\n All arguments are only used if the address book is not yet initialized\n and will just be handed to self.load().\n\n :param query: see self.load()\n :type query: str\n :returns: the contacts mapped by the shortest unique prefix of their UID\n :rtype: dict(str: CarddavObject)"}
{"_id": "q_7476", "text": "Get the shortened UID for the given UID.\n\n :param uid: the full UID to shorten\n :type uid: str\n :returns: the shortened uid or the empty string\n :rtype: str"}
{"_id": "q_7477", "text": "Load all vcard files in this address book from disk.\n\n If a search string is given only files which contents match that will\n be loaded.\n\n :param query: a regular expression to limit the results\n :type query: str\n :param search_in_source_files: apply search regexp directly on the .vcf files to speed up parsing (less accurate)\n :type search_in_source_files: bool\n :returns: the number of successfully loaded cards and the number of\n errors\n :rtype: int, int\n :throws: AddressBookParseError"}
{"_id": "q_7478", "text": "Create the JSON for configuring arthur to collect data\n\n https://github.com/grimoirelab/arthur#adding-tasks\n Sample for git:\n\n {\n \"tasks\": [\n {\n \"task_id\": \"arthur.git\",\n \"backend\": \"git\",\n \"backend_args\": {\n \"gitpath\": \"/tmp/arthur_git/\",\n \"uri\": \"https://github.com/grimoirelab/arthur.git\"\n },\n \"category\": \"commit\",\n \"archive_args\": {\n \"archive_path\": '/tmp/test_archives',\n \"fetch_from_archive\": false,\n \"archive_after\": None\n },\n \"scheduler_args\": {\n \"delay\": 10\n }\n }\n ]\n }"}
{"_id": "q_7479", "text": "Return the GitHub SHA for a file in the repository"}
{"_id": "q_7480", "text": "Execute the merge identities phase\n\n :param config: a Mordred config object"}
{"_id": "q_7481", "text": "Execute the panels phase\n\n :param config: a Mordred config object"}
{"_id": "q_7482", "text": "Configure the logging level and output."}
{"_id": "q_7483", "text": "Get params to execute the micro-mordred"}
{"_id": "q_7484", "text": "Upload a panel to Elasticsearch if it does not exist yet.\n\n If a list of data sources is specified, upload only those\n elements (visualizations, searches) that match that data source.\n\n :param panel_file: file name of panel (dashboard) to upload\n :param data_sources: list of data sources\n :param strict: only upload a dashboard if it is newer than the one already existing"}
{"_id": "q_7485", "text": "Upload to Kibiter the title for the dashboard.\n\n The title is shown on top of the dashboard menu, and is usually\n the name of the project being dashboarded.\n This is done only for Kibiter 6.x.\n\n :param kibiter_major: major version of kibiter"}
{"_id": "q_7486", "text": "Create the menu definition to access the panels in a dashboard.\n\n :param menu: dashboard menu to upload\n :param kibiter_major: major version of kibiter"}
{"_id": "q_7487", "text": "Remove existing menu for dashboard, if any.\n\n Usually, we remove the menu before creating a new one.\n\n :param kibiter_major: major version of kibiter"}
{"_id": "q_7488", "text": "Get the menu entries from the panel definition"}
{"_id": "q_7489", "text": "Order the dashboard menu"}
{"_id": "q_7490", "text": "Compose projects.json only for mbox, but using the mailing_lists lists\n\n change: 'https://dev.eclipse.org/mailman/listinfo/emft-dev'\n to: 'emft-dev /home/bitergia/mboxes/emft-dev.mbox/emft-dev.mbox'\n\n :param projects: projects.json\n :return: projects.json with mbox"}
{"_id": "q_7491", "text": "Compose projects.json for git\n\n We need to replace '/c/' by '/gitroot/' for instance\n\n change: 'http://git.eclipse.org/c/xwt/org.eclipse.xwt.git'\n to: 'http://git.eclipse.org/gitroot/xwt/org.eclipse.xwt.git'\n\n :param projects: projects.json\n :param data: eclipse JSON\n :return: projects.json with git"}
{"_id": "q_7492", "text": "Compose projects.json for mailing lists\n\n Upstream has two different keys for mailing lists: 'mailing_lists' and 'dev_list'\n The key 'mailing_lists' is an array with mailing lists\n The key 'dev_list' is a dict with only one mailing list\n\n :param projects: projects.json\n :param data: eclipse JSON\n :return: projects.json with mailing_lists"}
{"_id": "q_7493", "text": "Compose projects.json for github\n\n :param projects: projects.json\n :param data: eclipse JSON\n :return: projects.json with github"}
{"_id": "q_7494", "text": "Compose the projects JSON file only with the project names\n\n :param projects: projects.json\n :param data: eclipse JSON with the original format\n :return: projects.json with titles"}
{"_id": "q_7495", "text": "Compose projects.json with all data sources\n\n :param projects: projects.json\n :param data: eclipse JSON\n :return: projects.json with all data sources"}
{"_id": "q_7496", "text": "Execute autorefresh for areas of code study if configured"}
{"_id": "q_7497", "text": "Execute the studies configured for the current backend"}
{"_id": "q_7498", "text": "Retain the identities in SortingHat based on the `retention_time`\n value declared in the setup.cfg.\n\n :param retention_time: maximum number of minutes with respect to the current date to retain the SortingHat data"}
{"_id": "q_7499", "text": "Return a list with the repositories for a backend_section"}
{"_id": "q_7500", "text": "Convert from eclipse projects format to grimoire projects json format"}
{"_id": "q_7501", "text": "Change a param in the config"}
{"_id": "q_7502", "text": "Get Elasticsearch version.\n\n Get the version of Elasticsearch. This is useful because\n Elasticsearch and Kibiter are paired (same major version for 5, 6).\n\n :param url: Elasticsearch url hosting Kibiter indices\n :returns: major version, as string"}
{"_id": "q_7503", "text": "Start a task manager per backend to complete the tasks.\n\n :param task_cls: list of tasks classes to be executed\n :param big_delay: seconds before global tasks are executed, should be days usually\n :param small_delay: seconds before backend tasks are executed, should be minutes\n :param wait_for_threads: boolean to set when threads are infinite or\n should be synchronized in a meeting point"}
{"_id": "q_7504", "text": "Tasks that should be done just one time"}
{"_id": "q_7505", "text": "Validates the provided config to make sure all the required fields are\n there."}
{"_id": "q_7506", "text": "Customize the message format based on the log level."}
{"_id": "q_7507", "text": "Initialize the dictionary of architectures for assembling via keystone"}
{"_id": "q_7508", "text": "Sys.out replacer, by default with stderr.\n\n Use it like this:\n with replace_print_with(fileobj):\n print \"hello\" # writes to the file\n print \"done\" # prints to stdout\n\n Args:\n fileobj: a file object to replace stdout.\n\n Yields:\n The printer."}
{"_id": "q_7509", "text": "Compact a list of integers into a comma-separated string of intervals.\n\n Args:\n value_list: A list of sortable integers such as a list of numbers\n\n Returns:\n A compact string representation, such as \"1-5,8,12-15\""}
{"_id": "q_7510", "text": "Get a storage client using the provided credentials or defaults."}
{"_id": "q_7511", "text": "Load context from a text file in gcs.\n\n Args:\n gcs_file_path: The target file path; should have the 'gs://' prefix.\n credentials: Optional credential to be used to load the file from gcs.\n\n Returns:\n The content of the text file as a string."}
{"_id": "q_7512", "text": "Check whether the file exists, in GCS.\n\n Args:\n gcs_file_path: The target file path; should have the 'gs://' prefix.\n credentials: Optional credential to be used to load the file from gcs.\n\n Returns:\n True if the file's there."}
{"_id": "q_7513", "text": "True iff an object exists matching the input GCS pattern.\n\n The GCS pattern must be a full object reference or a \"simple pattern\" that\n conforms to the dsub input and output parameter restrictions:\n\n * No support for **, ? wildcards or [] character ranges\n * Wildcards may only appear in the file name\n\n Args:\n file_pattern: eg. 'gs://foo/ba*'\n credentials: Optional credential to be used to load the file from gcs.\n\n Raises:\n ValueError: if file_pattern breaks the rules.\n\n Returns:\n True iff a file exists that matches that pattern."}
{"_id": "q_7514", "text": "True if each output contains at least one file or no output specified."}
{"_id": "q_7515", "text": "Return a dict object representing a pipeline input argument."}
{"_id": "q_7516", "text": "Return a multi-line string of the full pipeline docker command."}
{"_id": "q_7517", "text": "Builds pipeline args for execution.\n\n Args:\n project: string name of project.\n script: Body of the script to execute.\n job_params: dictionary of values for labels, envs, inputs, and outputs\n for this job.\n task_params: dictionary of values for labels, envs, inputs, and outputs\n for this task.\n reserved_labels: dictionary of reserved labels (e.g. task-id,\n task-attempt)\n preemptible: use a preemptible VM for the job\n logging_uri: path for job logging output.\n scopes: list of scope.\n keep_alive: Seconds to keep VM alive on failure\n\n Returns:\n A nested dictionary with one entry under the key pipelineArgs containing\n the pipeline arguments."}
{"_id": "q_7518", "text": "Convert the integer UTC time value into a local datetime."}
{"_id": "q_7519", "text": "Returns a Pipeline object for the job."}
{"_id": "q_7520", "text": "Kills the operations associated with the specified job or job.task.\n\n Args:\n user_ids: List of user ids who \"own\" the job(s) to cancel.\n job_ids: List of job_ids to cancel.\n task_ids: List of task-ids to cancel.\n labels: List of LabelParam, each must match the job(s) to be canceled.\n create_time_min: a timezone-aware datetime value for the earliest create\n time of a task, inclusive.\n create_time_max: a timezone-aware datetime value for the most recent\n create time of a task, inclusive.\n\n Returns:\n A list of tasks canceled and a list of error messages."}
{"_id": "q_7521", "text": "Returns the most relevant status string and last updated date string.\n\n This string is meant for display only.\n\n Returns:\n A printable status string and date string."}
{"_id": "q_7522", "text": "Create a task name from a job-id, task-id, and task-attempt.\n\n Task names are used internally by dsub as well as by the docker task runner.\n The name is formatted as \"<job-id>.<task-id>[.task-attempt]\". Task names\n follow formatting conventions allowing them to be safely used as a docker\n name.\n\n Args:\n job_id: (str) the job ID.\n task_id: (str) the task ID.\n task_attempt: (int) the task attempt.\n\n Returns:\n a task name string."}
{"_id": "q_7523", "text": "Rewrite string so that all characters are valid in a docker name suffix."}
{"_id": "q_7524", "text": "Return a tuple for sorting 'most recent first'."}
{"_id": "q_7525", "text": "Determine if the provided time is within the range, inclusive."}
{"_id": "q_7526", "text": "Return a Task object with this task's info."}
{"_id": "q_7527", "text": "Returns a command to delocalize logs.\n\n Args:\n logging_path: location of log files.\n user_project: name of the project to be billed for the request.\n\n Returns:\n eg. 'gs://bucket/path/myfile' or 'gs://bucket/script-foobar-12'"}
{"_id": "q_7528", "text": "The local dir for staging files for that particular task."}
{"_id": "q_7529", "text": "Returns a command that will stage recursive inputs."}
{"_id": "q_7530", "text": "Returns a directory or file path to be the target for \"gsutil cp\".\n\n If the filename contains a wildcard, then the target path must\n be a directory in order to ensure consistency whether the source pattern\n contains one or multiple files.\n\n\n Args:\n local_file_path: A full path terminating in a file or a file wildcard.\n\n Returns:\n The path to use as the \"gsutil cp\" target."}
{"_id": "q_7531", "text": "Returns a command that will stage inputs."}
{"_id": "q_7532", "text": "Get the dsub version out of the _dsub_version.py source file.\n\n Setup.py should not import dsub version from dsub directly since ambiguity in\n import order could lead to an old version of dsub setting the version number.\n Parsing the file directly is simpler than using import tools (whose interface\n varies between python 2.7, 3.4, and 3.5).\n\n Returns:\n string of dsub version.\n\n Raises:\n ValueError: if the version is not found."}
{"_id": "q_7533", "text": "Return a dict with variables for the 'prepare' action."}
{"_id": "q_7534", "text": "Return a dict with variables for the 'localization' action."}
{"_id": "q_7535", "text": "Return a dict with variables for the 'delocalization' action."}
{"_id": "q_7536", "text": "Returns a dictionary for the user container environment."}
{"_id": "q_7537", "text": "Returns the status of this operation.\n\n Raises:\n ValueError: if the operation status cannot be determined.\n\n Returns:\n A printable status string (RUNNING, SUCCESS, CANCELED or FAILURE)."}
{"_id": "q_7538", "text": "Returns the most relevant status string and failed action.\n\n This string is meant for display only.\n\n Returns:\n A printable status string and name of failed action (if any)."}
{"_id": "q_7539", "text": "Rounds RAM up to the nearest multiple of _MEMORY_MULTIPLE."}
{"_id": "q_7540", "text": "Returns a custom machine type string."}
{"_id": "q_7541", "text": "Build a VirtualMachine object for a Pipeline request.\n\n Args:\n network (dict): Network details for the pipeline to run in.\n machine_type (str): GCE Machine Type string for the pipeline.\n preemptible (bool): Use a preemptible VM for the job.\n service_account (dict): Service account configuration for the VM.\n boot_disk_size_gb (int): Boot disk size in GB.\n disks (list[dict]): List of disks to mount.\n accelerators (list[dict]): List of accelerators to attach to the VM.\n labels (dict[string, string]): Labels for the VM.\n cpu_platform (str): The CPU platform to request.\n nvidia_driver_version (str): The NVIDIA driver version to use when attaching\n an NVIDIA GPU accelerator.\n\n Returns:\n An object representing a VirtualMachine."}
{"_id": "q_7542", "text": "Build an Action object for a Pipeline request.\n\n Args:\n name (str): An optional name for the container.\n image_uri (str): The URI to pull the container image from.\n commands (List[str]): commands and arguments to run inside the container.\n entrypoint (str): overrides the ENTRYPOINT specified in the container.\n environment (dict[str,str]): The environment to pass into the container.\n pid_namespace (str): The PID namespace to run the action inside.\n flags (str): Flags that control the execution of this action.\n port_mappings (dict[int, int]): A map of container to host port mappings for\n this container.\n mounts (List): A list of mounts to make available to the action.\n labels (dict[str]): Labels to associate with the action.\n\n Returns:\n An object representing an Action resource."}
{"_id": "q_7543", "text": "Returns a provider for job submission requests."}
{"_id": "q_7544", "text": "Add provider required arguments epilog message, parse, and validate."}
{"_id": "q_7545", "text": "A string with the arguments to point dstat to the same provider+project."}
{"_id": "q_7546", "text": "Returns a URI with placeholders replaced by metadata values."}
{"_id": "q_7547", "text": "Inserts task metadata into the logging URI.\n\n The core behavior is inspired by the Google Pipelines API:\n (1) If the uri ends in \".log\", then that is the logging path.\n (2) Otherwise, the uri is treated as \"directory\" for logs and a filename\n needs to be automatically generated.\n\n For (1), if the job is a --tasks job, then the {task-id} is inserted\n before \".log\".\n\n For (2), the file name generated is {job-id}, or for --tasks jobs, it is\n {job-id}.{task-id}.\n\n In both cases .{task-attempt} is inserted before .log for --retries jobs.\n\n In addition, full task metadata substitution is supported. The URI\n may include substitution strings such as\n \"{job-id}\", \"{task-id}\", \"{job-name}\", \"{user-id}\", and \"{task-attempt}\".\n\n Args:\n uri: User-specified logging URI which may contain substitution fields.\n job_metadata: job-global metadata.\n task_metadata: tasks-specific metadata.\n\n Returns:\n The logging_uri formatted as described above."}
{"_id": "q_7548", "text": "Validate google-v2 arguments."}
{"_id": "q_7549", "text": "Extract job-global resources requirements from input args.\n\n Args:\n args: parsed command-line arguments\n\n Returns:\n Resources object containing the requested resources for the job"}
{"_id": "q_7550", "text": "Print status info as we wait for those jobs.\n\n Blocks until either all of the listed jobs succeed,\n or one of them fails.\n\n Args:\n provider: job service provider\n job_ids: a set of job IDs (string) to wait for\n poll_interval: integer seconds to wait between iterations\n stop_on_failure: whether to stop waiting if one of the tasks fails.\n\n Returns:\n Empty list if there was no error,\n a list of error messages from the failed tasks otherwise."}
{"_id": "q_7551", "text": "Wait for job and retry any tasks that fail.\n\n Stops retrying an individual task when: it succeeds, is canceled, or has been\n retried \"retries\" times.\n\n This function exits when there are no tasks running and there are no tasks\n eligible to be retried.\n\n Args:\n provider: job service provider\n job_id: a single job ID (string) to wait for\n poll_interval: integer seconds to wait between iterations\n retries: number of retries\n job_descriptor: job descriptor used to originally submit job\n\n Returns:\n Empty list if there was no error,\n a list containing an error message from a failed task otherwise."}
{"_id": "q_7552", "text": "A list with, for each job, its dominant task.\n\n The dominant task is the one that exemplifies its job's\n status. It is either:\n - the first (FAILURE or CANCELED) task, or if none\n - the first RUNNING task, or if none\n - the first SUCCESS task.\n\n Args:\n tasks: a list of tasks to consider\n\n Returns:\n A list with, for each job, its dominant task."}
{"_id": "q_7553", "text": "Waits until any of the listed jobs is not running.\n\n In particular, if any of the jobs sees one of its tasks fail,\n we count the whole job as failing (but do not terminate the remaining\n tasks ourselves).\n\n Args:\n provider: job service provider\n job_ids: a list of job IDs (string) to wait for\n poll_interval: integer seconds to wait between iterations\n\n Returns:\n A set of the jobIDs with still at least one running task."}
{"_id": "q_7554", "text": "Validates that job and task argument names do not overlap."}
{"_id": "q_7555", "text": "Helper function to return an appropriate set of mount parameters."}
{"_id": "q_7556", "text": "Convenience function simplifies construction of the logging uri."}
{"_id": "q_7557", "text": "Split a string into a pair, which can have one empty value.\n\n Args:\n pair_string: The string to be split.\n separator: The separator to be used for splitting.\n nullable_idx: The location to be set to null if the separator is not in the\n input string. Should be either 0 or 1.\n\n Returns:\n A list containing the pair.\n\n Raises:\n IndexError: If nullable_idx is not 0 or 1."}
{"_id": "q_7558", "text": "Parses task parameters from a TSV.\n\n Args:\n tasks: Dict containing the path to a TSV file and task numbers to run\n variables, input, and output parameters as column headings. Subsequent\n lines specify parameter values, one row per job.\n retries: Number of retries allowed.\n input_file_param_util: Utility for producing InputFileParam objects.\n output_file_param_util: Utility for producing OutputFileParam objects.\n\n Returns:\n task_descriptors: an array of records, each containing the task-id,\n task-attempt, 'envs', 'inputs', 'outputs', 'labels' that defines the set of\n parameters for each task of the job.\n\n Raises:\n ValueError: If no job records were provided"}
{"_id": "q_7559", "text": "Parse flags of key=value pairs and return a list of argclass.\n\n For pair variables, we need to:\n * split the input into name=value pairs (value optional)\n * Create the EnvParam object\n\n Args:\n labels: list of 'key' or 'key=value' strings.\n argclass: Container class for args, must instantiate with argclass(k, v).\n\n Returns:\n list of argclass objects."}
{"_id": "q_7560", "text": "Convert the timeout duration to seconds.\n\n The value must be of the form \"<integer><unit>\" where supported\n units are s, m, h, d, w (seconds, minutes, hours, days, weeks).\n\n Args:\n interval: A \"<integer><unit>\" string.\n valid_units: A list of supported units.\n\n Returns:\n A string of the form \"<integer>s\" or None if timeout is empty."}
{"_id": "q_7561", "text": "Produce a default variable name if none is specified."}
{"_id": "q_7562", "text": "Find the file provider for a URI."}
{"_id": "q_7563", "text": "Do basic validation of the uri, return the path and filename."}
{"_id": "q_7564", "text": "Return a valid docker_path from a Google Persistent Disk url."}
{"_id": "q_7565", "text": "Return a MountParam given a GCS bucket, disk image or local path."}
{"_id": "q_7566", "text": "Turn the specified name and value into a valid Google label."}
{"_id": "q_7567", "text": "For each task, ensure that each task param entry is not None."}
{"_id": "q_7568", "text": "Return a new dict with any empty items removed.\n\n Note that this is not a deep check. If d contains a dictionary which\n itself contains empty items, those are never checked.\n\n This method exists to make to_serializable() functions cleaner.\n We could revisit this some day, but for now, the serialized objects are\n stripped of empty values to keep the output YAML more compact.\n\n Args:\n d: a dictionary\n required: list of required keys (for example, TaskDescriptors always emit\n the \"task-id\", even if None)\n\n Returns:\n A dictionary with empty items removed."}
{"_id": "q_7569", "text": "Converts a task-id to the numeric task-id.\n\n Args:\n task_id: task-id in either task-n or n format\n\n Returns:\n n"}
{"_id": "q_7570", "text": "Raise ValueError if the label is invalid."}
{"_id": "q_7571", "text": "Populate a JobDescriptor from the local provider's original meta.yaml.\n\n The local job provider had the first incarnation of a YAML file for each\n task. That idea was extended here in the JobDescriptor and the local\n provider adopted the JobDescriptor.to_yaml() call to write its meta.yaml.\n\n The JobDescriptor.from_yaml() detects if it receives a local provider's\n \"v0\" meta.yaml and calls this function.\n\n Args:\n job: an object produced from decoding meta.yaml.\n\n Returns:\n A JobDescriptor populated as best we can from the old meta.yaml."}
{"_id": "q_7572", "text": "Populate and return a JobDescriptor from a YAML string."}
{"_id": "q_7573", "text": "Returns the task_descriptor corresponding to task_id."}
{"_id": "q_7574", "text": "Return a dictionary of environment variables for the user container."}
{"_id": "q_7575", "text": "Returns a dict combining the field for job and task params."}
{"_id": "q_7576", "text": "Kill jobs or job tasks.\n\n This function separates ddel logic from flag parsing and user output. Users\n of ddel who intend to access the data programmatically should use this.\n\n Args:\n provider: an instantiated dsub provider.\n user_ids: a set of user ids who \"own\" the job(s) to delete.\n job_ids: a set of job ids to delete.\n task_ids: a set of task ids to delete.\n labels: a set of LabelParam, each must match the job(s) to be cancelled.\n create_time_min: a timezone-aware datetime value for the earliest create\n time of a task, inclusive.\n create_time_max: a timezone-aware datetime value for the most recent create\n time of a task, inclusive.\n\n Returns:\n list of job ids which were deleted."}
{"_id": "q_7577", "text": "Return the value for the specified action."}
{"_id": "q_7578", "text": "Return the environment for the operation."}
{"_id": "q_7579", "text": "Return the image for the operation."}
{"_id": "q_7580", "text": "Return all events of a particular type."}
{"_id": "q_7581", "text": "Generate formatted jobs individually, in order of create-time.\n\n Args:\n provider: an instantiated dsub provider.\n statuses: a set of status strings that eligible jobs may match.\n user_ids: a set of user strings that eligible jobs may match.\n job_ids: a set of job-id strings eligible jobs may match.\n job_names: a set of job-name strings eligible jobs may match.\n task_ids: a set of task-id strings eligible tasks may match.\n task_attempts: a set of task-attempt strings eligible tasks may match.\n labels: set of LabelParam that all tasks must match.\n create_time_min: a timezone-aware datetime value for the earliest create\n time of a task, inclusive.\n create_time_max: a timezone-aware datetime value for the most recent create\n time of a task, inclusive.\n max_tasks: (int) maximum number of tasks to return per dstat job lookup.\n page_size: the page size to use for each query to the backend. May be\n ignored by some provider implementations.\n summary_output: (bool) summarize the job list.\n\n Yields:\n Individual task dictionaries with associated metadata"}
{"_id": "q_7582", "text": "Returns a list of zones based on any wildcard input.\n\n This function is intended to provide an easy method for producing a list\n of desired zones for a pipeline to run in.\n\n The Pipelines API default zone list is \"any zone\". The problem with\n \"any zone\" is that it can lead to incurring Cloud Storage egress charges\n if the GCE zone selected is in a different region than the GCS bucket.\n See https://cloud.google.com/storage/pricing#network-egress.\n\n A user with a multi-region US bucket would want pipelines to run in\n a \"us-*\" zone.\n A user with a regional bucket in US would want to restrict pipelines to\n run in a zone in that region.\n\n Rarely does the specific zone matter for a pipeline.\n\n This function allows for a simple short-hand such as:\n [ \"us-*\" ]\n [ \"us-central1-*\" ]\n These examples will expand out to the full list of US and us-central1 zones\n respectively.\n\n Args:\n input_list: list of zone names/patterns\n\n Returns:\n A list of zones, with any wildcard zone specifications expanded."}
{"_id": "q_7583", "text": "Converts a datestamp from RFC3339 UTC to a datetime.\n\n Args:\n rfc3339_utc_string: a datetime string in RFC3339 UTC \"Zulu\" format\n\n Returns:\n A datetime."}
{"_id": "q_7584", "text": "Returns the job-id or job-id.task-id for the operation."}
{"_id": "q_7585", "text": "Cancel a batch of operations.\n\n Args:\n batch_fn: API-specific batch function.\n cancel_fn: API-specific cancel function.\n ops: A list of operations to cancel.\n\n Returns:\n A list of operations canceled and a list of error messages."}
{"_id": "q_7586", "text": "Specific check for auth error codes.\n\n Return True if we should retry.\n\n False otherwise.\n Args:\n exception: An exception to test for transience.\n\n Returns:\n True if we should retry. False otherwise."}
{"_id": "q_7587", "text": "Configures genomics API client.\n\n Args:\n api_name: Name of the Google API (for example: \"genomics\")\n api_version: Version of the API (for example: \"v2alpha1\")\n credentials: Credentials to be used for the gcloud API calls.\n\n Returns:\n A configured Google Genomics API client with appropriate credentials."}
{"_id": "q_7588", "text": "Executes operation.\n\n Args:\n api: The base API object\n\n Returns:\n A response body object"}
{"_id": "q_7589", "text": "Returns a type from a snippet of Python source. Should normally be\n something just like 'str' or 'Object'.\n\n arg_type the source to be evaluated\n T the default type\n arg context of where this type was extracted\n sig context from where the arg was extracted\n\n Returns a type or a Type"}
{"_id": "q_7590", "text": "Returns a jsonified response with the specified HTTP status code.\n\n The positional and keyword arguments are passed directly to the\n :func:`flask.jsonify` function which creates the response."}
{"_id": "q_7591", "text": "Performs the actual sending action and returns the result"}
{"_id": "q_7592", "text": "Return the Exception data in a format for JSON-RPC"}
{"_id": "q_7593", "text": "An `inspect.getargspec` with a relaxed sanity check to support Cython.\n\n Motivation:\n\n A Cython-compiled function is *not* an instance of Python's\n types.FunctionType. That is the sanity check the standard Py2\n library uses in `inspect.getargspec()`. So, an exception is raised\n when calling `argh.dispatch_command(cythonCompiledFunc)`. However,\n the CyFunctions do have perfectly usable `.func_code` and\n `.func_defaults` which is all `inspect.getargspec` needs.\n\n This function just copies `inspect.getargspec()` from the standard\n library but relaxes the test to a more duck-typing one of having\n both `.func_code` and `.func_defaults` attributes."}
{"_id": "q_7594", "text": "Prompts user for input. Correctly handles prompt message encoding."}
{"_id": "q_7595", "text": "Encodes given value so it can be written to given file object.\n\n Value may be Unicode, binary string or any other data type.\n\n The exact behaviour depends on the Python version:\n\n Python 3.x\n\n `sys.stdout` is a `_io.TextIOWrapper` instance that accepts `str`\n (unicode) and breaks on `bytes`.\n\n It is OK to simply assume that everything is Unicode unless special\n handling is introduced in the client code.\n\n Thus, no additional processing is performed.\n\n Python 2.x\n\n `sys.stdout` is a file-like object that accepts `str` (bytes)\n and breaks when `unicode` is passed to `sys.stdout.write()`.\n\n We can expect both Unicode and bytes. They need to be encoded so as\n to match the file object encoding.\n\n The output is binary if the object doesn't explicitly require Unicode."}
{"_id": "q_7596", "text": "Adds types, actions, etc. to given argument specification.\n For example, ``default=3`` implies ``type=int``.\n\n :param arg: a :class:`argh.utils.Arg` instance"}
{"_id": "q_7597", "text": "Declares an argument for given function. Does not register the function\n anywhere, nor does it modify the function in any way.\n\n The signature of the decorator matches that of\n :meth:`argparse.ArgumentParser.add_argument`, only some keywords are not\n required if they can be easily guessed (e.g. you don't have to specify type\n or action when an `int` or `bool` default value is supplied).\n\n Typical use cases:\n\n - In combination with :func:`expects_obj` (which is not recommended);\n - in combination with ordinary function signatures to add details that\n cannot be expressed with that syntax (e.g. help message).\n\n Usage::\n\n from argh import arg\n\n @arg('path', help='path to the file to load')\n @arg('--format', choices=['yaml','json'])\n @arg('-v', '--verbosity', choices=range(0,3), default=2)\n def load(path, something=None, format='json', dry_run=False, verbosity=1):\n loaders = {'json': json.load, 'yaml': yaml.load}\n loader = loaders[args.format]\n data = loader(args.path)\n if not args.dry_run:\n if verbosity < 1:\n print('saving to the database')\n put_to_database(data)\n\n In this example:\n\n - `path` declaration is extended with `help`;\n - `format` declaration is extended with `choices`;\n - `dry_run` declaration is not duplicated;\n - `verbosity` is extended with `choices` and the default value is\n overridden. (If both function signature and `@arg` define a default\n value for an argument, `@arg` wins.)\n\n .. note::\n\n It is recommended to avoid using this decorator unless there's no way\n to tune the argument's behaviour or presentation using ordinary\n function signatures. Readability counts, don't repeat yourself."}
{"_id": "q_7598", "text": "Make a guess about the config file location an try loading it."}
{"_id": "q_7599", "text": "Validate a configuration key according to `section.item`."}
{"_id": "q_7600", "text": "Searches the given text for mentions and expands them.\n\n For example:\n \"@source.nick\" will be expanded to \"@<source.nick source.url>\"."}
{"_id": "q_7601", "text": "Try loading given cache file."}
{"_id": "q_7602", "text": "Checks if specified URL is cached."}
{"_id": "q_7603", "text": "Retrieves tweets from the cache."}
{"_id": "q_7604", "text": "Tries to remove cached tweets."}
{"_id": "q_7605", "text": "Retrieve your personal timeline."}
{"_id": "q_7606", "text": "Get or set config item."}
{"_id": "q_7607", "text": "Return human-readable relative time string."}
{"_id": "q_7608", "text": "Copy the Query object, optionally replacing the filters, order_by, or\n limit information on the copy. This is mostly an internal detail that\n you can ignore."}
{"_id": "q_7609", "text": "Returns only the first result from the query, if any."}
{"_id": "q_7610", "text": "This function handles all on_delete semantics defined on OneToMany columns.\n\n This function only exists because 'cascade' is *very* hard to get right."}
{"_id": "q_7611", "text": "Performs the actual prefix, suffix, and pattern match operations."}
{"_id": "q_7612", "text": "Estimates the total work necessary to calculate the prefix match over the\n given index with the provided prefix."}
{"_id": "q_7613", "text": "Search for model ids that match the provided filters.\n\n Arguments:\n\n * *filters* - A list of filters that apply to the search of one of\n the following two forms:\n\n 1. ``'column:string'`` - a plain string will match a word in a\n text search on the column\n\n .. note:: Read the documentation about the ``Query`` object\n for what is actually passed during text search\n\n 2. ``('column', min, max)`` - a numeric column range search,\n between min and max (inclusive by default)\n\n .. note:: Read the documentation about the ``Query`` object\n for information about open-ended ranges\n\n 3. ``['column:string1', 'column:string2']`` - will match any\n of the provided words in a text search on the column\n\n 4. ``Prefix('column', 'prefix')`` - will match prefixes of\n words in a text search on the column\n\n 5. ``Suffix('column', 'suffix')`` - will match suffixes of\n words in a text search on the column\n\n 6. ``Pattern('column', 'pattern')`` - will match patterns over\n words in a text search on the column\n\n * *order_by* - A string that names the numeric column by which to\n sort the results by. Prefixing with '-' will return results in\n descending order\n\n .. note:: While you can technically pass a non-numeric index as an\n *order_by* clause, the results will basically be to order the\n results by string comparison of the ids (10 will come before 2).\n\n .. note:: If you omit the ``order_by`` argument, results will be\n ordered by the last filter. If the last filter was a text\n filter, see the previous note. If the last filter was numeric,\n then results will be ordered by that result.\n\n * *offset* - A numeric starting offset for results\n * *count* - The maximum number of results to return from the query"}
{"_id": "q_7614", "text": "This utility function will iterate over all entities of a provided model,\n refreshing their indices. This is primarily useful after adding an index\n on a column.\n\n Arguments:\n\n * *model* - the model whose entities you want to reindex\n * *block_size* - the maximum number of entities you want to fetch from\n Redis at a time, defaulting to 100\n\n This function will yield its progression through re-indexing all of your\n entities.\n\n Example use::\n\n for progress, total in refresh_indices(MyModel, block_size=200):\n print \"%s of %s\"%(progress, total)\n\n .. note:: This uses the session object to handle index refresh via calls to\n ``.commit()``. If you have any outstanding entities known in the\n session, they will be committed."}
{"_id": "q_7615", "text": "This utility function will clean out old index data that was accidentally\n left during item deletion in rom versions <= 0.27.0 . You should run this\n after you have upgraded all of your clients to version 0.28.0 or later.\n\n Arguments:\n\n * *model* - the model whose entities you want to reindex\n * *block_size* - the maximum number of items to check at a time\n defaulting to 100\n\n This function will yield its progression through re-checking all of the\n data that could be left over.\n\n Example use::\n\n for progress, total in clean_old_index(MyModel, block_size=200):\n print \"%s of %s\"%(progress, total)"}
{"_id": "q_7616", "text": "Adds an entity to the session."}
{"_id": "q_7617", "text": "... Actually write data to Redis. This is an internal detail. Please don't\n call me directly."}
{"_id": "q_7618", "text": "Deletes the entity immediately. Also performs any on_delete operations\n specified as part of column definitions."}
{"_id": "q_7619", "text": "Will fetch one or more entities of this type from the session or\n Redis.\n\n Used like::\n\n MyModel.get(5)\n MyModel.get([1, 6, 2, 4])\n\n Passing a list or a tuple will return multiple entities, in the same\n order that the ids were passed."}
{"_id": "q_7620", "text": "Parse the options, set defaults and then fire up PhantomJS."}
{"_id": "q_7621", "text": "Call PhantomJS with the specified flags and options."}
{"_id": "q_7622", "text": "Release, incrementing the internal counter by one."}
{"_id": "q_7623", "text": "Register an approximation of memory used by FTP server process\n and all of its children."}
{"_id": "q_7624", "text": "Connect to FTP server, login and return an ftplib.FTP instance."}
{"_id": "q_7625", "text": "Decorator. Bring coroutine result up, so it can be used as async context\n\n ::\n\n >>> async def foo():\n ...\n ... ...\n ... return AsyncContextInstance(...)\n ...\n ... ctx = await foo()\n ... async with ctx:\n ...\n ... # do\n\n ::\n\n >>> @async_enterable\n ... async def foo():\n ...\n ... ...\n ... return AsyncContextInstance(...)\n ...\n ... async with foo() as ctx:\n ...\n ... # do\n ...\n ... ctx = await foo()\n ... async with ctx:\n ...\n ... # do"}
{"_id": "q_7626", "text": "Context manager with threading lock for set locale on enter, and set it\n back to original state on exit.\n\n ::\n\n >>> with setlocale(\"C\"):\n ... ..."}
{"_id": "q_7627", "text": "Count `data` for throttle\n\n :param data: bytes of data for count\n :type data: :py:class:`bytes`\n\n :param start: start of read/write time from\n :py:meth:`asyncio.BaseEventLoop.time`\n :type start: :py:class:`float`"}
{"_id": "q_7628", "text": "Set throttle limit\n\n :param value: bytes per second\n :type value: :py:class:`int` or :py:class:`None`"}
{"_id": "q_7629", "text": "Parsing directory server response.\n\n :param s: response line\n :type s: :py:class:`str`\n\n :rtype: :py:class:`pathlib.PurePosixPath`"}
{"_id": "q_7630", "text": "Parsing Microsoft Windows `dir` output\n\n :param b: response line\n :type b: :py:class:`bytes` or :py:class:`str`\n\n :return: (path, info)\n :rtype: (:py:class:`pathlib.PurePosixPath`, :py:class:`dict`)"}
{"_id": "q_7631", "text": "Create stream for write data to `destination` file.\n\n :param destination: destination path of file on server side\n :type destination: :py:class:`str` or :py:class:`pathlib.PurePosixPath`\n\n :param offset: byte offset for stream start position\n :type offset: :py:class:`int`\n\n :rtype: :py:class:`aioftp.DataConnectionThrottleStreamIO`"}
{"_id": "q_7632", "text": "Compute jenks natural breaks on a sequence of `values`, given `nb_class`,\n the number of desired class.\n\n Parameters\n ----------\n values : array-like\n The Iterable sequence of numbers (integer/float) to be used.\n nb_class : int\n The desired number of class (as some other functions requests\n a `k` value, `nb_class` is like `k` + 1). Have to be lesser than\n the length of `values` and greater than 2.\n\n Returns\n -------\n breaks : tuple of floats\n The computed break values, including minimum and maximum, in order\n to have all the bounds for building `nb_class` class,\n so the returned tuple has a length of `nb_class` + 1.\n\n\n Examples\n --------\n Using nb_class = 3, expecting 4 break values , including min and max :\n\n >>> jenks_breaks(\n [1.3, 7.1, 7.3, 2.3, 3.9, 4.1, 7.8, 1.2, 4.3, 7.3, 5.0, 4.3],\n nb_class = 3) # Should output (1.2, 2.3, 5.0, 7.8)"}
{"_id": "q_7633", "text": "Copy the contents of the screen to PIL image memory.\n\n :param bbox: optional bounding box (x1,y1,x2,y2)\n :param childprocess: pyscreenshot can cause an error,\n if it is used on more different virtual displays\n and back-end is not in different process.\n Some back-ends are always different processes: scrot, imagemagick\n The default is False if the program was started inside IDLE,\n otherwise it is True.\n :param backend: back-end can be forced if set (examples:scrot, wx,..),\n otherwise back-end is automatic"}
{"_id": "q_7634", "text": "Open a Mapchete process.\n\n Parameters\n ----------\n config : MapcheteConfig object, config dict or path to mapchete file\n Mapchete process configuration\n mode : string\n * ``memory``: Generate process output on demand without reading\n pre-existing data or writing new data.\n * ``readonly``: Just read data without processing new data.\n * ``continue``: (default) Don't overwrite existing output.\n * ``overwrite``: Overwrite existing output.\n zoom : list or integer\n process zoom level or a pair of minimum and maximum zoom level\n bounds : tuple\n left, bottom, right, top process boundaries in output pyramid\n single_input_file : string\n single input file if supported by process\n with_cache : bool\n process output data cached in memory\n\n Returns\n -------\n Mapchete\n a Mapchete process object"}
{"_id": "q_7635", "text": "Determine zoom levels."}
{"_id": "q_7636", "text": "Worker function running the process."}
{"_id": "q_7637", "text": "Yield process tiles.\n\n Tiles intersecting with the input data bounding boxes as well as\n process bounds, if provided, are considered process tiles. This is to\n avoid iterating through empty tiles.\n\n Parameters\n ----------\n zoom : integer\n zoom level process tiles should be returned from; if none is given,\n return all process tiles\n\n yields\n ------\n BufferedTile objects"}
{"_id": "q_7638", "text": "Process a large batch of tiles.\n\n Parameters\n ----------\n process : MapcheteProcess\n process to be run\n zoom : list or int\n either single zoom level or list of minimum and maximum zoom level;\n None processes all (default: None)\n tile : tuple\n zoom, row and column of tile to be processed (cannot be used with\n zoom)\n multi : int\n number of workers (default: number of CPU cores)\n max_chunksize : int\n maximum number of process tiles to be queued for each worker;\n (default: 1)"}
{"_id": "q_7639", "text": "Process a large batch of tiles and yield report messages per tile.\n\n Parameters\n ----------\n zoom : list or int\n either single zoom level or list of minimum and maximum zoom level;\n None processes all (default: None)\n tile : tuple\n zoom, row and column of tile to be processed (cannot be used with\n zoom)\n multi : int\n number of workers (default: number of CPU cores)\n max_chunksize : int\n maximum number of process tiles to be queued for each worker;\n (default: 1)"}
{"_id": "q_7640", "text": "Run the Mapchete process.\n\n Execute, write and return data.\n\n Parameters\n ----------\n process_tile : Tile or tile index tuple\n Member of the process tile pyramid (not necessarily the output\n pyramid, if output has a different metatiling setting)\n\n Returns\n -------\n data : NumPy array or features\n process output"}
{"_id": "q_7641", "text": "Extract data from tile."}
{"_id": "q_7642", "text": "Calculate hillshading from elevation data.\n\n Parameters\n ----------\n elevation : array\n input elevation data\n azimuth : float\n horizontal angle of light source (315: North-West)\n altitude : float\n vertical angle of light source (90 would result in slope shading)\n z : float\n vertical exaggeration factor\n scale : float\n scale factor of pixel size units versus height units (insert 112000\n when having elevation values in meters in a geodetic projection)\n\n Returns\n -------\n hillshade : array"}
{"_id": "q_7643", "text": "Extract contour lines from elevation data.\n\n Parameters\n ----------\n elevation : array\n input elevation data\n interval : integer\n elevation value interval when drawing contour lines\n field : string\n output field name containing elevation value\n base : integer\n elevation base value the intervals are computed from\n\n Returns\n -------\n contours : iterable\n contours as GeoJSON-like pairs of properties and geometry"}
{"_id": "q_7644", "text": "Clip array by geometry.\n\n Parameters\n ----------\n array : array\n raster data to be clipped\n geometries : iterable\n geometries used to clip source array\n inverted : bool\n invert clipping (default: False)\n clip_buffer : int\n buffer (in pixels) geometries before applying clip\n\n Returns\n -------\n clipped array : array"}
{"_id": "q_7645", "text": "Create tile pyramid out of input raster."}
{"_id": "q_7646", "text": "Create a tile pyramid out of an input raster dataset."}
{"_id": "q_7647", "text": "Determine minimum and maximum zoomlevel."}
{"_id": "q_7648", "text": "Validate whether value is found in config and has the right type.\n\n Parameters\n ----------\n config : dict\n configuration dictionary\n values : list\n list of (str, type) tuples of values and value types expected in config\n\n Returns\n -------\n True if config is valid.\n\n Raises\n ------\n Exception if value is not found or has the wrong type."}
{"_id": "q_7649", "text": "Return hash of x."}
{"_id": "q_7650", "text": "Validate and return zoom levels."}
{"_id": "q_7651", "text": "Snaps bounds to tiles boundaries of specific zoom level.\n\n Parameters\n ----------\n bounds : bounds to be snapped\n pyramid : TilePyramid\n zoom : int\n\n Returns\n -------\n Bounds(left, bottom, right, top)"}
{"_id": "q_7652", "text": "Clips bounds by clip.\n\n Parameters\n ----------\n bounds : bounds to be clipped\n clip : clip bounds\n\n Returns\n -------\n Bounds(left, bottom, right, top)"}
{"_id": "q_7653", "text": "Return parameter dictionary per zoom level."}
{"_id": "q_7654", "text": "Return the element filtered by zoom level.\n\n - An input integer or float gets returned as is.\n - An input string is checked whether it starts with \"zoom\". Then, the\n provided zoom level gets parsed and compared with the actual zoom\n level. If zoom levels match, the element gets returned.\n TODOs/gotchas:\n - Elements are unordered, which can lead to unexpected results when\n defining the YAML config.\n - Provided zoom levels for one element in config file are not allowed\n to \"overlap\", i.e. there is not yet a decision mechanism implemented\n which handles this case."}
{"_id": "q_7655", "text": "Return element only if zoom condition matches with config string."}
{"_id": "q_7656", "text": "Flatten dict tree into dictionary where keys are paths of old dict."}
{"_id": "q_7657", "text": "Reverse tree flattening."}
{"_id": "q_7658", "text": "Process bounds this process is currently initialized with.\n\n This gets triggered by using the ``init_bounds`` kwarg. If not set, it will\n be equal to self.bounds."}
{"_id": "q_7659", "text": "Effective process bounds required to initialize inputs.\n\n Process bounds sometimes have to be larger, because all intersecting process\n tiles have to be covered as well."}
{"_id": "q_7660", "text": "Output object of driver."}
{"_id": "q_7661", "text": "Input items used for process stored in a dictionary.\n\n Keys are the hashes of the input parameters, values the respective\n InputData classes."}
{"_id": "q_7662", "text": "Optional baselevels configuration.\n\n baselevels:\n min: <zoom>\n max: <zoom>\n lower: <resampling method>\n higher: <resampling method>"}
{"_id": "q_7663", "text": "Return configuration parameters snapshot for zoom as dictionary.\n\n Parameters\n ----------\n zoom : int\n zoom level\n\n Returns\n -------\n configuration snapshot : dictionary\n zoom level dependent process configuration"}
{"_id": "q_7664", "text": "Return process bounding box for zoom level.\n\n Parameters\n ----------\n zoom : int or None\n if None, the union of all zoom level areas is returned\n\n Returns\n -------\n process area : shapely geometry"}
{"_id": "q_7665", "text": "Generate indexes for given zoom level.\n\n Parameters\n ----------\n mp : Mapchete object\n process output to be indexed\n out_dir : path\n optionally override process output directory\n zoom : int\n zoom level to be processed\n geojson : bool\n generate GeoJSON index (default: False)\n gpkg : bool\n generate GeoPackage index (default: False)\n shapefile : bool\n generate Shapefile index (default: False)\n txt : bool\n generate tile path list textfile (default: False)\n vrt : bool\n GDAL-style VRT file (default: False)\n fieldname : str\n field name which contains paths of tiles (default: \"location\")\n basepath : str\n if set, use custom base path instead of output path\n for_gdal : bool\n use GDAL compatible remote paths, i.e. add \"/vsicurl/\" before path\n (default: True)"}
{"_id": "q_7666", "text": "Return raster metadata."}
{"_id": "q_7667", "text": "Example process for testing.\n\n Inputs:\n -------\n file1\n raster file\n\n Parameters:\n -----------\n\n Output:\n -------\n np.ndarray"}
{"_id": "q_7668", "text": "Check if output format is valid with other process parameters.\n\n Parameters\n ----------\n config : dictionary\n output configuration parameters\n\n Returns\n -------\n is_valid : bool"}
{"_id": "q_7669", "text": "Return all available output formats.\n\n Returns\n -------\n formats : list\n all available output formats"}
{"_id": "q_7670", "text": "Return output class of driver.\n\n Returns\n -------\n output : ``OutputData``\n output writer object"}
{"_id": "q_7671", "text": "Return input class of driver.\n\n Returns\n -------\n input_params : ``InputData``\n input parameters"}
{"_id": "q_7672", "text": "Dump output JSON and verify parameters if output metadata exist."}
{"_id": "q_7673", "text": "Determine target file path.\n\n Parameters\n ----------\n tile : ``BufferedTile``\n must be member of output ``TilePyramid``\n\n Returns\n -------\n path : string"}
{"_id": "q_7674", "text": "Create directory and subdirectory if necessary.\n\n Parameters\n ----------\n tile : ``BufferedTile``\n must be member of output ``TilePyramid``"}
{"_id": "q_7675", "text": "Check whether process output is allowed with output driver.\n\n Parameters\n ----------\n process_data : raw process output\n\n Returns\n -------\n True or False"}
{"_id": "q_7676", "text": "Return verified and cleaned output.\n\n Parameters\n ----------\n process_data : raw process output\n\n Returns\n -------\n NumPy array or list of features."}
{"_id": "q_7677", "text": "Extract subset from multiple tiles.\n\n input_data_tiles : list of (``Tile``, process data) tuples\n out_tile : ``Tile``\n\n Returns\n -------\n NumPy array or list of features."}
{"_id": "q_7678", "text": "Calculate slope and aspect map.\n\n Return a pair of arrays 2 pixels smaller than the input elevation array.\n\n Slope is returned in radians, from 0 for sheer face to pi/2 for\n flat ground. Aspect is returned in radians, counterclockwise from -pi\n at north around to pi.\n\n Logic here is borrowed from hillshade.cpp:\n http://www.perrygeo.net/wordpress/?p=7\n\n Parameters\n ----------\n elevation : array\n input elevation data\n xres : float\n column width\n yres : float\n row height\n z : float\n vertical exaggeration factor\n scale : float\n scale factor of pixel size units versus height units (insert 112000\n when having elevation values in meters in a geodetic projection)\n\n Returns\n -------\n slope shade : array"}
{"_id": "q_7679", "text": "Return hillshaded numpy array.\n\n Parameters\n ----------\n elevation : array\n input elevation data\n tile : Tile\n tile covering the array\n z : float\n vertical exaggeration factor\n scale : float\n scale factor of pixel size units versus height units (insert 112000\n when having elevation values in meters in a geodetic projection)"}
{"_id": "q_7680", "text": "Return ``BufferedTile`` object of this ``BufferedTilePyramid``.\n\n Parameters\n ----------\n zoom : integer\n zoom level\n row : integer\n tile matrix row\n col : integer\n tile matrix column\n\n Returns\n -------\n buffered tile : ``BufferedTile``"}
{"_id": "q_7681", "text": "Return all tiles intersecting with bounds.\n\n Bounds values will be cleaned if they cross the antimeridian or are\n outside of the Northern or Southern tile pyramid bounds.\n\n Parameters\n ----------\n bounds : tuple\n (left, bottom, right, top) bounding values in tile pyramid CRS\n zoom : integer\n zoom level\n\n Yields\n ------\n intersecting tiles : generator\n generates ``BufferedTiles``"}
{"_id": "q_7682", "text": "All metatiles intersecting with given bounding box.\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n zoom : integer\n zoom level\n\n Yields\n ------\n intersecting tiles : generator\n generates ``BufferedTiles``"}
{"_id": "q_7683", "text": "Return all tiles intersecting with input geometry.\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n zoom : integer\n zoom level\n\n Yields\n ------\n intersecting tiles : ``BufferedTile``"}
{"_id": "q_7684", "text": "Return all BufferedTiles intersecting with tile.\n\n Parameters\n ----------\n tile : ``BufferedTile``\n another tile"}
{"_id": "q_7685", "text": "Return dictionary representation of pyramid parameters."}
{"_id": "q_7686", "text": "Return tile neighbors.\n\n Tile neighbors are unique, i.e. in some edge cases, where both the left\n and right neighbor wrapped around the antimeridian is the same. Also,\n neighbors ouside the northern and southern TilePyramid boundaries are\n excluded, because they are invalid.\n\n -------------\n | 8 | 1 | 5 |\n -------------\n | 4 | x | 2 |\n -------------\n | 7 | 3 | 6 |\n -------------\n\n Parameters\n ----------\n connectedness : int\n [4 or 8] return four direct neighbors or all eight.\n\n Returns\n -------\n list of BufferedTiles"}
{"_id": "q_7687", "text": "Read, stretch and return raster data.\n\n Inputs:\n -------\n raster\n raster file\n\n Parameters:\n -----------\n resampling : str\n rasterio.Resampling method\n scale_method : str\n - dtype_scale: use dtype minimum and maximum values\n - minmax_scale: use dataset bands minimum and maximum values\n - crop: clip data to output dtype\n scales_minmax : tuple\n tuple of band specific scale values\n\n Output:\n -------\n np.ndarray"}
{"_id": "q_7688", "text": "Open process output as input for other process.\n\n Parameters\n ----------\n tile : ``Tile``\n process : ``MapcheteProcess``\n kwargs : keyword arguments"}
{"_id": "q_7689", "text": "Serve a Mapchete process.\n\n Creates the Mapchete host and serves both web page with OpenLayers and the\n WMTS simple REST endpoint."}
{"_id": "q_7690", "text": "Extract a numpy array from a raster file."}
{"_id": "q_7691", "text": "Extract raster data window array.\n\n Parameters\n ----------\n in_raster : array or ReferencedRaster\n in_affine : ``Affine`` required if in_raster is an array\n out_tile : ``BufferedTile``\n\n Returns\n -------\n extracted array : array"}
{"_id": "q_7692", "text": "Extract and resample from array to target tile.\n\n Parameters\n ----------\n in_raster : array\n in_affine : ``Affine``\n out_tile : ``BufferedTile``\n resampling : string\n one of rasterio's resampling methods (default: nearest)\n nodataval : integer or float\n raster nodata value (default: 0)\n\n Returns\n -------\n resampled array : array"}
{"_id": "q_7693", "text": "Determine if distance over antimeridian is shorter than normal distance."}
{"_id": "q_7694", "text": "Turn input data into a proper array for further usage.\n\n Outut array is always 3-dimensional with the given data type. If the output\n is masked, the fill_value corresponds to the given nodata value and the\n nodata value will be burned into the data array.\n\n Parameters\n ----------\n data : array or iterable\n array (masked or normal) or iterable containing arrays\n nodata : integer or float\n nodata value (default: 0) used if input is not a masked array and\n for output array\n masked : bool\n return a NumPy Array or a NumPy MaskedArray (default: True)\n dtype : string\n data type of output array (default: \"int16\")\n\n Returns\n -------\n array : array"}
{"_id": "q_7695", "text": "Reproject a geometry to target CRS.\n\n Also, clips geometry if it lies outside the destination CRS boundary.\n Supported destination CRSes for clipping: 4326 (WGS84), 3857 (Spherical\n Mercator) and 3035 (ETRS89 / ETRS-LAEA).\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n src_crs : ``rasterio.crs.CRS`` or EPSG code\n CRS of source data\n dst_crs : ``rasterio.crs.CRS`` or EPSG code\n target CRS\n error_on_clip : bool\n raises a ``RuntimeError`` if a geometry is outside of CRS bounds\n (default: False)\n validity_check : bool\n checks if reprojected geometry is valid and throws ``TopologicalError``\n if invalid (default: True)\n antimeridian_cutting : bool\n cut geometry at Antimeridian; can result in a multipart output geometry\n\n Returns\n -------\n geometry : ``shapely.geometry``"}
{"_id": "q_7696", "text": "Segmentize Polygon outer ring by segmentize value.\n\n Just Polygon geometry type supported.\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n segmentize_value: float\n\n Returns\n -------\n geometry : ``shapely.geometry``"}
{"_id": "q_7697", "text": "Write features to GeoJSON file.\n\n Parameters\n ----------\n in_data : features\n out_schema : dictionary\n output schema for fiona\n out_tile : ``BufferedTile``\n tile used for output extent\n out_path : string\n output path for GeoJSON file"}
{"_id": "q_7698", "text": "Return geometry of a specific type if possible.\n\n Filters and splits up GeometryCollection into target types. This is\n necessary when after clipping and/or reprojecting the geometry types from\n source geometries change (i.e. a Polygon becomes a LineString or a\n LineString becomes Point) in some edge cases.\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n target_type : string\n target geometry type\n allow_multipart : bool\n allow multipart geometries (default: True)\n\n Returns\n -------\n cleaned geometry : ``shapely.geometry``\n returns None if input geometry type differs from target type\n\n Raises\n ------\n GeometryTypeError : if geometry type does not match target_type"}
{"_id": "q_7699", "text": "Yield single part geometries if geom is multipart, otherwise yield geom.\n\n Parameters:\n -----------\n geom : shapely geometry\n\n Returns:\n --------\n shapely single part geometries"}
{"_id": "q_7700", "text": "Convert and optionally clip input raster data.\n\n Inputs:\n -------\n raster\n singleband or multiband data input\n clip (optional)\n vector data used to clip output\n\n Parameters\n ----------\n td_resampling : str (default: 'nearest')\n Resampling used when reading from TileDirectory.\n td_matching_method : str ('gdal' or 'min') (default: 'gdal')\n gdal: Uses GDAL's standard method. Here, the target resolution is\n calculated by averaging the extent's pixel sizes over both x and y\n axes. This approach returns a zoom level which may not have the\n best quality but will speed up reading significantly.\n min: Returns the zoom level which matches the minimum resolution of the\n extents four corner pixels. This approach returns the zoom level\n with the best possible quality but with low performance. If the\n tile extent is outside of the destination pyramid, a\n TopologicalError will be raised.\n td_matching_max_zoom : int (optional, default: None)\n If set, it will prevent reading from zoom levels above the maximum.\n td_matching_precision : int (default: 8)\n Round resolutions to n digits before comparing.\n td_fallback_to_higher_zoom : bool (default: False)\n In case no data is found at zoom level, try to read data from higher\n zoom levels. Enabling this setting can lead to many IO requests in\n areas with no data.\n clip_pixelbuffer : int\n Use pixelbuffer when clipping output by geometry. (default: 0)\n\n Output\n ------\n np.ndarray"}
{"_id": "q_7701", "text": "Determine the best base zoom level for a raster.\n\n \"Best\" means the maximum zoom level where no oversampling has to be done.\n\n Parameters\n ----------\n input_file : path to raster file\n tile_pyramid_type : ``TilePyramid`` projection (``geodetic`` or``mercator``)\n\n Returns\n -------\n zoom : integer"}
{"_id": "q_7702", "text": "Determine whether file path is remote or local.\n\n Parameters\n ----------\n path : path to file\n\n Returns\n -------\n is_remote : bool"}
{"_id": "q_7703", "text": "Check if file exists either remote or local.\n\n Parameters:\n -----------\n path : path to file\n\n Returns:\n --------\n exists : bool"}
{"_id": "q_7704", "text": "Return absolute path if path is local.\n\n Parameters:\n -----------\n path : path to file\n base_dir : base directory used for absolute path\n\n Returns:\n --------\n absolute path"}
{"_id": "q_7705", "text": "Return relative path if path is local.\n\n Parameters:\n -----------\n path : path to file\n base_dir : directory where path should be relative to\n\n Returns:\n --------\n relative path"}
{"_id": "q_7706", "text": "Write local or remote."}
{"_id": "q_7707", "text": "Read local or remote."}
{"_id": "q_7708", "text": "Attach a reducer function to a given type in the dispatch table."}
{"_id": "q_7709", "text": "Return the number of CPUs the current process can use.\n\n The returned number of CPUs accounts for:\n * the number of CPUs in the system, as given by\n ``multiprocessing.cpu_count``;\n * the CPU affinity settings of the current process\n (available with Python 3.4+ on some Unix systems);\n * CFS scheduler CPU bandwidth limit (available on Linux only, typically\n set by docker and similar container orchestration systems);\n * the value of the LOKY_MAX_CPU_COUNT environment variable if defined;\n and is given as the minimum of these constraints.\n It is also always greater than or equal to 1."}
{"_id": "q_7710", "text": "Evaluates calls from call_queue and places the results in result_queue.\n\n This worker is run in a separate process.\n\n Args:\n call_queue: A ctx.Queue of _CallItems that will be read and\n evaluated by the worker.\n result_queue: A ctx.Queue of _ResultItems that will be written\n to by the worker.\n initializer: A callable initializer, or None\n initargs: A tuple of args for the initializer\n process_management_lock: A ctx.Lock avoiding worker timeout while some\n workers are being spawned.\n timeout: maximum time to wait for a new item in the call_queue. If that\n time is expired, the worker will shutdown.\n worker_exit_lock: Lock to avoid flagging the executor as broken on\n workers timeout.\n current_depth: Nested parallelism level, to avoid infinite spawning."}
{"_id": "q_7711", "text": "Fills call_queue with _WorkItems from pending_work_items.\n\n This function never blocks.\n\n Args:\n pending_work_items: A dict mapping work ids to _WorkItems e.g.\n {5: <_WorkItem...>, 6: <_WorkItem...>, ...}\n work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids\n are consumed and the corresponding _WorkItems from\n pending_work_items are transformed into _CallItems and put in\n call_queue.\n call_queue: A ctx.Queue that will be filled with _CallItems\n derived from _WorkItems."}
{"_id": "q_7712", "text": "Wrapper for non-picklable object to use cloudpickle to serialize them.\n\n Note that this wrapper tends to slow down the serialization process as it\n is done with cloudpickle which is typically slower compared to pickle. The\n proper way to solve serialization issues is to avoid defining functions and\n objects in the main scripts and to implement __reduce__ functions for\n complex classes."}
{"_id": "q_7713", "text": "Return a wrapper for an fd."}
{"_id": "q_7714", "text": "Return the current ReusableExecutor instance.\n\n Start a new instance if it has not been started already or if the previous\n instance was left in a broken state.\n\n If the previous instance does not have the requested number of workers, the\n executor is dynamically resized to adjust the number of workers prior to\n returning.\n\n Reusing a singleton instance spares the overhead of starting new worker\n processes and importing common python packages each time.\n\n ``max_workers`` controls the maximum number of tasks that can be running in\n parallel in worker processes. By default this is set to the number of\n CPUs on the host.\n\n Setting ``timeout`` (in seconds) makes idle workers automatically shutdown\n so as to release system resources. New workers are respawned upon submission\n of new tasks so that ``max_workers`` are available to accept the newly\n submitted tasks. Setting ``timeout`` to around 100 times the time required\n to spawn new processes and import packages in them (on the order of 100ms)\n ensures that the overhead of spawning workers is negligible.\n\n Setting ``kill_workers=True`` makes it possible to forcibly interrupt\n previously spawned jobs to get a new instance of the reusable executor\n with new constructor argument values.\n\n The ``job_reducers`` and ``result_reducers`` are used to customize the\n pickling of tasks and results sent to the executor.\n\n When provided, the ``initializer`` is run first in newly spawned\n processes with argument ``initargs``."}
{"_id": "q_7715", "text": "Wait for the cache to be empty before resizing the pool."}
{"_id": "q_7716", "text": "Return info about parent needed by child to unpickle process object"}
{"_id": "q_7717", "text": "Try to get current process ready to unpickle process object"}
{"_id": "q_7718", "text": "Close all the file descriptors except those in keep_fds."}
{"_id": "q_7719", "text": "Return a formatted string with the exitcodes of terminated workers.\n\n If necessary, wait (up to .25s) for the system to correctly set the\n exitcode of one terminated worker."}
{"_id": "q_7720", "text": "Format a list of exit codes with the names of the signals if possible"}
{"_id": "q_7721", "text": "Run semaphore tracker."}
{"_id": "q_7722", "text": "Make sure that semaphore tracker process is running.\n\n This can be run from any process. Usually a child process will use\n the semaphore created by its parent."}
{"_id": "q_7723", "text": "A simple event processor that prints out events."}
{"_id": "q_7724", "text": "Program counter."}
{"_id": "q_7725", "text": "Almost a copy of code.interact\n Closely emulate the interactive Python interpreter.\n\n This is a backwards compatible interface to the InteractiveConsole\n class. When readfunc is not specified, it attempts to import the\n readline module to enable GNU readline if it is available.\n\n Arguments (all optional, all default to None):\n\n banner -- passed to InteractiveConsole.interact()\n readfunc -- if not None, replaces InteractiveConsole.raw_input()\n local -- passed to InteractiveInterpreter.__init__()"}
{"_id": "q_7726", "text": "Split a command line's arguments in a shell-like manner returned\n as a list of lists. Use ';;' with white space to indicate separate\n commands.\n\n This is a modified version of the standard library's shlex.split()\n function, but with a default of posix=False for splitting, so that quotes\n in inputs are respected."}
{"_id": "q_7727", "text": "Run each function in `hooks' with args"}
{"_id": "q_7728", "text": "Eval arg and if it is an integer return the value. Otherwise\n return None"}
{"_id": "q_7729", "text": "If no argument use the default. If arg is an integer between\n min_value and at_most, use that. Otherwise report an error.\n If there's a stack frame use that in evaluation."}
{"_id": "q_7730", "text": "Find the next token in string str from start_pos. We return\n the token and the next blank position after the token, or\n str.size if this is the last token. Tokens are delimited by\n white space."}
{"_id": "q_7731", "text": "Script interface to read a command. `prompt' is a parameter for\n compatibility and is ignored."}
{"_id": "q_7732", "text": "Closes both input and output"}
{"_id": "q_7733", "text": "Disassemble byte string of code. If end_line is negative\n it counts the number of statement linestarts to use."}
{"_id": "q_7734", "text": "Return a count of the number of frames"}
{"_id": "q_7735", "text": "If f_back is looking at a call function, return\n the name for it. Otherwise return None"}
{"_id": "q_7736", "text": "Print count entries of the stack trace"}
{"_id": "q_7737", "text": "Find subcmd in self.subcmds"}
{"_id": "q_7738", "text": "Show short help for a subcommand."}
{"_id": "q_7739", "text": "Add subcmd to the available subcommands for this object.\n It will have the supplied docstring, and subcmd_cb will be called\n when we want to run the command. min_len is the minimum length\n allowed to abbreviate the command. in_list indicates whether the\n show command will be run when giving a list of all sub commands\n of this object. Some commands have long output like \"show commands\"\n so we might not want to show that."}
{"_id": "q_7740", "text": "Run subcmd_name with args using obj for the environment"}
{"_id": "q_7741", "text": "Enter the debugger.\n\nParameters\n----------\n\nlevel : how many stack frames go back. Usually it will be\nthe default 0. But sometimes though there may be calls in setup to the debugger\nthat you may want to skip.\n\nstep_ignore : how many line events to ignore after the\ndebug() call. 0 means don't even wait for the debug() call to finish.\n\nparam dbg_opts : is an optional \"options\" dictionary that gets fed to\ntrepan.Debugger(); `start_opts' are the optional \"options\"\ndictionary that gets fed to trepan.Debugger.core.start().\n\nUse like this:\n\n.. code-block:: python\n\n ... # Possibly some Python code\n import trepan.api # Needed only once\n ... # Possibly some more Python code\n trepan.api.debug() # You can wrap inside conditional logic too\n pass # Stop will be here.\n # Below is code you want to use the debugger to do things.\n .... # more Python code\n # If you get to a place in the program where you aren't going\n # to want to debug any more, but want to remove debugger trace overhead:\n trepan.api.stop()\n\nParameter \"level\" specifies how many stack frames go back. Usually it will be\nthe default 0. But sometimes though there may be calls in setup to the debugger\nthat you may want to skip.\n\nParameter \"step_ignore\" specifies how many line events to ignore after the\ndebug() call. 0 means don't even wait for the debug() call to finish.\n\nIn situations where you want an immediate stop in the \"debug\" call\nrather than the statement following it (\"pass\" above), add parameter\nstep_ignore=0 to debug() like this::\n\n import trepan.api # Needed only once\n # ... as before\n trepan.api.debug(step_ignore=0)\n # ... as before\n\nModule variable _debugger_obj_ from module trepan.debugger is used as\nthe debugger instance variable; it can be subsequently used to change\nsettings or alter behavior. It should be of type Debugger (found in\nmodule trepan). If not, it will get changed to that type::\n\n $ python\n >>> from trepan.debugger import debugger_obj\n >>> type(debugger_obj)\n <type 'NoneType'>\n >>> import trepan.api\n >>> trepan.api.debug()\n ...\n (Trepan) c\n >>> from trepan.debugger import debugger_obj\n >>> debugger_obj\n <trepan.debugger.Debugger instance at 0x7fbcacd514d0>\n >>>\n\nIf however you want your own separate debugger instance, you can\ncreate it from the debugger _class Debugger()_ from module\ntrepan.debugger::\n\n $ python\n >>> from trepan.debugger import Debugger\n >>> dbgr = Debugger() # Add options as desired\n >>> dbgr\n <trepan.debugger.Debugger instance at 0x2e25320>\n\n`dbg_opts' is an optional \"options\" dictionary that gets fed to\ntrepan.Debugger(); `start_opts' are the optional \"options\"\ndictionary that gets fed to trepan.Debugger.core.start()."}
{"_id": "q_7742", "text": "Find the first frame that is a debugged frame. We do this\n because generally we want traceback information without polluting it with\n debugger frames. We can tell these because they are frames at the\n top which don't have f_trace set. So we'll look back from the top\n to find the first frame where f_trace is set."}
{"_id": "q_7743", "text": "If arg is an int, use that otherwise take default."}
{"_id": "q_7744", "text": "Return True if arg is 'on' or 1 and False if arg is 'off' or 0.\n Any other value raises ValueError."}
{"_id": "q_7745", "text": "set a Boolean-valued debugger setting. 'obj' is generally a\n subcommand that has 'name' and 'debugger.settings' attributes"}
{"_id": "q_7746", "text": "set an Integer-valued debugger setting. 'obj' is generally a\n subcommand that has 'name' and 'debugger.settings' attributes"}
{"_id": "q_7747", "text": "Generic subcommand showing a boolean-valued debugger setting.\n 'obj' is generally a subcommand that has 'name' and\n 'debugger.setting' attributes."}
{"_id": "q_7748", "text": "Return True if we are looking at a def statement"}
{"_id": "q_7749", "text": "Get background from\n default values based on the TERM environment variable"}
{"_id": "q_7750", "text": "Pass as parameters R G B values in hex\n On return, variable is_dark_bg is set"}
{"_id": "q_7751", "text": "return suitable frame signature to key display expressions off of."}
{"_id": "q_7752", "text": "display any items that are active"}
{"_id": "q_7753", "text": "Set breakpoint at current location, or a specified frame"}
{"_id": "q_7754", "text": "Find the corresponding signal name for 'num'. Return None\n if 'num' is invalid."}
{"_id": "q_7755", "text": "Find the corresponding signal number for 'name'. Return None\n if 'name' is invalid."}
{"_id": "q_7756", "text": "Return a signal name for a signal name or signal\n number. Return None if name_num is an int but not a valid signal\n number and False if name_num is not a number. If name_num is a\n signal name or signal number, the canonic name is returned."}
{"_id": "q_7757", "text": "Check to see if any of the signal handlers we are interested in have\n changed or is not initially set. Change any that are not right."}
{"_id": "q_7758", "text": "Print information about a signal"}
{"_id": "q_7759", "text": "Delegate the actions specified in 'arg' to another\n method."}
{"_id": "q_7760", "text": "Return a full pathname for filename if we can find one. path\n is a list of directories to prepend to filename. If no file is\n found we'll return None"}
{"_id": "q_7761", "text": "Do a shell-like path lookup for py_script and return the results.\n If we can't find anything return py_script"}
{"_id": "q_7762", "text": "used to write to a debugger that is connected to this\n server; `str' written will have a newline added to it"}
{"_id": "q_7763", "text": "Execution status of the program."}
{"_id": "q_7764", "text": "List commands arranged in an aligned columns"}
{"_id": "q_7765", "text": "Enter debugger read loop after your program has crashed.\n\n exc is a triple like you get back from sys.exc_info. If no exc\n parameter is supplied, the values from sys.last_type,\n sys.last_value, sys.last_traceback are used. And if these don't\n exist either we'll assume that sys.exc_info() contains what we\n want and frameno is the index location of where we want to start.\n\n 'frameno' specifies how many frames to ignore in the traceback.\n The default is 1, that is, we don't need to show the immediate\n call into post_mortem. If you have wrapper functions that call\n this one, you may want to increase frameno."}
{"_id": "q_7766", "text": "Closes both socket and server connection."}
{"_id": "q_7767", "text": "This is the method the debugger uses to write. In contrast to\n writeline, no newline is added to the end of `str'. Also\n msg doesn't have to be a string."}
{"_id": "q_7768", "text": "Complete an arbitrary expression."}
{"_id": "q_7769", "text": "Add `frame_or_fn' to the list of functions that are not to\n be debugged"}
{"_id": "q_7770", "text": "Turns `filename' into its canonic representation and returns this\n string. This allows a user to refer to a given file in one of several\n equivalent ways.\n\n Relative filenames need to be fully resolved, since the current working\n directory might change over the course of execution.\n\n If filename is enclosed in < ... >, then we assume it is\n one of the bogus internal Python names like <string> which is seen\n for example when executing \"exec cmd\"."}
{"_id": "q_7771", "text": "Return filename or the basename of that depending on the\n basename setting"}
{"_id": "q_7772", "text": "Return True if debugging is in progress."}
{"_id": "q_7773", "text": "Does the magic to determine if we stop here and run a\n command processor or not. If so, return True and set\n self.stop_reason; if not, return False.\n\n Determining factors can be whether a breakpoint was\n encountered, whether we are stepping, next'ing, finish'ing,\n and, if so, whether there is an ignore counter."}
{"_id": "q_7774", "text": "Sets to stop on the next event that happens in frame 'frame'."}
{"_id": "q_7775", "text": "A mini stack trace routine for threads."}
{"_id": "q_7776", "text": "Get file information"}
{"_id": "q_7777", "text": "Check whether we should break here because of `b.funcname`."}
{"_id": "q_7778", "text": "remove breakpoint `bp'"}
{"_id": "q_7779", "text": "Remove a breakpoint given its breakpoint number."}
{"_id": "q_7780", "text": "Enable or disable all breakpoints."}
{"_id": "q_7781", "text": "Enable or disable a breakpoint given its breakpoint number."}
{"_id": "q_7782", "text": "Read a line of input. Prompt and use_raw exist to be\n compatible with other input routines and are ignored.\n EOFError will be raised on EOF."}
{"_id": "q_7783", "text": "Restore an original login session, checking the signed session"}
{"_id": "q_7784", "text": "Yield each document in a Luminoso project in turn. Requires a client whose\n URL points to a project.\n\n If expanded=True, it will include additional fields that Luminoso added in\n its analysis, such as 'terms' and 'vector'.\n\n Otherwise, it will contain only the fields necessary to reconstruct the\n document: 'title', 'text', and 'metadata'.\n\n Shows a progress bar if progress=True."}
{"_id": "q_7785", "text": "Handle arguments for the 'lumi-download' command."}
{"_id": "q_7786", "text": "Read a JSON or CSV file and convert it into a JSON stream, which will\n be saved in an anonymous temp file."}
{"_id": "q_7787", "text": "Deduce the format of a file, within reason.\n\n - If the filename ends with .csv or .txt, it's csv.\n - If the filename ends with .jsons, it's a JSON stream (conveniently the\n format we want to output).\n - If the filename ends with .json, it could be a legitimate JSON file, or\n it could be a JSON stream, following a nonstandard convention that many\n people including us are guilty of. In that case:\n - If the first line is a complete JSON document, and there is more in the\n file besides the first line, then it is a JSON stream.\n - Otherwise, it is probably really JSON.\n - If the filename does not end with .json, .jsons, or .csv, we have to guess\n whether it's still CSV or tab-separated values or something like that.\n If it's JSON, the first character would almost certainly have to be a\n bracket or a brace. If it isn't, assume it's CSV or similar."}
{"_id": "q_7788", "text": "This function is meant to normalize data for upload to the Luminoso\n Analytics system. Currently it only normalizes dates.\n\n If date_format is not specified, or if there's no date in a particular doc,\n the doc is yielded unchanged."}
{"_id": "q_7789", "text": "Convert a date in a given format to epoch time. Mostly a wrapper for\n datetime's strptime."}
{"_id": "q_7790", "text": "Open a CSV file using Python 2's CSV module, working around the deficiency\n where it can't handle the null bytes of UTF-16."}
{"_id": "q_7791", "text": "Handle command line arguments to convert a file to a JSON stream as a\n script."}
{"_id": "q_7792", "text": "Returns an object that makes requests to the API, authenticated\n with a saved or specified long-lived token, at URLs beginning with\n `url`.\n\n If no URL is specified, or if the specified URL is a path such as\n '/projects' without a scheme and domain, the client will default to\n https://analytics.luminoso.com/api/v5/.\n\n If neither token nor token_file are specified, the client will look\n for a token in $HOME/.luminoso/tokens.json. The file should contain\n a single json dictionary of the format\n `{'root_url': 'token', 'root_url2': 'token2', ...}`."}
{"_id": "q_7793", "text": "Take a long-lived API token and store it to a local file. Long-lived\n tokens can be retrieved through the UI. Optional arguments are the\n domain for which the token is valid and the file in which to store the\n token."}
{"_id": "q_7794", "text": "Make a DELETE request to the given path, and return the JSON-decoded\n result.\n\n Keyword parameters will be converted to URL parameters.\n\n DELETE requests ask to delete the object represented by this URL."}
{"_id": "q_7795", "text": "A convenience method designed to inform you when a project build has\n completed. It polls the API every `interval` seconds until there is\n not a build running. At that point, it returns the \"last_build_info\"\n field of the project record if the build succeeded, and raises a\n LuminosoError with the field as its message if the build failed.\n\n If a `path` is not specified, this method will assume that its URL is\n the URL for the project. Otherwise, it will use the specified path\n (which should be \"/projects/<project_id>/\")."}
{"_id": "q_7796", "text": "Get the \"root URL\" for a URL, as described in the LuminosoClient\n documentation."}
{"_id": "q_7797", "text": "Obtain the user's long-lived API token and save it in a local file.\n If the user has no long-lived API token, one will be created.\n Returns the token that was saved."}
{"_id": "q_7798", "text": "Make a request of the specified type and expect a JSON object in\n response.\n\n If the result has an 'error' value, raise a LuminosoAPIError with\n its contents. Otherwise, return the contents of the 'result' value."}
{"_id": "q_7799", "text": "Get the ID of an account you can use to access projects."}
{"_id": "q_7800", "text": "Get the documentation that the server sends for the API."}
{"_id": "q_7801", "text": "Wait for an asynchronous task to finish.\n\n Unlike the thin methods elsewhere on this object, this one is actually\n specific to how the Luminoso API works. This will poll an API\n endpoint to find out the status of the job numbered `job_id`,\n repeating every 5 seconds (by default) until the job is done. When\n the job is done, it will return an object representing the result of\n that job.\n\n In the Luminoso API, requests that may take a long time return a\n job ID instead of a result, so that your code can continue running\n in the meantime. When it needs the job to be done to proceed, it can\n use this method to wait.\n\n The base URL where it looks for that job is by default `jobs/id/`\n under the current URL, assuming that this LuminosoClient's URL\n represents a project. You can specify a different URL by changing\n `base_path`.\n\n If the job failed, will raise a LuminosoError with the job status\n as its message."}
{"_id": "q_7802", "text": "Get the raw text of a response.\n\n This is only generally useful for specific URLs, such as documentation."}
{"_id": "q_7803", "text": "Print a JSON list of JSON objects in CSV format."}
{"_id": "q_7804", "text": "Read parameters from input file, -j, and -p arguments, in that order."}
{"_id": "q_7805", "text": "Limit a document to just the three fields we should upload."}
{"_id": "q_7806", "text": "Given an iterator of documents, upload them as a Luminoso project."}
{"_id": "q_7807", "text": "Handle arguments for the 'lumi-upload' command."}
{"_id": "q_7808", "text": "Upload a file to Luminoso with the given account and project name.\n\n Given a file containing JSON, JSON stream, or CSV data, this verifies\n that we can successfully convert it to a JSON stream, then uploads that\n JSON stream."}
{"_id": "q_7809", "text": "Handle command line arguments, to upload a file to a Luminoso project\n as a script."}
{"_id": "q_7810", "text": "Obtain a short-lived token using a username and password, and use that\n token to create an auth object."}
{"_id": "q_7811", "text": "Set http session."}
{"_id": "q_7812", "text": "Login to enedis."}
{"_id": "q_7813", "text": "Get data."}
{"_id": "q_7814", "text": "Get the latest data from Enedis."}
{"_id": "q_7815", "text": "Load the view on first load"}
{"_id": "q_7816", "text": "Load the view on first load; could also load based on session, group, etc."}
{"_id": "q_7817", "text": "Execute the correct handler depending on what is connecting."}
{"_id": "q_7818", "text": "When enaml.js sends a message"}
{"_id": "q_7819", "text": "When pages change, update the menus"}
{"_id": "q_7820", "text": "Generate the handlers for this site"}
{"_id": "q_7821", "text": "Create the toolkit widget for the proxy object.\n\n This method is called during the top-down pass, just before the\n 'init_widget()' method is called. This method should create the\n toolkit widget and assign it to the 'widget' attribute."}
{"_id": "q_7822", "text": "Initialize the state of the toolkit widget.\n\n This method is called during the top-down pass, just after the\n 'create_widget()' method is called. This method should init the\n state of the widget. The child widgets will not yet be created."}
{"_id": "q_7823", "text": "A reimplemented destructor.\n\n This destructor will clear the reference to the toolkit widget\n and set its parent to None."}
{"_id": "q_7824", "text": "Handle the child added event from the declaration.\n\n This handler will insert the child toolkit widget in the correct\n position. Subclasses which need more control should reimplement this\n method."}
{"_id": "q_7825", "text": "Handle the child removed event from the declaration.\n\n This handler will unparent the child toolkit widget. Subclasses\n which need more control should reimplement this method."}
{"_id": "q_7826", "text": "Default handler for those not explicitly defined"}
{"_id": "q_7827", "text": "Update the proxy widget when the Widget data\n changes."}
{"_id": "q_7828", "text": "Find nodes matching the given xpath query"}
{"_id": "q_7829", "text": "Initialize the widget with the source."}
{"_id": "q_7830", "text": "A change handler for the 'objects' list of the Include.\n\n If the object is initialized objects which are removed will be\n unparented and objects which are added will be reparented. Old\n objects will be destroyed if the 'destroy_old' flag is True."}
{"_id": "q_7831", "text": "When the children of the block change. Update the referenced\n block."}
{"_id": "q_7832", "text": "Registers a function as a hook. Multiple hooks can be registered for a given type, but the\n order in which they are invoke is unspecified.\n\n :param event_type: The event type this hook will be invoked for."}
{"_id": "q_7833", "text": "Callback from Flask"}
{"_id": "q_7834", "text": "Remove common indentation from string.\n\n Unlike doctrim there is no special treatment of the first line."}
{"_id": "q_7835", "text": "Find all section names and return a list with their names."}
{"_id": "q_7836", "text": "Generate table of contents for array of section names."}
{"_id": "q_7837", "text": "Print `msg` error and exit with status `exit_code`"}
{"_id": "q_7838", "text": "Gets a Item from the Menu by name. Note that the name is not\n case-sensitive but must be spelt correctly.\n\n :param string name: The name of the item.\n :raises StopIteration: Raises exception if no item is found.\n :return: An item object matching the search.\n :rtype: Item"}
{"_id": "q_7839", "text": "Clear out the current session on the remote and setup a new one.\n\n :return: A response from having expired the current session.\n :rtype: requests.Response"}
{"_id": "q_7840", "text": "Search for dominos pizza stores using a search term.\n\n :param string search: Search term.\n :return: A list of nearby stores matching the search term.\n :rtype: list"}
{"_id": "q_7841", "text": "Add an item to the current basket.\n\n :param Item item: Item from menu.\n :param int variant: Item SKU id. Ignored if the item is a side.\n :param int quantity: The quantity of item to be added.\n :return: A response having added an item to the current basket.\n :rtype: requests.Response"}
{"_id": "q_7842", "text": "Add a pizza to the current basket.\n\n :param Item item: Item from menu.\n :param int variant: Item SKU id. Some defaults are defined in the VARIANT enum.\n :param int quantity: The quantity of pizza to be added.\n :return: A response having added a pizza to the current basket.\n :rtype: requests.Response"}
{"_id": "q_7843", "text": "Add a side to the current basket.\n\n :param Item item: Item from menu.\n :param int quantity: The quantity of side to be added.\n :return: A response having added a side to the current basket.\n :rtype: requests.Response"}
{"_id": "q_7844", "text": "Remove an item from the current basket.\n\n :param int idx: Basket item id.\n :return: A response having removed an item from the current basket.\n :rtype: requests.Response"}
{"_id": "q_7845", "text": "Select the payment method going to be used to make a purchase.\n\n :param int method: Payment method id.\n :return: A response having set the payment option.\n :rtype: requests.Response"}
{"_id": "q_7846", "text": "Proceed with payment using the payment method selected earlier.\n\n :return: A response having processes the payment.\n :rtype: requests.Response"}
{"_id": "q_7847", "text": "Make a HTTP GET request to the Dominos UK API with the given parameters\n for the current session.\n\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response"}
{"_id": "q_7848", "text": "Make a HTTP POST request to the Dominos UK API with the given\n parameters for the current session.\n\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response"}
{"_id": "q_7849", "text": "Make a HTTP request to the Dominos UK API with the given parameters for\n the current session.\n\n :param verb func: HTTP method on the session.\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response"}
{"_id": "q_7850", "text": "Add an item to the end of the menu before the exit item\n\n :param MenuItem item: The item to be added"}
{"_id": "q_7851", "text": "Add the exit item if necessary. Used to make sure there aren't multiple exit items\n\n :return: True if item needed to be added, False otherwise\n :rtype: bool"}
{"_id": "q_7852", "text": "Redraws the menu and refreshes the screen. Should be called whenever something changes that needs to be redrawn."}
{"_id": "q_7853", "text": "Gets the next single character and decides what to do with it"}
{"_id": "q_7854", "text": "Select the current item and run it"}
{"_id": "q_7855", "text": "Take an old-style menuData dictionary and return a CursesMenu\n\n :param dict menu_data:\n :return: A new CursesMenu\n :rtype: CursesMenu"}
{"_id": "q_7856", "text": "Compute the maximum temporal distance.\n\n Returns\n -------\n max_temporal_distance : float"}
{"_id": "q_7857", "text": "Temporal distance cumulative density function.\n\n Returns\n -------\n x_values: numpy.array\n values for the x-axis\n cdf: numpy.array\n cdf values"}
{"_id": "q_7858", "text": "Remove dangling entries from the shapes directory.\n\n Parameters\n ----------\n db_conn: sqlite3.Connection\n connection to the GTFS object"}
{"_id": "q_7859", "text": "Given a set of transit events and the static walk network,\n \"transform\" the static walking network into a set of \"pseudo-connections\".\n\n As a first approximation, we add pseudo-connections to depart after each arrival of a transit connection\n to it's arrival stop.\n\n Parameters\n ----------\n transit_connections: list[Connection]\n start_time_dep : int\n start time in unixtime seconds\n end_time_dep: int\n end time in unixtime seconds (no new connections will be scanned after this time)\n transfer_margin: int\n required extra margin required for transfers in seconds\n walk_speed: float\n walking speed between stops in meters / second\n walk_network: networkx.Graph\n each edge should have the walking distance as a data attribute (\"d_walk\") expressed in meters\n\n Returns\n -------\n pseudo_connections: set[Connection]"}
{"_id": "q_7860", "text": "Get the earliest visit time of the stop."}
{"_id": "q_7861", "text": "Whether the spreading stop can infect using this event."}
{"_id": "q_7862", "text": "Create day_trips and day_stop_times views.\n\n day_trips: day_trips2 x trips = days x trips\n day_stop_times: day_trips2 x trips x stop_times = days x trips x stop_times"}
{"_id": "q_7863", "text": "Create a colourbar with limits of lwr and upr"}
{"_id": "q_7864", "text": "Write temporal networks by route type to disk.\n\n Parameters\n ----------\n gtfs: gtfspy.GTFS\n extract_output_dir: str"}
{"_id": "q_7865", "text": "Write out the database according to the GTFS format.\n\n Parameters\n ----------\n gtfs: gtfspy.GTFS\n output: str\n Path where to put the GTFS files\n if output ends with \".zip\" a ZIP-file is created instead.\n\n Returns\n -------\n None"}
{"_id": "q_7866", "text": "Remove columns ending with I from a pandas.DataFrame\n\n Parameters\n ----------\n df: dataFrame\n\n Returns\n -------\n None"}
{"_id": "q_7867", "text": "Context manager for making files with possibility of failure.\n\n If you are creating a file, it is possible that the code will fail\n and leave a corrupt intermediate file. This is especially damaging\n if this is used as automatic input to another process. This context\n manager helps by creating a temporary filename, your code runs and\n creates that temporary file, and then if no exceptions are raised,\n the context manager will move the temporary file to the original\n filename you intended to open.\n\n Parameters\n ----------\n fname : str\n Target filename, this file will be created if all goes well\n fname_tmp : str\n If given, this is used as the temporary filename.\n tmpdir : str or bool\n If given, put temporary files in this directory. If `True`,\n then find a good tmpdir that is not on local filesystem.\n save_tmpfile : bool\n If true, the temporary file is not deleteted if an exception\n is raised.\n keepext : bool, default False\n If true, have tmpfile have same extension as final file.\n\n Returns (as context manager value)\n ----------------------------------\n fname_tmp: str\n Temporary filename to be used. Same as `fname_tmp`\n if given as an argument.\n\n Raises\n ------\n Re-raises any except occuring during the context block."}
{"_id": "q_7868", "text": "Utility function to print sqlite queries before executing.\n\n Use instead of cur.execute(). First argument is cursor.\n\n cur.execute(stmt)\n becomes\n util.execute(cur, stmt)"}
{"_id": "q_7869", "text": "Create directories if they do not exist, otherwise do nothing.\n\n Return path for convenience"}
{"_id": "q_7870", "text": "Checks for rows that are not referenced in the the tables that should be linked\n\n stops <> stop_times using stop_I\n stop_times <> trips <> days, using trip_I\n trips <> routes, using route_I\n :return:"}
{"_id": "q_7871", "text": "Print coordinates within a sequence.\n\n This is only used for debugging. Printed in a form that can be\n pasted into Python for visualization."}
{"_id": "q_7872", "text": "Find corresponding shape points for a list of stops and create shape break points.\n\n Parameters\n ----------\n stops: stop-sequence (list)\n List of stop points\n shape: list of shape points\n shape-sequence of shape points\n\n Returns\n -------\n break_points: list[int]\n stops[i] corresponds to shape[break_points[i]]. This list can\n be used to partition the shape points into segments between\n one stop and the next.\n badness: float\n Lower indicates better fit to the shape. This is the sum of\n distances (in meters) between every each stop and its closest\n shape point. This is not needed in normal use, but in the\n cases where you must determine the best-fitting shape for a\n stop-sequence, use this."}
{"_id": "q_7873", "text": "Get all scheduled stops on a particular route_id.\n\n Given a route_id, return the trip-stop-list with\n latitude/longitudes. This is a bit more tricky than it seems,\n because we have to go from table route->trips->stop_times. This\n functions finds an arbitrary trip (in trip table) with this route ID\n and, and then returns all stop points for that trip.\n\n Parameters\n ----------\n cur : sqlite3.Cursor\n cursor to sqlite3 DB containing GTFS\n route_id : string or any\n route_id to get stop points of\n offset : int\n LIMIT offset if you don't want the first trip returned.\n tripid_glob : string\n If given, allows you to limit tripids which can be selected.\n Mainly useful in debugging.\n\n Returns\n -------\n stop-list\n List of stops in stop-seq format."}
{"_id": "q_7874", "text": "Interpolate passage times for shape points.\n\n Parameters\n ----------\n shape_distances: list\n list of cumulative distances along the shape\n shape_breaks: list\n list of shape_breaks\n stop_times: list\n list of stop_times\n\n Returns\n -------\n shape_times: list of ints (seconds) / numpy array\n interpolated shape passage times\n\n The values of stop times before the first shape-break are given the first\n stopping time, and the any shape points after the last break point are\n given the value of the last shape point."}
{"_id": "q_7875", "text": "Get the earliest arrival time at the target, given a departure time.\n\n Parameters\n ----------\n dep_time : float, int\n time in unix seconds\n transfer_margin: float, int\n transfer margin in seconds\n\n Returns\n -------\n arrival_time : float\n Arrival time in the given time unit (seconds after unix epoch)."}
{"_id": "q_7876", "text": "Get a stop-to-stop network describing a single mode of travel.\n\n Parameters\n ----------\n gtfs : gtfspy.GTFS\n route_type : int\n See gtfspy.route_types.TRANSIT_ROUTE_TYPES for the list of possible types.\n link_attributes: list[str], optional\n defaulting to use the following link attributes:\n \"n_vehicles\" : Number of vehicles passed\n \"duration_min\" : minimum travel time between stops\n \"duration_max\" : maximum travel time between stops\n \"duration_median\" : median travel time between stops\n \"duration_avg\" : average travel time between stops\n \"d\" : distance along straight line (wgs84_distance)\n \"distance_shape\" : minimum distance along shape\n \"capacity_estimate\" : approximate capacity passed through the stop\n \"route_I_counts\" : dict from route_I to counts\n start_time_ut: int\n start time of the time span (in unix time)\n end_time_ut: int\n end time of the time span (in unix time)\n\n Returns\n -------\n net: networkx.DiGraph\n A directed graph Directed graph"}
{"_id": "q_7877", "text": "Compute stop-to-stop networks for all travel modes and combine them into a single network.\n The modes of transport are encoded to a single network.\n The network consists of multiple links corresponding to each travel mode.\n Walk mode is not included.\n\n Parameters\n ----------\n gtfs: gtfspy.GTFS\n\n Returns\n -------\n net: networkx.MultiDiGraph\n keys should be one of route_types.TRANSIT_ROUTE_TYPES (i.e. GTFS route_types)"}
{"_id": "q_7878", "text": "Compute the temporal network of the data, and return it as a pandas.DataFrame\n\n Parameters\n ----------\n gtfs : gtfspy.GTFS\n start_time_ut: int | None\n start time of the time span (in unix time)\n end_time_ut: int | None\n end time of the time span (in unix time)\n route_type: int | None\n Specifies which mode of public transport are included, or whether all modes should be included.\n The int should be one of the standard GTFS route_types:\n (see also gtfspy.route_types.TRANSIT_ROUTE_TYPES )\n If route_type is not specified, all modes are included.\n\n Returns\n -------\n events_df: pandas.DataFrame\n Columns: departure_stop, arrival_stop, departure_time_ut, arrival_time_ut, route_type, route_I, trip_I"}
{"_id": "q_7879", "text": "Get stop pairs through which transfers take place\n\n Returns\n -------\n transfer_stop_pairs: list"}
{"_id": "q_7880", "text": "Get name of the GTFS timezone\n\n Returns\n -------\n timezone_name : str\n name of the time zone, e.g. \"Europe/Helsinki\""}
{"_id": "q_7881", "text": "Get the shapes of all routes.\n\n Parameters\n ----------\n use_shapes : bool, optional\n by default True (i.e. use shapes as the name of the function indicates)\n if False (fall back to lats and longitudes)\n\n Returns\n -------\n routeShapes: list of dicts that should have the following keys\n name, type, agency, lats, lons\n with types\n list, list, str, list, list"}
{"_id": "q_7882", "text": "Get closest stop to a given location.\n\n Parameters\n ----------\n lat: float\n latitude coordinate of the location\n lon: float\n longitude coordinate of the location\n\n Returns\n -------\n stop_I: int\n the index of the stop in the database"}
{"_id": "q_7883", "text": "Check that a trip takes place during a day\n\n Parameters\n ----------\n trip_I : int\n index of the trip in the gtfs data base\n day_start_ut : int\n the starting time of the day in unix time (seconds)\n\n Returns\n -------\n takes_place: bool\n boolean value describing whether the trip takes place during\n the given day or not"}
{"_id": "q_7884", "text": "Get all possible day start times between start_ut and end_ut\n Currently this function is used only by get_tripIs_within_range_by_dsut\n\n Parameters\n ----------\n start_ut : list<int>\n start time in unix time\n end_ut : list<int>\n end time in unix time\n max_time_overnight : list<int>\n the maximum length of time that a trip can take place on\n during the next day (i.e. after midnight run times like 25:35)\n\n Returns\n -------\n day_start_times_ut : list\n list of ints (unix times in seconds) for returning all possible day\n start times\n start_times_ds : list\n list of ints (unix times in seconds) stating the valid start time in\n day seconds\n end_times_ds : list\n list of ints (unix times in seconds) stating the valid end times in\n day_seconds"}
{"_id": "q_7885", "text": "Get all stop data as a pandas DataFrame for all stops, or an individual stop'\n\n Parameters\n ----------\n stop_I : int\n stop index\n\n Returns\n -------\n stop: pandas.DataFrame"}
{"_id": "q_7886", "text": "Obtain a list of events that take place during a time interval.\n Each event needs to be only partially overlap the given time interval.\n Does not include walking events.\n\n Parameters\n ----------\n start_time_ut : int\n start of the time interval in unix time (seconds)\n end_time_ut: int\n end of the time interval in unix time (seconds)\n route_type: int\n consider only events for this route_type\n\n Returns\n -------\n events: pandas.DataFrame\n with the following columns and types\n dep_time_ut: int\n arr_time_ut: int\n from_stop_I: int\n to_stop_I: int\n trip_I : int\n shape_id : int\n route_type : int\n\n See also\n --------\n get_transit_events_in_time_span : an older version of the same thing"}
{"_id": "q_7887", "text": "Return the first and last day_start_ut\n\n Returns\n -------\n first_day_start_ut: int\n last_day_start_ut: int"}
{"_id": "q_7888", "text": "Recover pre-computed travel_impedance between od-pairs from the database.\n\n Returns\n -------\n values: number | Pandas DataFrame"}
{"_id": "q_7889", "text": "Update the profile with the new labels.\n Each new label should have the same departure_time.\n\n Parameters\n ----------\n new_labels: list[LabelTime]\n\n Returns\n -------\n added: bool\n whether new_pareto_tuple was added to the set of pareto-optimal tuples"}
{"_id": "q_7890", "text": "Get the pareto_optimal set of Labels, given a departure time.\n\n Parameters\n ----------\n dep_time : float, int\n time in unix seconds\n first_leg_can_be_walk : bool, optional\n whether to allow walking to target to be included into the profile\n (I.e. whether this function is called when scanning a pseudo-connection:\n \"double\" walks are not allowed.)\n connection_arrival_time: float, int, optional\n used for computing the walking label if dep_time, i.e., connection.arrival_stop_next_departure_time, is infinity)\n connection: connection object\n\n Returns\n -------\n pareto_optimal_labels : set\n Set of Labels"}
{"_id": "q_7891", "text": "Do the actual import. Copy data and store in connection object.\n\n This function:\n - Creates the tables\n - Imports data (using self.gen_rows)\n - Run any post_import hooks.\n - Creates any indexs\n - Does *not* run self.make_views - those must be done\n after all tables are loaded."}
{"_id": "q_7892", "text": "Get mean latitude AND longitude of stops\n\n Parameters\n ----------\n gtfs: GTFS\n\n Returns\n -------\n mean_lat : float\n mean_lon : float"}
{"_id": "q_7893", "text": "Writes data from get_stats to csv file\n\n Parameters\n ----------\n gtfs: GTFS\n path_to_csv: str\n filepath to the csv file to be generated\n re_write:\n insted of appending, create a new one."}
{"_id": "q_7894", "text": "Return the frequency of all types of routes per day.\n\n Parameters\n -----------\n gtfs: GTFS\n\n Returns\n -------\n pandas.DataFrame with columns\n route_I, type, frequency"}
{"_id": "q_7895", "text": "A Python decorator for printing out the execution time for a function.\n\n Adapted from:\n www.andreas-jung.com/contents/a-python-decorator-for-measuring-the-execution-time-of-methods"}
{"_id": "q_7896", "text": "When receiving the filled out form, check for valid access."}
{"_id": "q_7897", "text": "Return a form class for a given string pointing to a lockdown form."}
{"_id": "q_7898", "text": "Check if each request is allowed to access the current resource."}
{"_id": "q_7899", "text": "Handle redirects properly."}
{"_id": "q_7900", "text": "Get the top or flop N results based on a column value for each specified group columns\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `value` (*str*): column name on which you will rank the results\n - `limit` (*int*): Number to specify the N results you want to retrieve.\n Use a positive number x to retrieve the first x results.\n Use a negative number -x to retrieve the last x results.\n\n *optional :*\n - `order` (*str*): `\"asc\"` or `\"desc\"` to sort by ascending ou descending order. By default : `\"asc\"`.\n - `group` (*str*, *list of str*): name(s) of columns on which you want to perform the group operation.\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lili | 1 | 50 |\n | lili | 1 | 20 |\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |\n\n\n ```cson\n top:\n value: 'value'\n limit: 4\n order: 'asc'\n ```\n\n **Output**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lala | 1 | 250 |\n | toto | 1 | 300 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |"}
{"_id": "q_7901", "text": "Get the top or flop N results based on a function and a column value that agregates the input.\n The result is composed by all the original lines including only lines corresponding\n to the top groups\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `value` (*str*): Name of the column name on which you will rank the results.\n - `limit` (*int*): Number to specify the N results you want to retrieve from the sorted values.\n - Use a positive number x to retrieve the first x results.\n - Use a negative number -x to retrieve the last x results.\n - `aggregate_by` (*list of str*)): name(s) of columns you want to aggregate\n\n *optional :*\n - `order` (*str*): `\"asc\"` or `\"desc\"` to sort by ascending ou descending order. By default : `\"asc\"`.\n - `group` (*str*, *list of str*): name(s) of columns on which you want to perform the group operation.\n - `function` : Function to use to group over the group column\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lili | 1 | 50 |\n | lili | 1 | 20 |\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |\n\n ```cson\n top_group:\n group: [\"Category\"]\n value: 'value'\n aggregate_by: [\"variable\"]\n limit: 2\n order: \"desc\"\n ```\n\n **Output**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |"}
{"_id": "q_7902", "text": "Convert string column into datetime column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to format\n - `format` (*str*): current format of the values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))"}
{"_id": "q_7903", "text": "Convert datetime column into string column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - column (*str*): name of the column to format\n - format (*str*): format of the result values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n\n *optional :*\n - new_column (*str*): name of the output column. By default `column` is overwritten."}
{"_id": "q_7904", "text": "Convert the format of a date\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to change the format\n - `output_format` (*str*): format of the output values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n\n *optional :*\n - `input_format` (*str*): format of the input values (by default let the parser detect it)\n - `new_column` (*str*): name of the output column (by default overwrite `column`)\n - `new_time_zone` (*str*): name of new time zone (by default no time zone conversion is done)\n\n ---\n\n ### Example\n\n **Input**\n\n label | date\n :------:|:----:\n France | 2017-03-22\n Europe | 2016-03-22\n\n ```cson\n change_date_format:\n column: 'date'\n input_format: '%Y-%m-%d'\n output_format: '%Y-%m'\n ```\n\n Output :\n\n label | date\n :------:|:----:\n France | 2017-03\n Europe | 2016-03"}
{"_id": "q_7905", "text": "Convert column's type into type\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to convert\n - `type` (*str*): output type. It can be :\n - `\"int\"` : integer type\n - `\"float\"` : general number type\n - `\"str\"` : text type\n\n *optional :*\n - `new_column` (*str*): name of the output column.\n By default the `column` arguments is modified.\n\n ---\n\n ### Example\n\n **Input**\n\n | Column 1 | Column 2 | Column 3 |\n |:-------:|:--------:|:--------:|\n | 'one' | '2014' | 30.0 |\n | 'two' | 2015.0 | '1' |\n | 3.1 | 2016 | 450 |\n\n ```cson\n postprocess: [\n cast:\n column: 'Column 1'\n type: 'str'\n cast:\n column: 'Column 2'\n type: 'int'\n cast:\n column: 'Column 3'\n type: 'float'\n ]\n ```\n\n **Output**\n\n | Column 1 | Column 2 | Column 3 |\n |:-------:|:------:|:--------:|\n | 'one' | 2014 | 30.0 |\n | 'two' | 2015 | 1.0 |\n | '3.1' | 2016 | 450.0 |"}
{"_id": "q_7906", "text": "Return a line for each bars of a waterfall chart, totals, groups, subgroups.\n Compute the variation and variation rate for each line.\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `date` (*str*): name of the column that id the period of each lines\n - `value` (*str*): name of the column that contains the vaue for each lines\n - `start` (*dict*):\n - `label`: text displayed under the first master column\n - `id`: value in the date col that id lines for the first period\n - `end` (*dict*):\n - `label`: text displayed under the last master column\n - `id`: value in the date col that id lines for the second period\n\n *optional :*\n - `upperGroup` (*dict*):\n - `id`: name of the column that contains upperGroups unique IDs\n - `label`: not required, text displayed under each upperGroups bars,\n using ID when it's absent\n - `groupsOrder`: not required, order of upperGroups\n - `insideGroup` (*dict*):\n - `id`: name of the column that contains insideGroups unique IDs\n - `label`: not required, text displayed under each insideGroups bars,\n using ID when it's absent\n - `groupsOrder`: not required, order of insideGroups\n - `filters` (*list*): columns to filters on\n\n ---\n\n ### Example\n\n **Input**\n\n | product_id | played | date | ord | category_id | category_name |\n |:------------:|:--------:|:------:|:-----:|:-------------:|:---------------:|\n | super clap | 12 | t1 | 1 | clap | Clap |\n | clap clap | 1 | t1 | 10 | clap | Clap |\n | tac | 1 | t1 | 1 | snare | Snare |\n | super clap | 10 | t2 | 1 | clap | Clap |\n | tac | 100 | t2 | 1 | snare | Snare |\n | bom | 1 | t2 | 1 | tom | Tom |\n\n\n ```cson\n waterfall:\n upperGroup:\n id: 'category_id'\n label: 'category_name'\n insideGroup:\n id: 'product_id'\n groupsOrder: 'ord'\n date: 'date'\n value: 'played'\n start:\n label: 'Trimestre 1'\n id: 't1'\n end:\n label: 'Trimester 2'\n id: 't2'\n ```\n\n **Output**\n\n | value | label | variation | groups | type | order |\n |:-------:|:-----------:|:-----------:|:--------:|:------:|:-------:|\n | 14 | Trimestre 1 | NaN | NaN | NaN | NaN |\n | -3 | Clap | -0.230769 | clap | parent | NaN |\n | -2 | super clap | -0.166667 | clap | child | 1 |\n | -1 | clap clap | -1 | clap | child | 10 |\n | 99 | Snare | 99 | snare | parent | NaN |\n | 99 | tac | 99 | snare | child | 1 |\n | 1 | Tom | inf | tom | parent | NaN |\n | 1 | bom | inf | tom | child | 1 |\n | 111 | Trimester 2 | NaN | NaN | NaN | NaN |"}
{"_id": "q_7907", "text": "Get the absolute numeric value of each element of a column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column\n\n *optional :*\n - `new_column` (*str*): name of the column containing the result.\n By default, no new column will be created and `column` will be replaced.\n\n ---\n\n ### Example\n\n **Input**\n\n | ENTITY | VALUE_1 | VALUE_2 |\n |:------:|:-------:|:-------:|\n | A | -1.512 | -1.504 |\n | A | 0.432 | 0.14 |\n\n ```cson\n absolute_values:\n column: 'VALUE_1'\n new_column: 'Pika'\n ```\n\n **Output**\n\n | ENTITY | VALUE_1 | VALUE_2 | Pika |\n |:------:|:-------:|:-------:|:-----:|\n | A | -1.512 | -1.504 | 1.512 |\n | A | 0.432 | 0.14 | 0.432 |"}
{"_id": "q_7908", "text": "Pivot the data. Reverse operation of melting\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `index` (*list*): names of index columns.\n - `column` (*str*): column name to pivot on\n - `value` (*str*): column name containing the value to fill the pivoted df\n\n *optional :*\n - `agg_function` (*str*): aggregation function to use among 'mean' (default), 'count', 'mean', 'max', 'min'\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | wave | year | value |\n |:--------:|:-------:|:--------:|:-----:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | 250 |\n | toto | wave 1 | 2016 | 450 |\n\n ```cson\n pivot:\n index: ['variable','wave']\n column: 'year'\n value: 'value'\n ```\n\n **Output**\n\n | variable | wave | 2014 | 2015 | 2015 |\n |:--------:|:-------:|:------:|:----:|:----:|\n | toto | wave 1 | 300 | 250 | 450 |"}
{"_id": "q_7909", "text": "Pivot a dataframe by group of variables\n\n ---\n\n ### Parameters\n\n *mandatory :*\n * `variable` (*str*): name of the column used to create the groups.\n * `value` (*str*): name of the column containing the value to fill the pivoted df.\n * `new_columns` (*list of str*): names of the new columns.\n * `groups` (*dict*): names of the groups with their corresponding variables.\n **Warning**: the list of variables must have the same order as `new_columns`\n\n *optional :*\n * `id_cols` (*list of str*) : names of other columns to keep, default `None`.\n\n ---\n\n ### Example\n\n **Input**\n\n | type | variable | montant |\n |:----:|:----------:|:-------:|\n | A | var1 | 5 |\n | A | var1_evol | 0.3 |\n | A | var2 | 6 |\n | A | var2_evol | 0.2 |\n\n ```cson\n pivot_by_group :\n id_cols: ['type']\n variable: 'variable'\n value: 'montant'\n new_columns: ['value', 'variation']\n groups:\n 'Group 1' : ['var1', 'var1_evol']\n 'Group 2' : ['var2', 'var2_evol']\n ```\n\n **Ouput**\n\n | type | variable | value | variation |\n |:----:|:----------:|:-------:|:---------:|\n | A | Group 1 | 5 | 0.3 |\n | A | Group 2 | 6 | 0.2 |"}
{"_id": "q_7910", "text": "Aggregate values by groups.\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `group_cols` (*list*): list of columns used to group data\n - `aggregations` (*dict*): dictionnary of values columns to group as keys and aggregation\n function to use as values (See the [list of aggregation functions](\n https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#aggregation))\n\n ---\n\n ### Example\n\n **Input**\n\n | ENTITY | YEAR | VALUE_1 | VALUE_2 |\n |:------:|:----:|:-------:|:-------:|\n | A | 2017 | 10 | 3 |\n | A | 2017 | 20 | 1 |\n | A | 2018 | 10 | 5 |\n | A | 2018 | 30 | 4 |\n | B | 2017 | 60 | 4 |\n | B | 2017 | 40 | 3 |\n | B | 2018 | 50 | 7 |\n | B | 2018 | 60 | 6 |\n\n ```cson\n groupby:\n group_cols: ['ENTITY', 'YEAR']\n aggregations:\n 'VALUE_1': 'sum',\n 'VALUE_2': 'mean'\n ```\n\n **Output**\n\n | ENTITY | YEAR | VALUE_1 | VALUE_2 |\n |:------:|:----:|:-------:|:-------:|\n | A | 2017 | 30 | 2.0 |\n | A | 2018 | 40 | 4.5 |\n | B | 2017 | 100 | 3.5 |\n | B | 2018 | 110 | 6.5 |"}
{"_id": "q_7911", "text": "DEPRECATED - please use `compute_cumsum` instead"}
{"_id": "q_7912", "text": "Decorator to catch an exception and don't raise it.\n Logs information if a decorator failed.\n\n Note:\n We don't want possible exceptions during logging to be raised.\n This is used to decorate any function that gets executed\n before or after the execution of the decorated function."}
{"_id": "q_7913", "text": "Replaces data values and column names according to the locale\n\n ---\n\n ### Parameters\n\n - `values` (optional: dict):\n - key: term to be replaced\n - value:\n - key: the locale e.g. 'en' or 'fr'\n - value: term's translation\n - `columns` (optional: dict):\n - key: columns name to be replaced\n - value:\n - key: the locale e.g. 'en' or 'fr'\n - value: column name's translation\n - `locale` (optional: str): the locale you want to use.\n By default the client locale is used.\n\n ---\n\n ### Example\n\n **Input**\n\n | label | value |\n |:----------------:|:-----:|\n | France | 100 |\n | Europe wo France | 500 |\n\n ```cson\n rename:\n values:\n 'Europe wo France':\n 'en': 'Europe excl. France'\n 'fr': 'Europe excl. France'\n columns:\n 'value':\n 'en': 'revenue'\n 'fr': 'revenue'\n ```\n\n **Output**\n\n | label | revenue |\n |:-------------------:|:-------:|\n | France | 100 |\n | Europe excl. France | 500 |"}
{"_id": "q_7914", "text": "Aggregates data to reproduce \"All\" category for requester\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `id_cols` (*list*): the columns id to group\n - `cols_for_combination` (*dict*): colums corresponding to\n the filters as key and their default value as value\n\n *optional :*\n - `agg_func` (*str*, *list* or *dict*): the function(s) to use for aggregating the data.\n Accepted combinations are:\n - string function name\n - list of functions and/or function names, e.g. [np.sum, 'mean']\n - dict of axis labels -> functions, function names or list of such."}
{"_id": "q_7915", "text": "Get the value of a function's parameter based on its signature\n and the call's args and kwargs.\n\n Example:\n >>> def foo(a, b, c=3, d=4):\n ... pass\n ...\n >>> # what would be the value of \"c\" when calling foo(1, b=2, c=33) ?\n >>> get_param_value_from_func_call('c', foo, [1], {'b': 2, 'c': 33})\n 33"}
{"_id": "q_7916", "text": "Creates aggregates following a given hierarchy\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `levels` (*list of str*): name of the columns composing the hierarchy (from the top to the bottom level).\n - `groupby_vars` (*list of str*): name of the columns with value to aggregate.\n - `extra_groupby_cols` (*list of str*) optional: other columns used to group in each level.\n\n *optional :*\n - `var_name` (*str*) : name of the result variable column. By default, `\u201ctype\u201d`.\n - `value_name` (*str*): name of the result value column. By default, `\u201cvalue\u201d`.\n - `agg_func` (*str*): name of the aggregation operation. By default, `\u201csum\u201d`.\n - `drop_levels` (*list of str*): the names of the levels that you may want to discard from the output.\n\n ---\n\n ### Example\n\n **Input**\n\n | Region | City | Population |\n |:---------:|:--------:|:-----------:|\n | Idf | Panam| 200 |\n | Idf | Antony | 50 |\n | Nord | Lille | 20 |\n\n ```cson\n roll_up:\n levels: [\"Region\", \"City\"]\n groupby_vars: \"Population\"\n ```\n\n **Output**\n\n | Region | City | Population | value | type |\n |:---------:|:--------:|:-----------:|:--------:|:------:|\n | Idf | Panam| 200 | Panam | City |\n | Idf | Antony | 50 | Antony | City |\n | Nord | Lille | 20 | Lille | City |\n | Idf | Nan | 250 | Idf | Region |\n | Nord | Nan | 20 | Nord | Region |"}
{"_id": "q_7917", "text": "Keep the row of the data corresponding to the minimal value in a column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (str): name of the column containing the value you want to keep the minimum\n\n *optional :*\n - `groups` (*str or list(str)*): name of the column(s) used for 'groupby' logic\n (the function will return the argmax by group)\n ---\n\n ### Example\n\n **Input**\n\n | variable | wave | year | value |\n |:--------:|:-------:|:--------:|:-----:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | 250 |\n | toto | wave 1 | 2016 | 450 |\n\n ```cson\n argmin:\n column: 'year'\n ]\n ```\n\n **Output**\n\n | variable | wave | year | value |\n |:--------:|:-------:|:--------:|:-----:|\n | toto | wave 1 | 2015 | 250 |"}
{"_id": "q_7918", "text": "Can fill NaN values from a column with a given value or a column\n\n ---\n\n ### Parameters\n\n - `column` (*str*): name of column you want to fill\n - `value`: NaN will be replaced by this value\n - `column_value`: NaN will be replaced by value from this column\n\n *NOTE*: You must set either the 'value' parameter or the 'column_value' parameter\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | wave | year | my_value |\n |:--------:|:-------:|:--------:|:--------:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | |\n | toto | wave 1 | 2016 | 450 |\n\n ```cson\n fillna:\n column: 'my_value'\n value: 0\n ```\n\n **Output**\n\n | variable | wave | year | my_value |\n |:--------:|:-------:|:--------:|:--------:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | 0 |\n | toto | wave 1 | 2016 | 450 |"}
{"_id": "q_7919", "text": "add a human readable offset to `dateobj` and return corresponding date.\n\n rely on `pandas.Timedelta` and add the following extra shortcuts:\n - \"w\", \"week\" and \"weeks\" for a week (i.e. 7days)\n - \"month', \"months\" for a month (i.e. no day computation, just increment the month)\n - \"y\", \"year', \"years\" for a year (i.e. no day computation, just increment the year)"}
{"_id": "q_7920", "text": "return `dateobj` + `nb_months`\n\n If landing date doesn't exist (e.g. february, 30th), return the last\n day of the landing month.\n\n >>> add_months(date(2018, 1, 1), 1)\n datetime.date(2018, 1, 1)\n >>> add_months(date(2018, 1, 1), -1)\n datetime.date(2017, 12, 1)\n >>> add_months(date(2018, 1, 1), 25)\n datetime.date(2020, 2, 1)\n >>> add_months(date(2018, 1, 1), -25)\n datetime.date(2015, 12, 1)\n >>> add_months(date(2018, 1, 31), 1)\n datetime.date(2018, 2, 28)"}
{"_id": "q_7921", "text": "return `dateobj` + `nb_years`\n\n If landing date doesn't exist (e.g. february, 30th), return the last\n day of the landing month.\n\n >>> add_years(date(2018, 1, 1), 1)\n datetime.date(2019, 1, 1)\n >>> add_years(date(2018, 1, 1), -1)\n datetime.date(2017, 1, 1)\n >>> add_years(date(2020, 2, 29), 1)\n datetime.date(2021, 2, 28)\n >>> add_years(date(2020, 2, 29), -1)\n datetime.date(2019, 2, 28)"}
{"_id": "q_7922", "text": "parse `datestr` and return corresponding date object.\n\n `datestr` should be a string matching `date_fmt` and parseable by `strptime`\n but some offset can also be added using `(datestr) + OFFSET` or `(datestr) -\n OFFSET` syntax. When using this syntax, `OFFSET` should be understable by\n `pandas.Timedelta` (cf.\n http://pandas.pydata.org/pandas-docs/stable/timedeltas.html) and `w`, `week`\n `month` and `year` offset keywords are also accepted. `datestr` MUST be wrapped\n with parenthesis.\n\n Additionally, the following symbolic names are supported: `TODAY`,\n `YESTERDAY`, `TOMORROW`.\n\n Example usage:\n\n >>> parse_date('2018-01-01', '%Y-%m-%d') datetime.date(2018, 1, 1)\n parse_date('(2018-01-01) + 1day', '%Y-%m-%d') datetime.date(2018, 1, 2)\n parse_date('(2018-01-01) + 2weeks', '%Y-%m-%d') datetime.date(2018, 1, 15)\n\n Parameters: `datestr`: the date to parse, formatted as `date_fmt`\n `date_fmt`: expected date format\n\n Returns: The `date` object. If date could not be parsed, a ValueError will\n be raised."}
{"_id": "q_7923", "text": "Filter your dataframe's rows by date.\n\n This function will interpret `start`, `stop` and `atdate` and build\n the corresponding date range. The caller must specify either:\n\n - `atdate`: keep all rows matching this date exactly,\n - `start`: keep all rows from this date onwards,\n - `stop`: keep all rows with dates before this one,\n - `start` and `stop`: keep all rows between `start` and `stop`.\n\n Any other combination will raise an error. The lower bound of the date range\n will be included, the upper bound will be excluded.\n\n When specified, `start`, `stop` and `atdate` values are expected to match the\n `date_format` format or a known symbolic value (i.e. 'TODAY', 'YESTERDAY' or 'TOMORROW').\n\n Additionally, the offset syntax \"(date) + offset\" is also supported (mind\n the parenthesis around the date string). In that case, the offset must be\n one of the syntaxes supported by `pandas.Timedelta` (see [pandas doc](\n http://pandas.pydata.org/pandas-docs/stable/timedeltas.html))\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `date_col` (*str*): the name of the dataframe's column to filter on\n\n *optional :*\n - `date_format` (*str*): expected date format in column `date_col` (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n - `start` (*str*): if specified, lower bound (included) of the date range\n - `stop` (*str*): if specified, upper bound (excluded) of the date range\n - `atdate` (*str*): if specified, the exact date we're filtering on"}
{"_id": "q_7924", "text": "Add a column to the dataframe according to the groupby logic on group_cols\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the desired column you need percentage on\n\n *optional :*\n - `group_cols` (*list*): names of columns for the groupby logic\n - `new_column` (*str*): name of the output column. By default `column` will be overwritten.\n\n ---\n\n **Input**\n\n | gender | sport | number |\n |:------:|:----------:|:------:|\n | male | bicycle | 17 |\n | female | basketball | 17 |\n | male | basketball | 3 |\n | female | football | 7 |\n | female | running | 30 |\n | male | running | 20 |\n | male | football | 21 |\n | female | bicycle | 17 |\n\n ```cson\n percentage:\n new_column: 'number_percentage'\n column: 'number'\n group_cols: ['sport']\n ```\n\n **Output**\n\n | gender | sport | number | number_percentage |\n |:------:|:----------:|:------:|:-----------------:|\n | male | bicycle | 17 | 50.0 |\n | female | basketball | 17 | 85.0 |\n | male | basketball | 3 | 15.0 |\n | female | football | 7 | 25.0 |\n | female | running | 30 | 60.0 |\n | male | running | 20 | 40.0 |\n | male | football | 21 | 75.0 |\n | female | bicycle | 17 | 50.0 |"}
{"_id": "q_7925", "text": "Get descriptor base path if string or return None."}
{"_id": "q_7926", "text": "Validate this Data Package."}
{"_id": "q_7927", "text": "Push Data Package to storage.\n\n All parameters should be used as keyword arguments.\n\n Args:\n descriptor (str): path to descriptor\n backend (str): backend name like `sql` or `bigquery`\n backend_options (dict): backend options mentioned in backend docs"}
{"_id": "q_7928", "text": "Pull Data Package from storage.\n\n All parameters should be used as keyword arguments.\n\n Args:\n descriptor (str): path where to store descriptor\n name (str): name of the pulled datapackage\n backend (str): backend name like `sql` or `bigquery`\n backend_options (dict): backend options mentioned in backend docs"}
{"_id": "q_7929", "text": "Convert resource's path and name to storage's table name.\n\n Args:\n path (str): resource path\n name (str): resource name\n\n Returns:\n str: table name"}
{"_id": "q_7930", "text": "Restore schemas after having been made compatible with storage schemas.\n\n Foreign-key-related operations.\n\n Args:\n list: resources from storage\n\n Returns:\n list: restored resources"}
{"_id": "q_7931", "text": "It is possible for some of gdb's output to be read before it completely finished its response.\n In that case, a partial mi response was read, which cannot be parsed into structured data.\n We want to ALWAYS parse complete mi records. To do this, we store a buffer of gdb's\n output if the output did not end in a newline.\n\n Args:\n raw_output: Contents of the gdb mi output\n buf (str): Buffered gdb response from the past. This is incomplete and needs to be prepended to\n gdb's next output.\n\n Returns:\n (raw_output, buf)"}
{"_id": "q_7932", "text": "Write to gdb process. Block while parsing responses from gdb for a maximum of timeout_sec.\n\n Args:\n mi_cmd_to_write (str or list): String to write to gdb. If list, it is joined by newlines.\n timeout_sec (float): Maximum number of seconds to wait for response before exiting. Must be >= 0.\n raise_error_on_timeout (bool): If read_response is True, raise error if no response is received\n read_response (bool): Block and read response. If there is a separate thread running,\n this can be false, and the reading thread read the output.\n Returns:\n List of parsed gdb responses if read_response is True, otherwise []\n Raises:\n NoGdbProcessError if there is no gdb subprocess running\n TypeError if mi_cmd_to_write is not valid"}
{"_id": "q_7933", "text": "Get response from GDB, and block while doing so. If GDB does not have any response ready to be read\n by timeout_sec, an exception is raised.\n\n Args:\n timeout_sec (float): Maximum time to wait for response. Must be >= 0. Will return after\n timeout_sec elapses.\n raise_error_on_timeout (bool): Whether an exception should be raised if no response was found\n after timeout_sec\n\n Returns:\n List of parsed GDB responses, returned from gdbmiparser.parse_response, with the\n additional key 'stream' which is either 'stdout' or 'stderr'\n\n Raises:\n GdbTimeoutError if response is not received within timeout_sec\n ValueError if select returned unexpected file number\n NoGdbProcessError if there is no gdb subprocess running"}
{"_id": "q_7934", "text": "Get responses on windows. Assume no support for select and use a while loop."}
{"_id": "q_7935", "text": "Get responses on unix-like system. Use select to wait for output."}
{"_id": "q_7936", "text": "Read count characters starting at self.index,\n and return those characters as a string"}
{"_id": "q_7937", "text": "Parse gdb mi text and turn it into a dictionary.\n\n See https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Stream-Records.html#GDB_002fMI-Stream-Records\n for details on types of gdb mi output.\n\n Args:\n gdb_mi_text (str): String output from gdb\n\n Returns:\n dict with the following keys:\n type (either 'notify', 'result', 'console', 'log', 'target', 'done'),\n message (str or None),\n payload (str, list, dict, or None)"}
{"_id": "q_7938", "text": "Get notify message and payload dict"}
{"_id": "q_7939", "text": "Optimize by SGD, AdaGrad, or AdaDelta."}
{"_id": "q_7940", "text": "Return updates in the training."}
{"_id": "q_7941", "text": "Get parameters to be optimized."}
{"_id": "q_7942", "text": "Compute first glimpse position using down-sampled image."}
{"_id": "q_7943", "text": "All codes that create parameters should be put into 'setup' function."}
{"_id": "q_7944", "text": "Build the computation graph here."}
{"_id": "q_7945", "text": "Process all data with given function.\n The scheme of function should be x,y -> x,y."}
{"_id": "q_7946", "text": "Make targets be one-hot vectors."}
{"_id": "q_7947", "text": "Print dataset statistics."}
{"_id": "q_7948", "text": "We train over mini-batches and evaluate periodically."}
{"_id": "q_7949", "text": "Sample outputs from LM."}
{"_id": "q_7950", "text": "Compute the alignment weights based on the previous state."}
{"_id": "q_7951", "text": "Pad sequences to given length in the left or right side."}
{"_id": "q_7952", "text": "RMSPROP optimization core."}
{"_id": "q_7953", "text": "Run the model with validation data and return costs."}
{"_id": "q_7954", "text": "This function will be called after each iteration."}
{"_id": "q_7955", "text": "Create inner loop variables."}
{"_id": "q_7956", "text": "Internal scan with dummy input variables."}
{"_id": "q_7957", "text": "Momentum SGD optimization core."}
{"_id": "q_7958", "text": "Skip N batches in the training."}
{"_id": "q_7959", "text": "Load parameters for the training.\n This method can load free parameters and resume the training progress."}
{"_id": "q_7960", "text": "Train the model and return costs."}
{"_id": "q_7961", "text": "Run one training iteration."}
{"_id": "q_7962", "text": "Run one valid iteration, return true if to continue training."}
{"_id": "q_7963", "text": "Get specified split of data."}
{"_id": "q_7964", "text": "Report usage of training parameters."}
{"_id": "q_7965", "text": "An alias of deepy.tensor.var."}
{"_id": "q_7966", "text": "Create vars given a dataset and set test values.\n Useful when dataset is already defined."}
{"_id": "q_7967", "text": "Create a shared theano scalar value."}
{"_id": "q_7968", "text": "Stack encoding layers; this must be done before stacking decoding layers."}
{"_id": "q_7969", "text": "Stack decoding layers."}
{"_id": "q_7970", "text": "Encode given input."}
{"_id": "q_7971", "text": "Register the layer so that its params will be trained,\n but the output of the layer will not be stacked."}
{"_id": "q_7972", "text": "Monitoring the outputs of each layer.\n Useful for troubleshooting convergence problems."}
{"_id": "q_7973", "text": "Return all parameters."}
{"_id": "q_7974", "text": "Set up variables."}
{"_id": "q_7975", "text": "Save parameters to file."}
{"_id": "q_7976", "text": "Load parameters from file."}
{"_id": "q_7977", "text": "Print network statistics."}
{"_id": "q_7978", "text": "Register parameters."}
{"_id": "q_7979", "text": "Register updates that will only be executed in training phase."}
{"_id": "q_7980", "text": "Register monitors; they should be tuples of (name, Theano variable)."}
{"_id": "q_7981", "text": "Get the L2 norm of multiple tensors.\n This function is taken from blocks."}
{"_id": "q_7982", "text": "dumps one element to file_obj, a file opened in write mode"}
{"_id": "q_7983", "text": "Load parameters to the block."}
{"_id": "q_7984", "text": "Creates |oauth2| request elements."}
{"_id": "q_7985", "text": "We need to override this method to fix Facebooks naming deviation."}
{"_id": "q_7986", "text": "Google doesn't accept the client ID and secret to be present both in the\n request parameters and in the basic authorization header of the access\n token request."}
{"_id": "q_7987", "text": "Login handler, must accept both GET and POST to be able to use OpenID."}
{"_id": "q_7988", "text": "Replaces all values that are single-item iterables with the value of its\n index 0.\n\n :param dict dict_:\n Dictionary to normalize.\n\n :returns:\n Normalized dictionary."}
{"_id": "q_7989", "text": "Converts list of tuples to dictionary with duplicate keys converted to\n lists.\n\n :param list items:\n List of tuples.\n\n :returns:\n :class:`dict`"}
{"_id": "q_7990", "text": "Parses response body from JSON, XML or query string.\n\n :param body:\n string\n\n :returns:\n :class:`dict`, :class:`list` if input is JSON or query string,\n :class:`xml.etree.ElementTree.Element` if XML."}
{"_id": "q_7991", "text": "Returns a provider class.\n\n :param class_name: :class:`string` or\n :class:`authomatic.providers.BaseProvider` subclass."}
{"_id": "q_7992", "text": "Creates the value for ``Set-Cookie`` HTTP header.\n\n :param bool delete:\n If ``True`` the cookie value will be ``deleted`` and the\n Expires value will be ``Thu, 01-Jan-1970 00:00:01 GMT``."}
{"_id": "q_7993", "text": "Creates signature for the session."}
{"_id": "q_7994", "text": "Converts the value to a signed string with timestamp.\n\n :param value:\n Object to be serialized.\n\n :returns:\n Serialized value."}
{"_id": "q_7995", "text": "``True`` if credentials are valid, ``False`` if expired."}
{"_id": "q_7996", "text": "Returns ``True`` if credentials expire sooner than specified.\n\n :param int seconds:\n Number of seconds.\n\n :returns:\n ``True`` if credentials expire sooner than specified,\n else ``False``."}
{"_id": "q_7997", "text": "Converts the credentials to a percent encoded string to be stored for\n later use.\n\n :returns:\n :class:`string`"}
{"_id": "q_7998", "text": "Return true if string is binary data."}
{"_id": "q_7999", "text": "The whole response content."}
{"_id": "q_8000", "text": "Creates |oauth1| request elements."}
{"_id": "q_8001", "text": "Decorator for Flask view functions."}
{"_id": "q_8002", "text": "Launches the OpenID authentication procedure."}
{"_id": "q_8003", "text": "Logs a message with pre-formatted prefix.\n\n :param int level:\n Logging level as specified in the\n `logging module <http://docs.python.org/2/library/logging.html>`_ of\n the Python standard library.\n\n :param str msg:\n The actual message."}
{"_id": "q_8004", "text": "Splits given url to url base and params converted to list of tuples."}
{"_id": "q_8005", "text": "Deletes this worker's subscription."}
{"_id": "q_8006", "text": "Workers all share the same subscription so that tasks are\n distributed across all workers."}
{"_id": "q_8007", "text": "Enqueues a function for the task queue to execute."}
{"_id": "q_8008", "text": "Standalone PSQ worker.\n\n The queue argument must be the full importable path to a psq.Queue\n instance.\n\n Example usage:\n\n psqworker config.q\n\n psqworker --path /opt/app queues.fast"}
{"_id": "q_8009", "text": "Gets the result of the task.\n\n Arguments:\n timeout: Maximum seconds to wait for a result before raising a\n TimeoutError. If set to None, this will wait forever. If the\n queue doesn't store results and timeout is None, this call will\n never return."}
{"_id": "q_8010", "text": "This function is the decorator used to wrap a Sanic route.\n In the simplest case, simply use the default parameters to allow all\n origins in what is the most permissive configuration. If this method\n modifies state or performs authentication which may be brute-forced, you\n should add some degree of protection, such as Cross Site Forgery\n Request protection.\n\n :param origins:\n The origin, or list of origins to allow requests from.\n The origin(s) may be regular expressions, case-sensitive strings,\n or else an asterisk\n\n Default : '*'\n :type origins: list, string or regex\n\n :param methods:\n The method or list of methods which the allowed origins are allowed to\n access for non-simple requests.\n\n Default : [GET, HEAD, POST, OPTIONS, PUT, PATCH, DELETE]\n :type methods: list or string\n\n :param expose_headers:\n The header or list of headers which are safe to expose to the API of a CORS API\n specification.\n\n Default : None\n :type expose_headers: list or string\n\n :param allow_headers:\n The header or list of header field names which can be used when this\n resource is accessed by allowed origins. The header(s) may be regular\n expressions, case-sensitive strings, or else an asterisk.\n\n Default : '*', allow all headers\n :type allow_headers: list, string or regex\n\n :param supports_credentials:\n Allows users to make authenticated requests. If true, injects the\n `Access-Control-Allow-Credentials` header in responses. This allows\n cookies and credentials to be submitted across domains.\n\n :note: This option cannot be used in conjunction with a '*' origin\n\n Default : False\n :type supports_credentials: bool\n\n :param max_age:\n The maximum time for which this CORS request may be cached. This value\n is set as the `Access-Control-Max-Age` header.\n\n Default : None\n :type max_age: timedelta, integer, string or None\n\n :param send_wildcard: If True, and the origins parameter is `*`, a wildcard\n `Access-Control-Allow-Origin` header is sent, rather than the\n request's `Origin` header.\n\n Default : False\n :type send_wildcard: bool\n\n :param vary_header:\n If True, the header Vary: Origin will be returned as per the W3\n implementation guidelines.\n\n Setting this header when the `Access-Control-Allow-Origin` is\n dynamically generated (e.g. when there is more than one allowed\n origin, and an Origin other than '*' is returned) informs CDNs and other\n caches that the CORS headers are dynamic, and cannot be cached.\n\n If False, the Vary header will never be injected or altered.\n\n Default : True\n :type vary_header: bool\n\n :param automatic_options:\n Only applies to the `cross_origin` decorator. If True, Sanic-CORS will\n override Sanic's default OPTIONS handling to return CORS headers for\n OPTIONS requests.\n\n Default : True\n :type automatic_options: bool"}
{"_id": "q_8011", "text": "Performs the actual evaluation of Sanic-CORS options and actually\n modifies the response object.\n\n This function is used both in the decorator and the after_request\n callback\n :param sanic.request.Request req:"}
{"_id": "q_8012", "text": "Wraps scalars or string types as a list, or returns the iterable instance."}
{"_id": "q_8013", "text": "Python 3.4 does not have math.isclose, so we need to steal it and add it here."}
{"_id": "q_8014", "text": "Deprecator decorator."}
{"_id": "q_8015", "text": "Attempts to deserialize a bytestring into an audiosegment.\n\n :param bstr: The bytestring serialized via an audiosegment's serialize() method.\n :returns: An AudioSegment object deserialized from `bstr`."}
{"_id": "q_8016", "text": "Returns an AudioSegment created from the given numpy array.\n\n The numpy array must have shape = (num_samples, num_channels).\n\n :param nparr: The numpy array to create an AudioSegment from.\n :returns: An AudioSegment created from the given array."}
{"_id": "q_8017", "text": "Executes a Sox command in a platform-independent manner.\n\n `cmd` must be a format string that includes {inputfile} and {outputfile}."}
{"_id": "q_8018", "text": "Returns a copy of this AudioSegment, but whose silence has been removed.\n\n .. note:: This method requires that you have the program 'sox' installed.\n\n .. warning:: This method uses the program 'sox' to perform the task. While this is very fast for a single\n function call, the IO may add up for large numbers of AudioSegment objects.\n\n :param duration_s: The number of seconds of \"silence\" that must be present in a row to\n be stripped.\n :param threshold_percentage: Silence is defined as any samples whose absolute value is below\n `threshold_percentage * max(abs(samples in this segment))`.\n :param console_output: If True, will pipe all sox output to the console.\n :returns: A copy of this AudioSegment, but whose silence has been removed."}
{"_id": "q_8019", "text": "Transforms the indicated slice of the AudioSegment into the frequency domain and returns the bins\n and the values.\n\n If neither `start_s` or `start_sample` is specified, the first sample of the slice will be the first sample\n of the AudioSegment.\n\n If neither `duration_s` or `num_samples` is specified, the slice will be from the specified start\n to the end of the segment.\n\n .. code-block:: python\n\n # Example for plotting the FFT using this function\n import matplotlib.pyplot as plt\n import numpy as np\n\n seg = audiosegment.from_file(\"furelise.wav\")\n # Just take the first 3 seconds\n hist_bins, hist_vals = seg[1:3000].fft()\n hist_vals_real_normed = np.abs(hist_vals) / len(hist_vals)\n plt.plot(hist_bins / 1000, hist_vals_real_normed)\n plt.xlabel(\"kHz\")\n plt.ylabel(\"dB\")\n plt.show()\n\n .. image:: images/fft.png\n\n :param start_s: The start time in seconds. If this is specified, you cannot specify `start_sample`.\n :param duration_s: The duration of the slice in seconds. If this is specified, you cannot specify `num_samples`.\n :param start_sample: The zero-based index of the first sample to include in the slice.\n If this is specified, you cannot specify `start_s`.\n :param num_samples: The number of samples to include in the slice. If this is specified, you cannot\n specify `duration_s`.\n :param zero_pad: If True and the combination of start and duration result in running off the end of\n the AudioSegment, the end is zero padded to prevent this.\n :returns: np.ndarray of frequencies in Hz, np.ndarray of amount of each frequency\n :raises: ValueError If `start_s` and `start_sample` are both specified and/or if both `duration_s` and\n `num_samples` are specified."}
{"_id": "q_8020", "text": "Yields self's data in chunks of frame_duration_ms.\n\n This function adapted from pywebrtc's example [https://github.com/wiseman/py-webrtcvad/blob/master/example.py].\n\n :param frame_duration_ms: The length of each frame in ms.\n :param zero_pad: Whether or not to zero pad the end of the AudioSegment object to get all\n the audio data out as frames. If not, there may be a part at the end\n of the Segment that is cut off (the part will be <= `frame_duration_ms` in length).\n :returns: A Frame object with properties 'bytes (the data)', 'timestamp (start time)', and 'duration'."}
{"_id": "q_8021", "text": "Normalize the values in the AudioSegment so that its `spl` property\n gives `db`.\n\n .. note:: This method is currently broken - it returns an AudioSegment whose\n values are much smaller than reasonable, yet which yield an SPL value\n that equals the given `db`. Such an AudioSegment will not be serializable\n as a WAV file, which will also break any method that relies on SOX.\n I may remove this method in the future, since the SPL of an AudioSegment is\n pretty questionable to begin with.\n\n :param db: The decibels to normalize average to.\n :returns: A new AudioSegment object whose values are changed so that their\n average is `db`.\n :raises: ValueError if there are no samples in this AudioSegment."}
{"_id": "q_8022", "text": "Reduces others into this one by concatenating all the others onto this one and\n returning the result. Does not modify self, instead, makes a copy and returns that.\n\n :param others: The other AudioSegment objects to append to this one.\n :returns: The concatenated result."}
{"_id": "q_8023", "text": "Get the ID corresponding to the offset which occurs first after the given onset_front_id.\n By `first` I mean the front which contains the offset which is closest to the latest point\n in the onset front. By `after`, I mean that the offset must contain only offsets which\n occur after the latest onset in the onset front.\n\n If there is no appropriate offset front, the id returned is -1."}
{"_id": "q_8024", "text": "Gets an onset_front and an offset_front such that they both occupy at least some of the same\n frequency channels, then returns the portion of each that overlaps with the other."}
{"_id": "q_8025", "text": "Returns an updated segmentation mask such that the input `segmentation_mask` has been updated by segmenting between\n `onset_front_id` and `offset_front_id`, as found in `onset_fronts` and `offset_fronts`, respectively.\n\n This function also returns the onset_fronts and offset_fronts matrices, updated so that any fronts that are of\n less than 3 channels wide are removed.\n\n This function also returns a boolean value indicating whether the onset channel went to completion.\n\n Specifically, segments by doing the following:\n\n - Going across frequencies in the onset_front,\n - add the segment mask ID (the onset front ID) to all samples between the onset_front and the offset_front,\n if the offset_front is in that frequency.\n\n Possible scenarios:\n\n Fronts line up completely:\n\n ::\n\n | | S S S\n | | => S S S\n | | S S S\n | | S S S\n\n Onset front starts before offset front:\n\n ::\n\n | |\n | | S S S\n | | => S S S\n | | S S S\n\n Onset front ends after offset front:\n\n ::\n\n | | S S S\n | | => S S S\n | | S S S\n | |\n\n Onset front starts before and ends after offset front:\n\n ::\n\n | |\n | | => S S S\n | | S S S\n | |\n\n The above three options in reverse:\n\n ::\n\n | |S S| |\n |S S| |S S| |S S|\n |S S| |S S| |S S|\n |S S| | |\n\n There is one last scenario:\n\n ::\n\n | |\n \\ /\n \\ /\n / \\\n | |\n\n Where the offset and onset fronts cross one another. If this happens, we simply\n reverse the indices and accept:\n\n ::\n\n |sss|\n \\sss/\n \\s/\n /s\\\n |sss|\n\n The other option would be to destroy the offset front from the crossover point on, and\n then search for a new offset front for the rest of the onset front."}
{"_id": "q_8026", "text": "Removes all points in the fronts that overlap with the segmentation mask."}
{"_id": "q_8027", "text": "Removes all fronts from `fronts` which are strictly smaller than\n `size` consecutive frequencies in length."}
{"_id": "q_8028", "text": "For each onset front, for each frequency in that front, break the onset front if the signals\n between this frequency's onset and the next frequency's onset are not similar enough.\n\n Specifically:\n If we have the following two frequency channels, and the two O's are part of the same onset front,\n\n ::\n\n [ . O . . . . . . . . . . ]\n [ . . . . O . . . . . . . ]\n\n We compare the signals x and y:\n\n ::\n\n [ . x x x x . . . . . . . ]\n [ . y y y y . . . . . . . ]\n\n And if they are not sufficiently similar (via a DSP correlation algorithm), we break the onset\n front between these two channels.\n\n Once this is done, remove any onset fronts that are less than 3 channels wide."}
{"_id": "q_8029", "text": "Returns a list of segmentation masks each of the same dimension as the input one,\n but where they each have exactly one segment in them and all other samples in them\n are zeroed.\n\n Only bothers to return segments that are larger in total area than `threshold * mask.size`."}
{"_id": "q_8030", "text": "Worker for the ASA algorithm's multiprocessing step."}
{"_id": "q_8031", "text": "Does a lowpass filter over the given data.\n\n :param data: The data (numpy array) to be filtered.\n :param cutoff: The high cutoff in Hz.\n :param fs: The sample rate in Hz of the data.\n :param order: The order of the filter. The higher the order, the tighter the roll-off.\n :returns: Filtered data (numpy array)."}
{"_id": "q_8032", "text": "Launch a Process, return his pid"}
{"_id": "q_8033", "text": "Update the list of the running process and return the list"}
{"_id": "q_8034", "text": "Given an IP, and optionally a date, get the ASN.\n This is the fastest command.\n\n :param ip: IP address to search for\n :param announce_date: Date of the announcement\n\n :rtype: String, ASN."}
{"_id": "q_8035", "text": "Get the full history of an IP. It takes time.\n\n :param ip: IP address to search for\n :param days_limit: Max amount of days to query. (None means no limit)\n\n :rtype: list. For each day in the database: day, asn, block"}
{"_id": "q_8036", "text": "Get the full history of an IP, aggregate the result instead of\n returning one line per day.\n\n :param ip: IP address to search for\n :param days_limit: Max amount of days to query. (None means no limit)\n\n :rtype: list. For each change: FirstDay, LastDay, ASN, Block"}
{"_id": "q_8037", "text": "Unconditionally download the URL into a temporary directory.\n When finished, the file is moved into the real directory.\n This way, another process will not attempt to extract an incomplete file."}
{"_id": "q_8038", "text": "Verify that the file has not already been downloaded."}
{"_id": "q_8039", "text": "Separates the outcome feature from the data and creates the onehot vector for each row."}
{"_id": "q_8040", "text": "Used to check whether the two edge lists have the same edges \n when elements are neither hashable nor sortable."}
{"_id": "q_8041", "text": "Given a list of audit files, rank them using the `measurer` and\n return the features that never deviate more than `similarity_bound`\n across repairs."}
{"_id": "q_8042", "text": "Loads a confusion matrix in a two-level dictionary format.\n\n For example, the confusion matrix of a 75%-accurate model\n that predicted 15 values (and mis-classified 5) may look like:\n {\"A\": {\"A\":10, \"B\": 5}, \"B\": {\"B\":5}}\n\n Note that raw boolean values are translated into strings, such that\n a value that was the boolean True will be returned as the string \"True\"."}
{"_id": "q_8043", "text": "Separates the outcome feature from the data."}
{"_id": "q_8044", "text": "Renders a Page object as a Twitter Bootstrap styled pagination bar.\n Compatible with Bootstrap 3.x and 4.x only.\n\n Example::\n\n {% bootstrap_paginate page_obj range=10 %}\n\n\n Named Parameters::\n\n range - The size of the pagination bar (ie, if set to 10 then, at most,\n 10 page numbers will display at any given time) Defaults to\n None, which shows all pages.\n\n\n size - Accepts \"small\", and \"large\". Defaults to\n None which is the standard size.\n\n show_prev_next - Accepts \"true\" or \"false\". Determines whether or not\n to show the previous and next page links. Defaults to\n \"true\"\n\n\n show_first_last - Accepts \"true\" or \"false\". Determines whether or not\n to show the first and last page links. Defaults to\n \"false\"\n\n previous_label - The text to display for the previous page link.\n Defaults to \"&larr;\"\n\n next_label - The text to display for the next page link. Defaults to\n \"&rarr;\"\n\n first_label - The text to display for the first page link. Defaults to\n \"&laquo;\"\n\n last_label - The text to display for the last page link. Defaults to\n \"&raquo;\"\n\n url_view_name - The named URL to use. Defaults to None. If None, then the\n default template simply appends the url parameter as a\n relative URL link, eg: <a href=\"?page=1\">1</a>\n\n url_param_name - The name of the parameter to use in the URL. If\n url_view_name is set to None, this string is used as the\n parameter name in the relative URL path. If a URL\n name is specified, this string is used as the\n parameter name passed into the reverse() method for\n the URL.\n\n url_extra_args - This is used only in conjunction with url_view_name.\n When referencing a URL, additional arguments may be\n passed in as a list.\n\n url_extra_kwargs - This is used only in conjunction with url_view_name.\n When referencing a URL, additional named arguments\n may be passed in as a dictionary.\n\n url_get_params - The other get parameters to pass, only the page\n number will be overwritten. Use this to preserve\n filters.\n\n url_anchor - The anchor to use in URLs. Defaults to None.\n\n extra_pagination_classes - A space separated list of CSS class names\n that will be added to the top level <ul>\n HTML element. In particular, this can be\n utilized in Bootstrap 4 installations to\n add the appropriate alignment classes from\n Flexbox utilities, eg: justify-content-center"}
{"_id": "q_8045", "text": "Checks for alternative index-url in pip.conf"}
{"_id": "q_8046", "text": "For each package and target check if it is a regression.\n\n This is the case if the main repo contains a package version which is\n higher than in any of the other repos or if any of the other repos does not\n contain that package at all.\n\n :return: a dict indexed by package names containing\n dicts indexed by targets containing a boolean flag"}
{"_id": "q_8047", "text": "Remove trailing junk from the version number.\n\n >>> strip_version_suffix('')\n ''\n >>> strip_version_suffix('None')\n 'None'\n >>> strip_version_suffix('1.2.3-4trusty-20140131-1359-+0000')\n '1.2.3-4'\n >>> strip_version_suffix('1.2.3-foo')\n '1.2.3'"}
{"_id": "q_8048", "text": "For each package check if the version in one repo is equal for all targets.\n\n The version could be different in different repos though.\n\n :return: a dict indexed by package names containing a boolean flag"}
{"_id": "q_8049", "text": "Get the number of packages per target and repository.\n\n :return: a dict indexed by targets containing\n a list of integer values (one for each repo)"}
{"_id": "q_8050", "text": "Get the Jenkins job urls for each target.\n\n The placeholder {pkg} needs to be replaced with the ROS package name.\n\n :return: a dict indexed by targets containing a string"}
{"_id": "q_8051", "text": "Configure all Jenkins CI jobs."}
{"_id": "q_8052", "text": "Resolve all streams on the network.\n\n This function returns all currently available streams from any outlet on \n the network. The network is usually the subnet specified at the local \n router, but may also include a group of machines visible to each other via \n multicast packets (given that the network supports it), or list of \n hostnames. These details may optionally be customized by the experimenter \n in a configuration file (see Network Connectivity in the LSL wiki). \n \n Keyword arguments:\n wait_time -- The waiting time for the operation, in seconds, to search for \n streams. Warning: If this is too short (<0.5s) only a subset \n (or none) of the outlets that are present on the network may \n be returned. (default 1.0)\n \n Returns a list of StreamInfo objects (with empty desc field), any of which \n can subsequently be used to open an inlet. The full description can be\n retrieved from the inlet."}
{"_id": "q_8053", "text": "Resolve all streams with a specific value for a given property.\n\n If the goal is to resolve a specific stream, this method is preferred over \n resolving all streams and then selecting the desired one.\n \n Keyword arguments:\n prop -- The StreamInfo property that should have a specific value (e.g., \n \"name\", \"type\", \"source_id\", or \"desc/manufacturer\").\n value -- The string value that the property should have (e.g., \"EEG\" as \n the type property).\n minimum -- Return at least this many streams. (default 1)\n timeout -- Optionally a timeout of the operation, in seconds. If the \n timeout expires, less than the desired number of streams \n (possibly none) will be returned. (default FOREVER)\n \n Returns a list of matching StreamInfo objects (with empty desc field), any \n of which can subsequently be used to open an inlet.\n \n Example: results = resolve_Stream_byprop(\"type\",\"EEG\")"}
{"_id": "q_8054", "text": "Resolve all streams that match a given predicate.\n\n Advanced query that allows to impose more conditions on the retrieved \n streams; the given string is an XPath 1.0 predicate for the <description>\n node (omitting the surrounding []'s), see also\n http://en.wikipedia.org/w/index.php?title=XPath_1.0&oldid=474981951.\n \n Keyword arguments:\n predicate -- The predicate string, e.g. \"name='BioSemi'\" or \n \"type='EEG' and starts-with(name,'BioSemi') and \n count(description/desc/channels/channel)=32\"\n minimum -- Return at least this many streams. (default 1)\n timeout -- Optionally a timeout of the operation, in seconds. If the \n timeout expires, less than the desired number of streams \n (possibly none) will be returned. (default FOREVER)\n \n Returns a list of matching StreamInfo objects (with empty desc field), any \n of which can subsequently be used to open an inlet."}
{"_id": "q_8055", "text": "Error handler function. Translates an error code into an exception."}
{"_id": "q_8056", "text": "Get a child with a specified name."}
{"_id": "q_8057", "text": "Get the previous sibling in the children list of the parent node.\n\n If a name is provided, the previous sibling with the given name is\n returned."}
{"_id": "q_8058", "text": "Set the element's name. Returns False if the node is empty."}
{"_id": "q_8059", "text": "Set the element's value. Returns False if the node is empty."}
{"_id": "q_8060", "text": "Append a copy of the specified element as a child."}
{"_id": "q_8061", "text": "Remove a given child element, specified by name or as element."}
{"_id": "q_8062", "text": "Obtain the set of currently present streams on the network.\n\n Returns a list of matching StreamInfo objects (with empty desc\n field), any of which can subsequently be used to open an inlet."}
{"_id": "q_8063", "text": "See all tokens associated with a given token.\n PAIR lilas"}
{"_id": "q_8064", "text": "Shows autocomplete results for a given token."}
{"_id": "q_8065", "text": "Compute fuzzy extensions of word.\n FUZZY lilas"}
{"_id": "q_8066", "text": "Compute fuzzy extensions of word that exist in index.\n FUZZYINDEX lilas"}
{"_id": "q_8067", "text": "Try to extract the bigger group of interlinked tokens.\n\n Should generally be used at last in the collectors chain."}
{"_id": "q_8068", "text": "Display this help message."}
{"_id": "q_8069", "text": "Print some useful infos from Redis DB."}
{"_id": "q_8070", "text": "Print raw content of a DB key.\n DBKEY g|u09tyzfe"}
{"_id": "q_8071", "text": "Compute a geohash from latitude and longitude.\n GEOHASH 48.1234 2.9876"}
{"_id": "q_8072", "text": "Get index details for a document by its id.\n INDEX 772210180J"}
{"_id": "q_8073", "text": "Return document linked to word with higher score.\n BESTSCORE lilas"}
{"_id": "q_8074", "text": "Print the distance score between two strings. Use |\u00a0as separator.\n STRDISTANCE rue des lilas|porte des lilas"}
{"_id": "q_8075", "text": "Just sends the request using its send method and returns its response."}
{"_id": "q_8076", "text": "Concurrently converts a list of Requests to Responses.\n\n :param requests: a collection of Request objects.\n :param stream: If False, the content will not be downloaded immediately.\n :param size: Specifies the number of workers to run at a time. If 1, no parallel processing.\n :param exception_handler: Callback function, called when an exception occurred. Params: Request, Exception"}
{"_id": "q_8077", "text": "Removes PEM-encoding from a public key, private key or certificate. If the\n private key is encrypted, the password will be used to decrypt it.\n\n :param data:\n A byte string of the PEM-encoded data\n\n :param password:\n A byte string of the encryption password, or None\n\n :return:\n A 3-element tuple in the format: (key_type, algorithm, der_bytes). The\n key_type will be a unicode string of \"public key\", \"private key\" or\n \"certificate\". The algorithm will be a unicode string of \"rsa\", \"dsa\"\n or \"ec\"."}
{"_id": "q_8078", "text": "Decrypts encrypted ASN.1 data\n\n :param encryption_algorithm_info:\n An instance of asn1crypto.pkcs5.Pkcs5EncryptionAlgorithm\n\n :param encrypted_content:\n A byte string of the encrypted content\n\n :param password:\n A byte string of the encrypted content's password\n\n :return:\n A byte string of the decrypted plaintext"}
{"_id": "q_8079", "text": "Creates an EVP_CIPHER pointer object and determines the buffer size\n necessary for the parameter specified.\n\n :param evp_cipher_ctx:\n An EVP_CIPHER_CTX pointer\n\n :param cipher:\n A unicode string of \"aes128\", \"aes192\", \"aes256\", \"des\",\n \"tripledes_2key\", \"tripledes_3key\", \"rc2\", \"rc4\"\n\n :param key:\n The key byte string\n\n :param data:\n The plaintext or ciphertext as a byte string\n\n :param padding:\n If padding is to be used\n\n :return:\n A 2-element tuple with the first element being an EVP_CIPHER pointer\n and the second being an integer that is the required buffer size"}
{"_id": "q_8080", "text": "Takes a CryptoAPI RSA private key blob and converts it into the ASN.1\n structures for the public and private keys\n\n :param bit_size:\n The integer bit size of the key\n\n :param blob_struct:\n An instance of the advapi32.RSAPUBKEY struct\n\n :param blob:\n A byte string of the binary data after the header\n\n :return:\n A 2-element tuple of (asn1crypto.keys.PublicKeyInfo,\n asn1crypto.keys.PrivateKeyInfo)"}
{"_id": "q_8081", "text": "Takes a CryptoAPI DSS private key blob and converts it into the ASN.1\n structures for the public and private keys\n\n :param bit_size:\n The integer bit size of the key\n\n :param public_blob:\n A byte string of the binary data after the public key header\n\n :param private_blob:\n A byte string of the binary data after the private key header\n\n :return:\n A 2-element tuple of (asn1crypto.keys.PublicKeyInfo,\n asn1crypto.keys.PrivateKeyInfo)"}
{"_id": "q_8082", "text": "Generates a DSA signature\n\n :param private_key:\n The PrivateKey to generate the signature with\n\n :param data:\n A byte string of the data the signature is for\n\n :param hash_algorithm:\n A unicode string of \"md5\", \"sha1\", \"sha256\", \"sha384\" or \"sha512\"\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the signature"}
{"_id": "q_8083", "text": "Generates an ECDSA signature\n\n :param private_key:\n The PrivateKey to generate the signature with\n\n :param data:\n A byte string of the data the signature is for\n\n :param hash_algorithm:\n A unicode string of \"md5\", \"sha1\", \"sha256\", \"sha384\" or \"sha512\"\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the signature"}
{"_id": "q_8084", "text": "Generates an RSA, DSA or ECDSA signature via CryptoAPI\n\n :param private_key:\n The PrivateKey to generate the signature with\n\n :param data:\n A byte string of the data the signature is for\n\n :param hash_algorithm:\n A unicode string of \"md5\", \"sha1\", \"sha256\", \"sha384\", \"sha512\" or \"raw\"\n\n :param rsa_pss_padding:\n If PSS padding should be used for RSA keys\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the signature"}
{"_id": "q_8085", "text": "Generates an RSA, DSA or ECDSA signature via CNG\n\n :param private_key:\n The PrivateKey to generate the signature with\n\n :param data:\n A byte string of the data the signature is for\n\n :param hash_algorithm:\n A unicode string of \"md5\", \"sha1\", \"sha256\", \"sha384\", \"sha512\" or \"raw\"\n\n :param rsa_pss_padding:\n If PSS padding should be used for RSA keys\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the signature"}
{"_id": "q_8086", "text": "Encrypts a value using an RSA public key via CNG\n\n :param certificate_or_public_key:\n A Certificate or PublicKey instance to encrypt with\n\n :param data:\n A byte string of the data to encrypt\n\n :param rsa_oaep_padding:\n If OAEP padding should be used instead of PKCS#1 v1.5\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the ciphertext"}
{"_id": "q_8087", "text": "Encrypts a value using an RSA private key via CryptoAPI\n\n :param private_key:\n A PrivateKey instance to decrypt with\n\n :param ciphertext:\n A byte string of the data to decrypt\n\n :param rsa_oaep_padding:\n If OAEP padding should be used instead of PKCS#1 v1.5\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the plaintext"}
{"_id": "q_8088", "text": "Blocks until the socket is ready to be read from, or the timeout is hit\n\n :param timeout:\n A float - the period of time to wait for data to be read. None for\n no time limit.\n\n :return:\n A boolean - if data is ready to be read. Will only be False if\n timeout is not None."}
{"_id": "q_8089", "text": "Reads exactly the specified number of bytes from the socket\n\n :param num_bytes:\n An integer - the exact number of bytes to read\n\n :return:\n A byte string of the data that was read"}
{"_id": "q_8090", "text": "Reads data from the socket and writes it to the memory bio\n used by libssl to decrypt the data. Returns the unencrypted\n data for the purpose of debugging handshakes.\n\n :return:\n A byte string of ciphertext from the socket. Used for\n debugging the handshake only."}
{"_id": "q_8091", "text": "Takes ciphertext from the memory bio and writes it to the\n socket.\n\n :return:\n A byte string of ciphertext going to the socket. Used\n for debugging the handshake only."}
{"_id": "q_8092", "text": "Encrypts plaintext via CNG\n\n :param cipher:\n A unicode string of \"aes\", \"des\", \"tripledes_2key\", \"tripledes_3key\",\n \"rc2\", \"rc4\"\n\n :param key:\n The encryption key - a byte string 5-16 bytes long\n\n :param data:\n The plaintext - a byte string\n\n :param iv:\n The initialization vector - a byte string - unused for RC4\n\n :param padding:\n Boolean, if padding should be used - unused for RC4\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the ciphertext"}
{"_id": "q_8093", "text": "Checks if an error occurred, and if so throws an OSError containing the\n last OpenSSL error message\n\n :param result:\n An integer result code - 1 or greater indicates success\n\n :param exception_class:\n The exception class to use for the exception if an error occurred\n\n :raises:\n OSError - when an OpenSSL error occurs"}
{"_id": "q_8094", "text": "Return the certificate and a hash of it\n\n :param cert_pointer:\n A SecCertificateRef\n\n :return:\n A 2-element tuple:\n - [0]: A byte string of the SHA1 hash of the cert\n - [1]: A byte string of the DER-encoded contents of the cert"}
{"_id": "q_8095", "text": "Extracts the last OS error message into a python unicode string\n\n :return:\n A unicode string error message"}
{"_id": "q_8096", "text": "Converts a CFDictionary object into a python dictionary\n\n :param dictionary:\n The CFDictionary to convert\n\n :return:\n A python dict"}
{"_id": "q_8097", "text": "Extracts the function signature and description of a Python function\n\n :param docstring:\n A unicode string of the docstring for the function\n\n :param def_lineno:\n An integer line number that function was defined on\n\n :param code_lines:\n A list of unicode string lines from the source file the function was\n defined in\n\n :param prefix:\n A prefix to prepend to all output lines\n\n :return:\n A 2-element tuple:\n\n - [0] A unicode string of the function signature with a docstring of\n parameter info\n - [1] A markdown snippet of the function description"}
{"_id": "q_8098", "text": "Walks through a CommonMark AST to find section headers that delineate\n content that should be updated by this script\n\n :param md_ast:\n The AST of the markdown document\n\n :param sections:\n A dict to store the start and end lines of a section. The key will be\n a two-element tuple of the section type (\"class\", \"function\",\n \"method\" or \"attribute\") and identifier. The values are a two-element\n tuple of the start and end line number in the markdown document of the\n section.\n\n :param last:\n A dict containing information about the last section header seen.\n Includes the keys \"type_name\", \"identifier\", \"start_line\".\n\n :param last_class:\n A unicode string of the name of the last class found - used when\n processing methods and attributes.\n\n :param total_lines:\n An integer of the total number of lines in the markdown document -\n used to work around a bug in the API of the Python port of CommonMark"}
{"_id": "q_8099", "text": "A callback used to walk the Python AST looking for classes, functions,\n methods and attributes. Generates chunks of markdown markup to replace\n the existing content.\n\n :param node:\n An _ast module node object\n\n :param code_lines:\n A list of unicode strings - the source lines of the Python file\n\n :param sections:\n A dict of markdown document sections that need to be updated. The key\n will be a two-element tuple of the section type (\"class\", \"function\",\n \"method\" or \"attribute\") and identifier. The values are a two-element\n tuple of the start and end line number in the markdown document of the\n section.\n\n :param md_chunks:\n A dict with keys from the sections param and the values being a unicode\n string containing a chunk of markdown markup."}
{"_id": "q_8100", "text": "Tries to find a CA certs bundle in common locations\n\n :raises:\n OSError - when no valid CA certs bundle was found on the filesystem\n\n :return:\n The full filesystem path to a CA certs bundle file"}
{"_id": "q_8101", "text": "Extracts trusted CA certs from the system CA cert bundle\n\n :param cert_callback:\n A callback that is called once for each certificate in the trust store.\n It should accept two parameters: an asn1crypto.x509.Certificate object,\n and a reason. The reason will be None if the certificate is being\n exported, otherwise it will be a unicode string of the reason it won't.\n\n :param callback_only_on_failure:\n A boolean - if the callback should only be called when a certificate is\n not exported.\n\n :return:\n A list of 3-element tuples:\n - 0: a byte string of a DER-encoded certificate\n - 1: a set of unicode strings that are OIDs of purposes to trust the\n certificate for\n - 2: a set of unicode strings that are OIDs of purposes to reject the\n certificate for"}
{"_id": "q_8102", "text": "Parse the TLS handshake from the client to the server to extract information\n including the cipher suite selected, if compression is enabled, the\n session id and if a new or reused session ticket exists.\n\n :param server_handshake_bytes:\n A byte string of the handshake data received from the server\n\n :param client_handshake_bytes:\n A byte string of the handshake data sent to the server\n\n :return:\n A dict with the following keys:\n - \"protocol\": unicode string\n - \"cipher_suite\": unicode string\n - \"compression\": boolean\n - \"session_id\": \"new\", \"reused\" or None\n - \"session_ticket: \"new\", \"reused\" or None"}
{"_id": "q_8103", "text": "Creates a generator returning tuples of information about each record\n in a byte string of data from a TLS client or server. Stops as soon as it\n find a ChangeCipherSpec message since all data from then on is encrypted.\n\n :param data:\n A byte string of TLS records\n\n :return:\n A generator that yields 3-element tuples:\n [0] Byte string of record type\n [1] Byte string of protocol version\n [2] Byte string of record data"}
{"_id": "q_8104", "text": "Creates a generator returning tuples of information about each message in\n a byte string of data from a TLS handshake record\n\n :param data:\n A byte string of a TLS handshake record data\n\n :return:\n A generator that yields 2-element tuples:\n [0] Byte string of message type\n [1] Byte string of message data"}
{"_id": "q_8105", "text": "Creates a generator returning tuples of information about each extension\n from a byte string of extension data contained in a ServerHello or\n ClientHello message\n\n :param data:\n A byte string of extension data from a TLS ServerHello or ClientHello\n message\n\n :return:\n A generator that yields 2-element tuples:\n [0] Byte string of extension type\n [1] Byte string of extension data"}
{"_id": "q_8106", "text": "Raises a TLSVerificationError due to a hostname mismatch\n\n :param certificate:\n An asn1crypto.x509.Certificate object\n\n :raises:\n TLSVerificationError"}
{"_id": "q_8107", "text": "Raises a TLSVerificationError due to certificate being expired, or not yet\n being valid\n\n :param certificate:\n An asn1crypto.x509.Certificate object\n\n :raises:\n TLSVerificationError"}
{"_id": "q_8108", "text": "Looks at the server handshake bytes to try and detect a different protocol\n\n :param server_handshake_bytes:\n A byte string of the handshake data received from the server\n\n :return:\n None, or a unicode string of \"ftp\", \"http\", \"imap\", \"pop3\", \"smtp\""}
{"_id": "q_8109", "text": "Reads everything available from the socket - used for debugging when there\n is a protocol error\n\n :param socket:\n The socket to read from\n\n :return:\n A byte string of the remaining data"}
{"_id": "q_8110", "text": "Takes a set of unicode string OIDs and converts vendor-specific OIDs into\n generic OIDs from RFCs.\n\n - 1.2.840.113635.100.1.3 (apple_ssl) -> 1.3.6.1.5.5.7.3.1 (server_auth)\n - 1.2.840.113635.100.1.3 (apple_ssl) -> 1.3.6.1.5.5.7.3.2 (client_auth)\n - 1.2.840.113635.100.1.8 (apple_smime) -> 1.3.6.1.5.5.7.3.4 (email_protection)\n - 1.2.840.113635.100.1.9 (apple_eap) -> 1.3.6.1.5.5.7.3.13 (eap_over_ppp)\n - 1.2.840.113635.100.1.9 (apple_eap) -> 1.3.6.1.5.5.7.3.14 (eap_over_lan)\n - 1.2.840.113635.100.1.11 (apple_ipsec) -> 1.3.6.1.5.5.7.3.5 (ipsec_end_system)\n - 1.2.840.113635.100.1.11 (apple_ipsec) -> 1.3.6.1.5.5.7.3.6 (ipsec_tunnel)\n - 1.2.840.113635.100.1.11 (apple_ipsec) -> 1.3.6.1.5.5.7.3.7 (ipsec_user)\n - 1.2.840.113635.100.1.11 (apple_ipsec) -> 1.3.6.1.5.5.7.3.17 (ipsec_ike)\n - 1.2.840.113635.100.1.16 (apple_code_signing) -> 1.3.6.1.5.5.7.3.3 (code_signing)\n - 1.2.840.113635.100.1.20 (apple_time_stamping) -> 1.3.6.1.5.5.7.3.8 (time_stamping)\n - 1.3.6.1.4.1.311.10.3.2 (microsoft_time_stamp_signing) -> 1.3.6.1.5.5.7.3.8 (time_stamping)\n\n :param oids:\n A set of unicode strings\n\n :return:\n The original set of OIDs with any mapped OIDs added"}
{"_id": "q_8111", "text": "Checks to see if a cache file needs to be refreshed\n\n :param ca_path:\n A unicode string of the path to the cache file\n\n :param cache_length:\n An integer representing the number of hours the cache is valid for\n\n :return:\n A boolean - True if the cache needs to be updated, False if the file\n is up-to-date"}
{"_id": "q_8112", "text": "Gets value of bits between selected range from memory\n\n :param start: bit address of start of bit of bits\n :param end: bit address of first bit behind bits\n :return: instance of BitsVal (derived from SimBits type) which contains\n copy of selected bits"}
{"_id": "q_8113", "text": "Cast HArray signal or value to signal or value of type Bits"}
{"_id": "q_8114", "text": "Hdl convertible in operator, check if any of items\n in \"iterable\" equals \"sigOrVal\""}
{"_id": "q_8115", "text": "Logical shift left"}
{"_id": "q_8116", "text": "Returns number of bits required to store x-1\n for example x=8 returns 3"}
{"_id": "q_8117", "text": "c-like case of switch statement"}
{"_id": "q_8118", "text": "c-like default of switch statement"}
{"_id": "q_8119", "text": "Register signals from interfaces for Interface or Unit instances"}
{"_id": "q_8120", "text": "This method is called for every value change of any signal."}
{"_id": "q_8121", "text": "Serialize HWProcess instance\n\n :param scope: name scope to prevent name collisions"}
{"_id": "q_8122", "text": "Walk all interfaces on unit and instantiate agent for every interface.\n\n :return: all monitor/driver functions which should be added to simulation\n as processes"}
{"_id": "q_8123", "text": "If interface has associated clk return it otherwise\n try to find clk on parent recursively"}
{"_id": "q_8124", "text": "same as itertools.groupby\n\n :note: This function does not need initial sorting like itertools.groupby\n\n :attention: Order of pairs is not deterministic."}
{"_id": "q_8125", "text": "Flatten nested lists, tuples, generators and maps\n\n :param level: maximum depth of flattening"}
{"_id": "q_8126", "text": "If signal is not driving anything remove it"}
{"_id": "q_8127", "text": "Try to merge procB into procA\n\n :raise IncompatibleStructure: if merge is not possible\n :attention: procA is now result if merge has succeeded\n :return: procA which is now result of merge"}
{"_id": "q_8128", "text": "on writeReqRecieved in monitor mode"}
{"_id": "q_8129", "text": "Convert unit to RTL using specified serializer\n\n :param unitOrCls: unit instance or class, which should be converted\n :param name: name override of top unit (if None, name is derived\n from class name)\n :param serializer: serializer which should be used for RTL conversion\n :param targetPlatform: meta-information about target platform, distributed\n on every unit under _targetPlatform attribute\n before Unit._impl() is called\n :param saveTo: directory where files should be stored\n If None RTL is returned as string.\n :return: if saveTo is None returns RTL string else returns list of file names\n which were created"}
{"_id": "q_8130", "text": "Create new signal in this context\n\n :param clk: clk signal, if specified signal is synthesized\n as SyncSignal\n :param syncRst: synchronous reset signal"}
{"_id": "q_8131", "text": "Get maximum _instId from all assigments in statement"}
{"_id": "q_8132", "text": "get max statement id,\n used for sorting of processes in architecture"}
{"_id": "q_8133", "text": "write data to interface"}
{"_id": "q_8134", "text": "Note that this interface will be master\n\n :return: self"}
{"_id": "q_8135", "text": "load declarations from _declr method\n This function is called first for parent and then for children"}
{"_id": "q_8136", "text": "generate _sig for each interface which has no subinterface\n if already has _sig return it instead\n\n :param context: instance of RtlNetlist where signals should be created\n :param prefix: name prefix for created signals\n :param typeTransform: optional function (type) returns modified type\n for signal"}
{"_id": "q_8137", "text": "Get name in HDL"}
{"_id": "q_8138", "text": "Load all operands and process them by self._evalFn"}
{"_id": "q_8139", "text": "Cast signed-unsigned, to int or bool"}
{"_id": "q_8140", "text": "Reinterpret signal of type Bits to signal of type HStruct"}
{"_id": "q_8141", "text": "Group transaction parts split on words to words\n\n :param transaction: TransTmpl instance whose parts\n should be grouped into words\n :return: generator of tuples (wordIndex, list of transaction parts\n in this word)"}
{"_id": "q_8142", "text": "Pretty print interface"}
{"_id": "q_8143", "text": "Convert transaction template into FrameTmpls\n\n :param transaction: transaction template from which FrameTmpls\n are created\n :param wordWidth: width of data signal in target interface\n where frames will be used\n :param maxFrameLen: maximum length of frame in bits,\n if exceeded another frame will be created\n :param maxPaddingWords: maximum of continual padding words in frame,\n if exceeded frame is split and words are cut off\n :attention: if maxPaddingWords<inf trimPaddingWordsOnEnd\n or trimPaddingWordsOnStart has to be True\n to decide where padding should be trimmed\n :param trimPaddingWordsOnStart: trim padding from start of frame\n at word granularity\n :param trimPaddingWordsOnEnd: trim padding from end of frame\n at word granularity"}
{"_id": "q_8144", "text": "Walk enumerated words in this frame\n\n :attention: not all indexes has to be present, only words\n with items will be generated when not showPadding\n :param showPadding: padding TransParts are also present\n :return: generator of tuples (wordIndex, list of TransParts\n in this word)"}
{"_id": "q_8145", "text": "Pack data into list of BitsVal of specified dataWidth\n\n :param data: dict of values for struct fields {fieldName: value}\n\n :return: list of BitsVal which are representing values of words"}
{"_id": "q_8146", "text": "Clean information about enclosure for outputs and sensitivity\n of this statement"}
{"_id": "q_8147", "text": "Discover sensitivity for list of signals"}
{"_id": "q_8148", "text": "get RtlNetlist context from signals"}
{"_id": "q_8149", "text": "Update signal IO after reduce attempt\n\n :param self_reduced: if True this object was reduced\n :param io_changed: if True IO of this object may have changed\n and has to be updated\n :param result_statements: list of statements which are result\n of reduce operation on this statement"}
{"_id": "q_8150", "text": "After merging statements update IO, sensitivity and context\n\n :attention: rank is not updated"}
{"_id": "q_8151", "text": "Merge statements in list to remove duplicated if-then-else trees\n\n :return: tuple (list of merged statements, rank decrease due to merging)\n :note: rank decrease is sum of ranks of reduced statements\n :attention: statement list has to be mergeable"}
{"_id": "q_8152", "text": "Simplify statements in the list"}
{"_id": "q_8153", "text": "After parent statement becomes event dependent\n propagate event dependency flag to child statements"}
{"_id": "q_8154", "text": "Append statements to this container under conditions specified\n by condSet"}
{"_id": "q_8155", "text": "Disconnect this statement from signals and delete it from RtlNetlist context\n\n :attention: signal endpoints/drivers will be altered\n that means they can not be used for iteration"}
{"_id": "q_8156", "text": "Create register in this unit\n\n :param defVal: default value of this register,\n if this value is specified reset of this component is used\n (unit has to have single interface of class Rst or Rst_n)\n :param clk: optional clock signal specification\n :param rst: optional reset signal specification\n :note: rst/rst_n resolution is done from signal type,\n if it is negated type it is rst_n\n :note: if clk or rst is not specified default signal\n from parent unit will be used"}
{"_id": "q_8157", "text": "Create signal in this unit"}
{"_id": "q_8158", "text": "Walk all simple values in HStruct or HArray"}
{"_id": "q_8159", "text": "Convert signum, no bit manipulation just data are represented\n differently\n\n :param signed: if True value will be signed,\n if False value will be unsigned,\n if None value will be vector without any sign specification"}
{"_id": "q_8160", "text": "register sensitivity for process"}
{"_id": "q_8161", "text": "Evaluate list of values as condition"}
{"_id": "q_8162", "text": "Connect ports of simulation models by name"}
{"_id": "q_8163", "text": "Create value updater for simulation\n\n :param nextVal: instance of Value which will be assigned to signal\n :param invalidate: flag which tells if value has been compromised\n and if it should be invalidated\n :return: function(value) -> tuple(valueHasChangedFlag, nextVal)"}
{"_id": "q_8164", "text": "Create value updater for simulation for value of array type\n\n :param nextVal: instance of Value which will be assigned to signal\n :param indexes: tuple of indexes where value should be updated\n in target array\n\n :return: function(value) -> tuple(valueHasChangedFlag, nextVal)"}
{"_id": "q_8165", "text": "set value of this param"}
{"_id": "q_8166", "text": "Resolve ports of discovered memories"}
{"_id": "q_8167", "text": "Construct value of this type.\n Delegated on value class for this type"}
{"_id": "q_8168", "text": "Cast value or signal of this type to another type of same size.\n\n :param sigOrVal: instance of signal or value to cast\n :param toType: instance of HdlType to cast into"}
{"_id": "q_8169", "text": "Concatenate all signals to one big signal, recursively\n\n :param masterDirEqTo: only signals with this direction are packed\n :param exclude: sequence of signals/interfaces to exclude"}
{"_id": "q_8170", "text": "Return sig and val reduced by & operator or None\n if it is not possible to statically reduce expression"}
{"_id": "q_8171", "text": "Return sig and val reduced by ^ operator or None\n if it is not possible to statically reduce expression"}
{"_id": "q_8172", "text": "Get root of name space"}
{"_id": "q_8173", "text": "Decide if this unit should be serialized or not eventually fix name\n to fit same already serialized unit\n\n :param obj: object to serialize\n :param serializedClasses: dict {unitCls : unitobj}\n :param serializedConfiguredUnits: (unitCls, paramsValues) : unitObj\n where paramsValues are named tuple name:value"}
{"_id": "q_8174", "text": "Serialize HdlType instance"}
{"_id": "q_8175", "text": "Serialize IfContainer instance"}
{"_id": "q_8176", "text": "Get constant name for value\n name of constant is reused if same value was used before"}
{"_id": "q_8177", "text": "Cut off statements which are driver of specified signal"}
{"_id": "q_8178", "text": "Parse HArray type to this transaction template instance\n\n :return: address of its end"}
{"_id": "q_8179", "text": "Only for transactions derived from HArray\n\n :return: width of item in original array"}
{"_id": "q_8180", "text": "Walk fields in instance of TransTmpl\n\n :param offset: optional offset for all children in this TransTmpl\n :param shouldEnterFn: function(transTmpl) which should return\n (shouldEnter, shouldUse) where shouldEnter is a flag that means the\n iterator should look inside of this actual object\n and shouldUse is a flag that means this field should be used\n (=generator should yield it)\n :return: generator of tuples ((startBitAddress, endBitAddress),\n TransTmpl instance)"}
{"_id": "q_8181", "text": "Merge other statement to this statement"}
{"_id": "q_8182", "text": "Cached indent getter function"}
{"_id": "q_8183", "text": "Check if not redefining property on obj"}
{"_id": "q_8184", "text": "Register interface object on interface level object"}
{"_id": "q_8185", "text": "Register array of items on interface level object"}
{"_id": "q_8186", "text": "Returns a first driver if signal has only one driver."}
{"_id": "q_8187", "text": "Recursively statistically evaluate result of this operator"}
{"_id": "q_8188", "text": "Create operator with result signal\n\n :ivar resT: data type of result signal\n :ivar outputs: iterable of signals which are outputs\n from this operator"}
{"_id": "q_8189", "text": "Try connect src to interface of specified name on unit.\n Ignore if interface is not present or if it already has driver."}
{"_id": "q_8190", "text": "Propagate \"clk\" clock signal to all subcomponents"}
{"_id": "q_8191", "text": "Propagate \"clk\" clock and negative reset \"rst_n\" signal\n to all subcomponents"}
{"_id": "q_8192", "text": "Propagate reset \"rst\" signal\n to all subcomponents"}
{"_id": "q_8193", "text": "Iterate over bits in vector\n\n :param sigOrVal: signal or value to iterate over\n :param bitsInOne: number of bits in one part\n :param skipPadding: if true padding is skipped in dense types"}
{"_id": "q_8194", "text": "Always decide not to serialize obj\n\n :param priv: private data for this function\n (first unit of this class)\n :return: tuple (do serialize this object, next priv)"}
{"_id": "q_8195", "text": "Decide to serialize only first obj of it's class\n\n :param priv: private data for this function\n (first object with class == obj.__class__)\n\n :return: tuple (do serialize this object, next priv)\n where priv is private data for this function\n (first object with class == obj.__class__)"}
{"_id": "q_8196", "text": "Decide to serialize only objects with unique parameters and class\n\n :param priv: private data for this function\n ({frozen_params: obj})\n\n :return: tuple (do serialize this object, next priv)"}
{"_id": "q_8197", "text": "Delegate _make_association on items\n\n :note: doc in :func:`~hwt.synthesizer.interfaceLevel.propDeclCollector._make_association`"}
{"_id": "q_8198", "text": "Create a simulation model for unit\n\n :param unit: interface level unit which you want to prepare for simulation\n :param targetPlatform: target platform for this synthesis\n :param dumpModelIn: folder to where put sim model files\n (otherwise sim model will be constructed only in memory)"}
{"_id": "q_8199", "text": "Reconnect model signals to unit to run simulation with simulation model\n but use original unit interfaces for communication\n\n :param synthesisedUnitOrIntf: interface where should be signals\n replaced from signals from modelCls\n :param modelCls: simulation model from where signals\n for synthesisedUnitOrIntf should be taken"}
{"_id": "q_8200", "text": "Syntax sugar\n If outputFile is string try to open it as file\n\n :return: hdl simulator object"}
{"_id": "q_8201", "text": "Process for injecting this callback loop into the simulator"}
{"_id": "q_8202", "text": "Connect internal signal to port item,\n this connection is used by simulator and only output port items\n will be connected"}
{"_id": "q_8203", "text": "Connect signal from internal side of this component to this port"}
{"_id": "q_8204", "text": "Return signal inside unit which has this port"}
{"_id": "q_8205", "text": "Schedule process on actual time with specified priority"}
{"_id": "q_8206", "text": "Add hdl process to execution queue\n\n :param trigger: instance of SimSignal\n :param proc: python generator function representing HDL process"}
{"_id": "q_8207", "text": "Schedule combUpdateDoneEv event to let agents know that current\n delta step is ending and values from combinational logic are stable"}
{"_id": "q_8208", "text": "Apply stashed values to signals"}
{"_id": "q_8209", "text": "This function resolves write conflicts for signal\n\n :param actionSet: set of actions made by process"}
{"_id": "q_8210", "text": "Delta step for combinational processes"}
{"_id": "q_8211", "text": "Read value from signal or interface"}
{"_id": "q_8212", "text": "Write value to signal or interface."}
{"_id": "q_8213", "text": "Convert all ternary operators to IfContainers"}
{"_id": "q_8214", "text": "Create a new version under this service."}
{"_id": "q_8215", "text": "Create a new VCL under this version."}
{"_id": "q_8216", "text": "Converts the column to a dictionary representation accepted\n by the Citrination server.\n\n :return: Dictionary with basic options, plus any column type specific\n options held under the \"options\" key\n :rtype: dict"}
{"_id": "q_8217", "text": "Add a descriptor column.\n\n :param descriptor: A Descriptor instance (e.g., RealDescriptor, InorganicDescriptor, etc.)\n :param role: Specify a role (input, output, latentVariable, or ignore)\n :param group_by_key: Whether or not to group by this key during cross validation"}
{"_id": "q_8218", "text": "Checks to see that the query will not exceed the max query depth\n\n :param returning_query: The PIF system or Dataset query to execute.\n :type returning_query: :class:`PifSystemReturningQuery` or :class: `DatasetReturningQuery`"}
{"_id": "q_8219", "text": "Run each in a list of PIF queries against Citrination.\n\n :param multi_query: :class:`MultiQuery` object to execute.\n :return: :class:`PifMultiSearchResult` object with the results of the query."}
{"_id": "q_8220", "text": "Updates an existing data view from the search template and ml template given\n\n :param id: Identifier for the data view. This is returned from the create method.\n :param configuration: Information to construct the data view from (e.g. descriptors, datasets, etc.)\n :param name: Name of the data view\n :param description: Description for the data view"}
{"_id": "q_8221", "text": "Gets basic information about a view\n\n :param data_view_id: Identifier of the data view\n :return: Metadata about the view as JSON"}
{"_id": "q_8222", "text": "Creates an ml configuration from dataset_ids and extract_as_keys\n\n :param dataset_ids: Array of dataset identifiers to make search template from\n :return: An identifier used to request the status of the builder job (get_ml_configuration_status)"}
{"_id": "q_8223", "text": "Utility function to turn the result object from the configuration builder endpoint into something that\n can be used directly as a configuration.\n\n :param result_blob: Nested dicts representing the possible descriptors\n :param dataset_ids: Array of dataset identifiers to make search template from\n :return: An object suitable to be used as a parameter to data view create"}
{"_id": "q_8224", "text": "After invoking the create_ml_configuration async method, you can use this method to\n check on the status of the builder job.\n\n :param job_id: The identifier returned from create_ml_configuration\n :return: Job status"}
{"_id": "q_8225", "text": "Get the t-SNE projection, including responses and tags.\n\n :param data_view_id: The ID of the data view to retrieve TSNE from\n :type data_view_id: int\n :return: The TSNE analysis\n :rtype: :class:`Tsne`"}
{"_id": "q_8226", "text": "Submits an async prediction request.\n\n :param data_view_id: The id returned from create\n :param candidates: Array of candidates\n :param prediction_source: 'scalar' or 'scalar_from_distribution'\n :param use_prior: True to use prior prediction, otherwise False\n :return: Predict request Id (used to check status)"}
{"_id": "q_8227", "text": "Returns a string indicating the status of the prediction job\n\n :param view_id: The data view id returned from data view create\n :param predict_request_id: The id returned from predict\n :return: Status data, also includes results if state is finished"}
{"_id": "q_8228", "text": "Submits a new experimental design run.\n\n :param data_view_id: The ID number of the data view to which the\n run belongs, as a string\n :type data_view_id: str\n :param num_candidates: The number of candidates to return\n :type num_candidates: int\n :param target: An :class:``Target`` instance representing\n the design run optimization target\n :type target: :class:``Target``\n :param constraints: An array of design constraints (instances of\n objects which extend :class:``BaseConstraint``)\n :type constraints: list of :class:``BaseConstraint``\n :param sampler: The name of the sampler to use during the design run:\n either \"Default\" or \"This view\"\n :type sampler: str\n :return: A :class:`DesignRun` instance containing the UID of the\n new run"}
{"_id": "q_8229", "text": "Retrieves a summary of information for a given data view\n - view id\n - name\n - description\n - columns\n\n :param data_view_id: The ID number of the data view to which the\n run belongs, as a string\n :type data_view_id: str"}
{"_id": "q_8230", "text": "Given a filepath, loads the file as a dictionary from YAML\n\n :param path: The path to a YAML file"}
{"_id": "q_8231", "text": "Extracts credentials from the yaml formatted credential filepath\n passed in. Uses the default profile if the CITRINATION_PROFILE env var\n is not set, otherwise looks for a profile with that name in the credentials file.\n\n :param filepath: The path of the credentials file"}
{"_id": "q_8232", "text": "Given an API key, a site url and a credentials file path, runs through a prioritized list of credential sources to find credentials.\n\n Specifically, this method ranks credential priority as follows:\n 1. Those passed in as the first two parameters to this method\n 2. Those found in the environment as variables\n 3. Those found in the credentials file at the profile specified\n by the profile environment variable\n 4. Those found in the default stanza in the credentials file\n\n :param api_key: A Citrination API Key or None\n :param site: A Citrination site URL or None\n :param cred_file: The path to a credentials file"}
{"_id": "q_8233", "text": "Returns the number of files matching a pattern in a dataset.\n\n :param dataset_id: The ID of the dataset to search for files.\n :type dataset_id: int\n :param glob: A pattern which will be matched against files in the dataset.\n :type glob: str\n :param is_dir: A boolean indicating whether or not the pattern should match against the beginning of paths in the dataset.\n :type is_dir: bool\n :return: The number of matching files\n :rtype: int"}
{"_id": "q_8234", "text": "Retrieves a PIF from a given dataset.\n\n :param dataset_id: The id of the dataset to retrieve PIF from\n :type dataset_id: int\n :param uid: The uid of the PIF to retrieve\n :type uid: str\n :param dataset_version: The dataset version to look for the PIF in. If nothing is supplied, the latest dataset version will be searched\n :type dataset_version: int\n :return: A :class:`Pif` object\n :rtype: :class:`Pif`"}
{"_id": "q_8235", "text": "Retrieves the set of columns from the combination of dataset ids given\n\n :param dataset_ids: The id of the dataset to retrieve columns from\n :type dataset_ids: list of int\n :return: A list of column names from the dataset ids given.\n :rtype: list of str"}
{"_id": "q_8236", "text": "Generates a default search templates from the available columns in the dataset ids given.\n\n :param dataset_ids: The id of the dataset to retrieve files from\n :type dataset_ids: list of int\n :return: A search template based on the columns in the datasets given"}
{"_id": "q_8237", "text": "Returns a new search template, but the new template has only the extract_as_keys given.\n\n :param extract_as_keys: List of extract as keys to keep\n :param search_template: The search template to prune\n :return: New search template with pruned columns"}
{"_id": "q_8238", "text": "Make a copy of a dictionary with all keys converted to camel case. This just calls to_camel_case on each of the keys in the dictionary and returns a new dictionary.\n\n :param obj: Dictionary to convert keys to camel case.\n :return: Dictionary with the input values and all keys in camel case"}
{"_id": "q_8239", "text": "Runs the template against the validation endpoint, returns a message indicating the status of the template\n\n :param ml_template: Template to validate\n :return: OK or error message if validation failed"}
{"_id": "q_8240", "text": "Compute the hamming distance between two hashes"}
{"_id": "q_8241", "text": "Compute the average hash of the given image."}
{"_id": "q_8242", "text": "Set up the Vizio media player platform."}
{"_id": "q_8243", "text": "Retrieve latest state of the device."}
{"_id": "q_8244", "text": "Mute the volume."}
{"_id": "q_8245", "text": "Increase the volume of the device."}
{"_id": "q_8246", "text": "Decrease the volume of the device."}
{"_id": "q_8247", "text": "Restores the starting position."}
{"_id": "q_8248", "text": "Gets the piece at the given square."}
{"_id": "q_8249", "text": "Removes a piece from the given square if present."}
{"_id": "q_8250", "text": "Sets a piece at the given square. An existing piece is replaced."}
{"_id": "q_8251", "text": "Checks if the given move would leave the king in check or\n put it into check."}
{"_id": "q_8252", "text": "Checks if the king of the other side is attacked. Such a position is not\n valid and could only be reached by an illegal move."}
{"_id": "q_8253", "text": "Checks if the current position is a checkmate."}
{"_id": "q_8254", "text": "A game is ended if a position occurs for the fourth time\n on consecutive alternating moves."}
{"_id": "q_8255", "text": "Restores the previous position and returns the last move from the stack."}
{"_id": "q_8256", "text": "Gets an SFEN representation of the current position."}
{"_id": "q_8257", "text": "Parses a move in standard coordinate notation, makes the move and puts\n it on the move stack.\n Raises `ValueError` if neither legal nor a null move.\n Returns the move."}
{"_id": "q_8258", "text": "Returns a Zobrist hash of the current position."}
{"_id": "q_8259", "text": "Gets the symbol `p`, `l`, `n`, etc."}
{"_id": "q_8260", "text": "Creates a piece instance from a piece symbol.\n Raises `ValueError` if the symbol is invalid."}
{"_id": "q_8261", "text": "Parses a USI string.\n Raises `ValueError` if the USI string is invalid."}
{"_id": "q_8262", "text": "Accept a string and parse it into many commits.\n Parse and yield each commit-dictionary.\n This function is a generator."}
{"_id": "q_8263", "text": "Accept a parsed single commit. Some of the named groups\n require further processing, so parse those groups.\n Return a dictionary representing the completely parsed\n commit."}
{"_id": "q_8264", "text": "Adds an organization-course link to the system"}
{"_id": "q_8265", "text": "Course key object validation"}
{"_id": "q_8266", "text": "Inactivates an activated organization as well as any active relationships"}
{"_id": "q_8267", "text": "Activates an inactive organization-course relationship"}
{"_id": "q_8268", "text": "Inactivates an active organization-course relationship"}
{"_id": "q_8269", "text": "Retrieves the set of courses currently linked to the specified organization"}
{"_id": "q_8270", "text": "Retrieves the organizations linked to the specified course"}
{"_id": "q_8271", "text": "Organization dict-to-object serialization"}
{"_id": "q_8272", "text": "Loads config then runs Django's execute_from_command_line"}
{"_id": "q_8273", "text": "Adds argument for config to existing argparser"}
{"_id": "q_8274", "text": "Find config file and set values"}
{"_id": "q_8275", "text": "Dumps initial config in YAML"}
{"_id": "q_8276", "text": "Documents values in markdown"}
{"_id": "q_8277", "text": "converts string to type requested by `cast_as`"}
{"_id": "q_8278", "text": "\\\n loop through all the images and find the ones\n that have the best bytez to even make them a candidate"}
{"_id": "q_8279", "text": "\\\n checks to see if we were able to\n find open link_src on this page"}
{"_id": "q_8280", "text": "\\\n returns the bytes of the image file on disk"}
{"_id": "q_8281", "text": "Create a video object from a video embed"}
{"_id": "q_8282", "text": "adds any siblings that may have a decent score to this node"}
{"_id": "q_8283", "text": "\\\n returns a list of nodes we want to search\n on like paragraphs and tables"}
{"_id": "q_8284", "text": "\\\n remove any divs that looks like non-content,\n clusters of links, or paras with no gusto"}
{"_id": "q_8285", "text": "\\\n Fetch the article title and analyze it"}
{"_id": "q_8286", "text": "if the article has meta canonical link set in the url"}
{"_id": "q_8287", "text": "Close the network connection and perform any other required cleanup\n\n Note:\n Auto closed when using goose as a context manager or when garbage collected"}
{"_id": "q_8288", "text": "Extract the most likely article content from the html page\n\n Args:\n url (str): URL to pull and parse\n raw_html (str): String representation of the HTML page\n Returns:\n Article: Representation of the article contents \\\n including other parsed and extracted metadata"}
{"_id": "q_8289", "text": "Returns a unicode object representing 's'. Treats bytestrings using the\n 'encoding' codec.\n\n If strings_only is True, don't convert (some) non-string-like objects."}
{"_id": "q_8290", "text": "Returns a bytestring version of 's', encoded as specified in 'encoding'.\n\n If strings_only is True, don't convert (some) non-string-like objects."}
{"_id": "q_8291", "text": "Add URLs needed to handle image uploads."}
{"_id": "q_8292", "text": "Handle file uploads from WYSIWYG."}
{"_id": "q_8293", "text": "Render the Quill WYSIWYG."}
{"_id": "q_8294", "text": "Get the form for field."}
{"_id": "q_8295", "text": "Resize an image for metadata tags, and return an absolute URL to it."}
{"_id": "q_8296", "text": "Check if ``mdrun`` finished successfully.\n\n Analyses the output from ``mdrun`` in *logfile*. Right now we are\n simply looking for the line \"Finished mdrun on node\" in the last 1kb of\n the file. (The file must be seekable.)\n\n :Arguments:\n *logfile* : filename\n Logfile produced by ``mdrun``.\n\n :Returns: ``True`` if all ok, ``False`` if not finished, and\n ``None`` if the *logfile* cannot be opened"}
{"_id": "q_8297", "text": "Launch local smpd."}
{"_id": "q_8298", "text": "Find files from a continuation run"}
{"_id": "q_8299", "text": "Run ``gromacs.grompp`` and return the total charge of the system.\n\n :Arguments:\n The arguments are the ones one would pass to :func:`gromacs.grompp`.\n :Returns:\n The total charge as reported\n\n Some things to keep in mind:\n\n * The stdout output of grompp is only shown when an error occurs. For\n debugging, look at the log file or screen output and try running the\n normal :func:`gromacs.grompp` command and analyze the output if the\n debugging messages are not sufficient.\n\n * Check that ``qtot`` is correct. Because the function is based on pattern\n matching of the informative output of :program:`grompp` it can break when\n the output format changes. This version recognizes lines like ::\n\n ' System has non-zero total charge: -4.000001e+00'\n\n using the regular expression\n :regexp:`System has non-zero total charge: *(?P<qtot>[-+]?\\d*\\.\\d+([eE][-+]\\d+)?)`."}
{"_id": "q_8300", "text": "Create a processed topology.\n\n The processed (or portable) topology file does not contain any\n ``#include`` statements and hence can be easily copied around. It\n also makes it possible to re-grompp without having any special itp\n files available.\n\n :Arguments:\n *topol*\n topology file\n *struct*\n coordinate (structure) file\n\n :Keywords:\n *processed*\n name of the new topology file; if not set then it is named like\n *topol* but with ``pp_`` prepended\n *includes*\n path or list of paths of directories in which itp files are\n searched for\n *grompp_kwargs*\n other options for :program:`grompp` such as ``maxwarn=2`` can\n also be supplied\n\n :Returns: full path to the processed topology"}
{"_id": "q_8301", "text": "Primitive text file stream editor.\n\n This function can be used to edit free-form text files such as the\n topology file. By default it does an **in-place edit** of\n *filename*. If *newname* is supplied then the edited\n file is written to *newname*.\n\n :Arguments:\n *filename*\n input text file\n *substitutions*\n substitution commands (see below for format)\n *newname*\n output filename; if ``None`` then *filename* is changed in\n place [``None``]\n\n *substitutions* is a list of triplets; the first two elements are regular\n expression strings, the last is the substitution value. It mimics\n ``sed`` search and replace. The rules for *substitutions*:\n\n .. productionlist::\n substitutions: \"[\" search_replace_tuple, ... \"]\"\n search_replace_tuple: \"(\" line_match_RE \",\" search_RE \",\" replacement \")\"\n line_match_RE: regular expression that selects the line (uses match)\n search_RE: regular expression that is searched in the line\n replacement: replacement string for search_RE\n\n Running :func:`edit_txt` does pretty much what a simple ::\n\n sed /line_match_RE/s/search_RE/replacement/\n\n with repeated substitution commands does.\n\n Special replacement values:\n - ``None``: the rule is ignored\n - ``False``: the line is deleted (even if other rules match)\n\n .. note::\n\n * No sanity checks are performed and the substitutions must be supplied\n exactly as shown.\n * All substitutions are applied to a line; thus the order of the substitution\n commands may matter when one substitution generates a match for a subsequent rule.\n * If replacement is set to ``None`` then the whole expression is ignored and\n whatever is in the template is used. To unset values you must provide an\n empty string or similar.\n * Delete a matching line if replacement=``False``."}
{"_id": "q_8302", "text": "Delete all frames."}
{"_id": "q_8303", "text": "Returns resid in the Gromacs index by transforming with offset."}
{"_id": "q_8304", "text": "Combine individual groups into a single one and write output.\n\n :Keywords:\n name_all : string\n Name of the combined group, ``None`` generates a name. [``None``]\n out_ndx : filename\n Name of the output file that will contain the individual groups\n and the combined group. If ``None`` then default from the class\n constructor is used. [``None``]\n operation : character\n Logical operation that is used to generate the combined group from\n the individual groups: \"|\" (OR) or \"&\" (AND); if set to ``False``\n then no combined group is created and only the individual groups\n are written. [\"|\"]\n defaultgroups : bool\n ``True``: append everything to the default groups produced by\n :program:`make_ndx` (or rather, the groups provided in the ndx file on\n initialization --- if this was ``None`` then these are truly default groups);\n ``False``: only use the generated groups\n\n :Returns:\n ``(combinedgroup_name, output_ndx)``, a tuple showing the\n actual group name and the name of the file; useful when all names are autogenerated.\n\n .. Warning:: The order of the atom numbers in the combined group is\n *not* guaranteed to be the same as the selections on input because\n ``make_ndx`` sorts them ascending. Thus you should be careful when\n using these index files for calculations of angles and dihedrals.\n Use :class:`gromacs.formats.NDX` in these cases.\n\n .. SeeAlso:: :meth:`IndexBuilder.write`."}
{"_id": "q_8305", "text": "Concatenate input index files.\n\n Generate a new index file that contains the default Gromacs index\n groups (if a structure file was defined) and all index groups from the\n input index files.\n\n :Arguments:\n out_ndx : filename\n Name of the output index file; if ``None`` then use the default\n provided to the constructor. [``None``]."}
{"_id": "q_8306", "text": "Process ``make_ndx`` command and return name and temp index file."}
{"_id": "q_8307", "text": "Process a range selection.\n\n (\"S234\", \"A300\", \"CA\") --> selected all CA in this range\n (\"S234\", \"A300\") --> selected all atoms in this range\n\n .. Note:: Ignores residue type, only cares about the resid (but still required)"}
{"_id": "q_8308", "text": "Translate selection for a single res to make_ndx syntax."}
{"_id": "q_8309", "text": "Simple tests to flag problems with a ``make_ndx`` run."}
{"_id": "q_8310", "text": "Write compact xtc that is fitted to the tpr reference structure.\n\n See :func:`gromacs.cbook.trj_fitandcenter` for details and\n description of *kwargs* (including *input*, *input1*, *n* and\n *n1* for how to supply custom index groups). The most important ones are listed\n here but in most cases the defaults should work.\n\n :Keywords:\n *s*\n Input structure (typically the default tpr file but can be set to\n some other file with a different conformation for fitting)\n *n*\n Alternative index file.\n *o*\n Name of the output trajectory.\n *xy* : Boolean\n If ``True`` then only fit in xy-plane (useful for a membrane normal\n to z). The default is ``False``.\n *force*\n - ``True``: overwrite existing trajectories\n - ``False``: throw an IOError exception\n - ``None``: skip existing and log a warning [default]\n\n :Returns:\n dictionary with keys *tpr*, *xtc*, which are the names of the\n new files"}
{"_id": "q_8311", "text": "Write xtc that is fitted to the tpr reference structure.\n\n Runs :class:`gromacs.tools.trjconv` with appropriate arguments\n for fitting. The most important *kwargs* are listed\n here but in most cases the defaults should work.\n\n Note that the default settings do *not* include centering or\n periodic boundary treatment as this often does not work well\n with fitting. It is better to do this as a separate step (see\n :meth:`center_fit` or :func:`gromacs.cbook.trj_fitandcenter`)\n\n :Keywords:\n *s*\n Input structure (typically the default tpr file but can be set to\n some other file with a different conformation for fitting)\n *n*\n Alternative index file.\n *o*\n Name of the output trajectory. A default name is created.\n If e.g. *dt* = 100 is one of the *kwargs* then the default name includes\n \"_dt100ps\".\n *xy* : boolean\n If ``True`` then only do a rot+trans fit in the xy plane\n (good for membrane simulations); default is ``False``.\n *force*\n ``True``: overwrite existing trajectories\n ``False``: throw an IOError exception\n ``None``: skip existing and log a warning [default]\n *fitgroup*\n index group to fit on [\"backbone\"]\n\n .. Note:: If keyword *input* is supplied then it will override\n *fitgroup*; *input* = ``[fitgroup, outgroup]``\n *kwargs*\n kwargs are passed to :func:`~gromacs.cbook.trj_xyfitted`\n\n :Returns:\n dictionary with keys *tpr*, *xtc*, which are the names of the\n new files"}
{"_id": "q_8312", "text": "Create a top level logger.\n\n - The file logger logs everything (including DEBUG).\n - The console logger only logs INFO and above.\n\n Logging to a file and the console.\n \n See http://docs.python.org/library/logging.html?#logging-to-multiple-destinations\n \n The top level logger of the library is named 'gromacs'. Note that\n we are configuring this logger with console output. If the root\n logger also does this then we will get two output lines to the\n console. We'll live with this because this is a simple\n convenience library..."}
{"_id": "q_8313", "text": "Get tool names from all configured groups.\n\n :return: list of tool names"}
{"_id": "q_8314", "text": "Dict of variables that we make available as globals in the module.\n\n Can be used as ::\n\n globals().update(GMXConfigParser.configuration) # update configdir, templatesdir ..."}
{"_id": "q_8315", "text": "Return the textual representation of logging level 'option' or the number.\n\n Note that option is always interpreted as an UPPERCASE string\n and hence integer log levels will not be recognized.\n\n .. SeeAlso: :mod:`logging` and :func:`logging.getLevelName`"}
{"_id": "q_8316", "text": "Use .collection as extension unless provided"}
{"_id": "q_8317", "text": "Scale dihedral angles"}
{"_id": "q_8318", "text": "Scale improper dihedrals"}
{"_id": "q_8319", "text": "Convert string x to the most useful type, i.e. int, float or unicode string.\n\n If x is a quoted string (single or double quotes) then the quotes\n are stripped and the enclosed string returned.\n\n .. Note::\n\n Strings will be returned as Unicode strings (using :func:`to_unicode`).\n\n .. versionchanged:: 0.7.0\n removed the `encoding` keyword argument"}
{"_id": "q_8320", "text": "Return view of the recarray with all int32 cast to int64."}
{"_id": "q_8321", "text": "Parse colour specification"}
{"_id": "q_8322", "text": "Transform arguments and return them as a list suitable for Popen."}
{"_id": "q_8323", "text": "Print help; same as using ``?`` in ``ipython``. long=True also gives call signature."}
{"_id": "q_8324", "text": "Add switches as 'options' with value True to the options dict."}
{"_id": "q_8325", "text": "Extract standard gromacs doc\n\n Extract by running the program and chopping the header to keep from\n 'DESCRIPTION' onwards."}
{"_id": "q_8326", "text": "Convert input to a numerical type if possible.\n\n 1. A non-string object is returned as it is\n 2. Try conversion to int, float, str."}
{"_id": "q_8327", "text": "Remove legend for axes or gca.\n\n See http://osdir.com/ml/python.matplotlib.general/2005-07/msg00285.html"}
{"_id": "q_8328", "text": "If a file exists then continue with the action specified in ``resolve``.\n\n ``resolve`` must be one of\n\n \"ignore\"\n always return ``False``\n \"indicate\"\n return ``True`` if it exists\n \"warn\"\n indicate and issue a :exc:`UserWarning`\n \"exception\"\n raise :exc:`IOError` if it exists\n\n Alternatively, set *force* for the following behaviour (which\n ignores *resolve*):\n\n ``True``\n same as *resolve* = \"ignore\" (will allow overwriting of files)\n ``False``\n same as *resolve* = \"exception\" (will prevent overwriting of files)\n ``None``\n ignored, do whatever *resolve* says"}
{"_id": "q_8329", "text": "Load Gromacs 4.x tools automatically using some heuristic.\n\n Tries to load tools (1) in configured tool groups (2) and fails back to\n automatic detection from ``GMXBIN`` (3) then to a prefilled list.\n\n Also load any extra tool configured in ``~/.gromacswrapper.cfg``\n\n :return: dict mapping tool names to GromacsCommand classes"}
{"_id": "q_8330", "text": "Create a array which masks jumps >= threshold.\n\n Extra points are inserted between two subsequent values whose\n absolute difference differs by more than threshold (default is\n pi).\n\n Other can be a secondary array which is also masked according to\n *a*.\n\n Returns (*a_masked*, *other_masked*) (where *other_masked* can be\n ``None``)"}
{"_id": "q_8331", "text": "Correlation \"time\" of data.\n\n The 0-th column of the data is interpreted as a time and the\n decay of the data is computed from the autocorrelation\n function (using FFT).\n\n .. SeeAlso:: :func:`numkit.timeseries.tcorrel`"}
{"_id": "q_8332", "text": "Set and change the parameters for calculations with correlation functions.\n\n The parameters persist until explicitly changed.\n\n :Keywords:\n *nstep*\n only process every *nstep* data point to speed up the FFT; if\n left empty a default is chosen that produces roughly 25,000 data\n points (or whatever is set in *ncorrel*)\n *ncorrel*\n If no *nstep* is supplied, aim at using *ncorrel* data points for\n the FFT; sets :attr:`XVG.ncorrel` [25000]\n *force*\n force recalculating correlation data even if cached values are\n available\n *kwargs*\n see :func:`numkit.timeseries.tcorrel` for other options\n\n .. SeeAlso: :attr:`XVG.error` for details and references."}
{"_id": "q_8333", "text": "Read and cache the file as a numpy array.\n\n Store every *stride* line of data; if ``None`` then the class default is used.\n\n The array is returned with column-first indexing, i.e. for a data file with\n columns X Y1 Y2 Y3 ... the array a will be a[0] = X, a[1] = Y1, ... ."}
{"_id": "q_8334", "text": "Plot xvg file data.\n\n The first column of the data is always taken as the abscissa\n X. Additional columns are plotted as ordinates Y1, Y2, ...\n\n In the special case that there is only a single column then this column\n is plotted against the index, i.e. (N, Y).\n\n :Keywords:\n *columns* : list\n Select the columns of the data to be plotted; the list\n is used as a numpy.array extended slice. The default is\n to use all columns. Columns are selected *after* a transform.\n *transform* : function\n function ``transform(array) -> array`` which transforms\n the original array; must return a 2D numpy array of\n shape [X, Y1, Y2, ...] where X, Y1, ... are column\n vectors. By default the transformation is the\n identity [``lambda x: x``].\n *maxpoints* : int\n limit the total number of data points; matplotlib has issues processing\n png files with >100,000 points and pdfs take forever to display. Set to\n ``None`` if really all data should be displayed. At the moment we simply\n decimate the data at regular intervals. [10000]\n *method*\n method to decimate the data to *maxpoints*, see :meth:`XVG.decimate`\n for details\n *color*\n single color (used for all plots); sequence of colors\n (will be repeated as necessary); or a matplotlib\n colormap (e.g. \"jet\", see :mod:`matplotlib.cm`). The\n default is to use the :attr:`XVG.default_color_cycle`.\n *ax*\n plot into given axes or create new one if ``None`` [``None``]\n *kwargs*\n All other keyword arguments are passed on to :func:`matplotlib.pyplot.plot`.\n\n :Returns:\n *ax*\n axes instance"}
{"_id": "q_8335", "text": "Find vdwradii.dat and add special entries for lipids.\n\n See :data:`gromacs.setup.vdw_lipid_resnames` for lipid\n resnames. Add more if necessary."}
{"_id": "q_8336", "text": "Put protein into box, add water, add counter-ions.\n\n Currently this really only supports solutes in water. If you need\n to embed a protein in a membrane then you will require more\n sophisticated approaches.\n\n However, you *can* supply a protein already inserted in a\n bilayer. In this case you will probably want to set *distance* =\n ``None`` and also enable *with_membrane* = ``True`` (using extra\n big vdw radii for typical lipids).\n\n .. Note:: The defaults are suitable for solvating a globular\n protein in a fairly tight (increase *distance*!) dodecahedral\n box.\n\n :Arguments:\n *struct* : filename\n pdb or gro input structure\n *top* : filename\n Gromacs topology\n *distance* : float\n When solvating with water, make the box big enough so that\n at least *distance* nm water are between the solute *struct*\n and the box boundary.\n Set *boxtype* to ``None`` in order to use a box size in the input\n file (gro or pdb).\n *boxtype* or *bt*: string\n Any of the box types supported by :class:`~gromacs.tools.Editconf`\n (triclinic, cubic, dodecahedron, octahedron). Set the box dimensions\n either with *distance* or the *box* and *angle* keywords.\n\n If set to ``None`` it will ignore *distance* and use the box\n inside the *struct* file.\n\n *bt* overrides the value of *boxtype*.\n *box*\n List of three box lengths [A,B,C] that are used by :class:`~gromacs.tools.Editconf`\n in combination with *boxtype* (``bt`` in :program:`editconf`) and *angles*.\n Setting *box* overrides *distance*.\n *angles*\n List of three angles (only necessary for triclinic boxes).\n *concentration* : float\n Concentration of the free ions in mol/l. Note that counter\n ions are added in excess of this concentration.\n *cation* and *anion* : string\n Molecule names of the ions. This depends on the chosen force field.\n *water* : string\n Name of the water model; one of \"spc\", \"spce\", \"tip3p\",\n \"tip4p\". This should be appropriate for the chosen force\n field. If an alternative solvent is required, simply supply the path to a box\n with solvent molecules (used by :func:`~gromacs.genbox`'s *cs* argument)\n and also supply the molecule name via *solvent_name*.\n *solvent_name*\n Name of the molecules that make up the solvent (as set in the itp/top).\n Typically needs to be changed when using non-standard/non-water solvents.\n [\"SOL\"]\n *with_membrane* : bool\n ``True``: use special ``vdwradii.dat`` with 0.1 nm-increased radii on\n lipids. Default is ``False``.\n *ndx* : filename\n How to name the index file that is produced by this function.\n *mainselection* : string\n A string that is fed to :class:`~gromacs.tools.Make_ndx` and\n which should select the solute.\n *dirname* : directory name\n Name of the directory in which all files for the solvation stage are stored.\n *includes*\n List of additional directories to add to the mdp include path\n *kwargs*\n Additional arguments are passed on to\n :class:`~gromacs.tools.Editconf` or are interpreted as parameters to be\n changed in the mdp file."}
{"_id": "q_8337", "text": "Run multiple energy minimizations one after each other.\n\n :Keywords:\n *integrators*\n list of integrators (from 'l-bfgs', 'cg', 'steep')\n [['bfgs', 'steep']]\n *nsteps*\n list of maximum number of steps; one for each integrator in\n in the *integrators* list [[100,1000]]\n *kwargs*\n mostly passed to :func:`gromacs.setup.energy_minimize`\n\n :Returns: dictionary with paths to final structure ('struct') and\n other files\n\n :Example:\n Conduct three minimizations:\n 1. low memory Broyden-Goldfarb-Fletcher-Shannon (BFGS) for 30 steps\n 2. steepest descent for 200 steps\n 3. finish with BFGS for another 30 steps\n We also do a multi-processor minimization when possible (i.e. for steep\n (and conjugate gradient) by using a :class:`gromacs.run.MDrunner` class\n for a :program:`mdrun` executable compiled for OpenMP in 64 bit (see\n :mod:`gromacs.run` for details)::\n\n import gromacs.run\n gromacs.setup.em_schedule(struct='solvate/ionized.gro',\n mdrunner=gromacs.run.MDrunnerOpenMP64,\n integrators=['l-bfgs', 'steep', 'l-bfgs'],\n nsteps=[50,200, 50])\n\n .. Note:: You might have to prepare the mdp file carefully because at the\n moment one can only modify the *nsteps* parameter on a\n per-minimizer basis."}
{"_id": "q_8338", "text": "Set up MD with position restraints.\n\n Additional itp files should be in the same directory as the top file.\n\n Many of the keyword arguments below already have sensible values. Note that\n setting *mainselection* = ``None`` will disable many of the automated\n choices and is often recommended when using your own mdp file.\n\n :Keywords:\n *dirname*\n set up under directory dirname [MD_POSRES]\n *struct*\n input structure (gro, pdb, ...) [em/em.pdb]\n *top*\n topology file [top/system.top]\n *mdp*\n mdp file (or use the template) [templates/md.mdp]\n *ndx*\n index file (supply when using a custom mdp)\n *includes*\n additional directories to search for itp files\n *mainselection*\n :program:`make_ndx` selection to select main group [\"Protein\"]\n (If ``None`` then no canonical index file is generated and\n it is the user's responsibility to set *tc_grps*,\n *tau_t*, and *ref_t* as keyword arguments, or provide the mdp template\n with all parameter pre-set in *mdp* and probably also your own *ndx*\n index file.)\n *deffnm*\n default filename for Gromacs run [md]\n *runtime*\n total length of the simulation in ps [1000]\n *dt*\n integration time step in ps [0.002]\n *qscript*\n script to submit to the queuing system; by default\n uses the template :data:`gromacs.config.qscript_template`, which can\n be manually set to another template from :data:`gromacs.config.templates`;\n can also be a list of template names.\n *qname*\n name to be used for the job in the queuing system [PR_GMX]\n *mdrun_opts*\n option flags for the :program:`mdrun` command in the queuing system\n scripts such as \"-stepout 100\". [\"\"]\n *kwargs*\n remaining key/value pairs that should be changed in the template mdp\n file, e.g. ``nstxtcout=250, nstfout=250`` or command line options for\n :program:`grompp` such as ``maxwarn=1``.\n\n In particular one can also set **define** and activate\n whichever position restraints have been coded into the itp\n and top file. For instance one could have\n\n *define* = \"-DPOSRES_MainChain -DPOSRES_LIGAND\"\n\n if these preprocessor constructs exist. Note that there\n **must not be any space between \"-D\" and the value.**\n\n By default *define* is set to \"-DPOSRES\".\n\n :Returns: a dict that can be fed into :func:`gromacs.setup.MD`\n (but check, just in case, especially if you want to\n change the ``define`` parameter in the mdp file)\n\n .. Note:: The output frequency is drastically reduced for position\n restraint runs by default. Set the corresponding ``nst*``\n variables if you require more output. The `pressure coupling`_\n option *refcoord_scaling* is set to \"com\" by default (but can\n be changed via *kwargs*) and the pressure coupling\n algorithm itself is set to *Pcoupl* = \"Berendsen\" to\n run a stable simulation.\n\n .. _`pressure coupling`: http://manual.gromacs.org/online/mdp_opt.html#pc"}
{"_id": "q_8339", "text": "Set up equilibrium MD.\n\n Additional itp files should be in the same directory as the top file.\n\n Many of the keyword arguments below already have sensible values. Note that\n setting *mainselection* = ``None`` will disable many of the automated\n choices and is often recommended when using your own mdp file.\n\n :Keywords:\n *dirname*\n set up under directory dirname [MD]\n *struct*\n input structure (gro, pdb, ...) [MD_POSRES/md_posres.pdb]\n *top*\n topology file [top/system.top]\n *mdp*\n mdp file (or use the template) [templates/md.mdp]\n *ndx*\n index file (supply when using a custom mdp)\n *includes*\n additional directories to search for itp files\n *mainselection*\n ``make_ndx`` selection to select main group [\"Protein\"]\n (If ``None`` then no canonical index file is generated and\n it is the user's responsibility to set *tc_grps*,\n *tau_t*, and *ref_t* as keyword arguments, or provide the mdp template\n with all parameter pre-set in *mdp* and probably also your own *ndx*\n index file.)\n *deffnm*\n default filename for Gromacs run [md]\n *runtime*\n total length of the simulation in ps [1000]\n *dt*\n integration time step in ps [0.002]\n *qscript*\n script to submit to the queuing system; by default\n uses the template :data:`gromacs.config.qscript_template`, which can\n be manually set to another template from :data:`gromacs.config.templates`;\n can also be a list of template names.\n *qname*\n name to be used for the job in the queuing system [MD_GMX]\n *mdrun_opts*\n option flags for the :program:`mdrun` command in the queuing system\n scripts such as \"-stepout 100 -dgdl\". [\"\"]\n *kwargs*\n remaining key/value pairs that should be changed in the template mdp\n file, e.g. ``nstxtcout=250, nstfout=250`` or command line options for\n :program:`grompp` such as ``maxwarn=1``.\n\n :Returns: a dict that can be fed into :func:`gromacs.setup.MD`\n (but check, just in case, especially if you want to\n change the *define* parameter in the mdp file)"}
{"_id": "q_8340", "text": "Write scripts for queuing systems.\n\n\n This sets up queuing system run scripts with a simple search and replace in\n templates. See :func:`gromacs.cbook.edit_txt` for details. Shell scripts\n are made executable.\n\n :Arguments:\n *templates*\n Template file or list of template files. The \"files\" can also be names\n or symbolic names for templates in the templates directory. See\n :mod:`gromacs.config` for details and rules for writing templates.\n *prefix*\n Prefix for the final run script filename; by default the filename will be\n the same as the template. [None]\n *dirname*\n Directory in which to place the submit scripts. [.]\n *deffnm*\n Default filename prefix for :program:`mdrun` ``-deffnm`` [md]\n *jobname*\n Name of the job in the queuing system. [MD]\n *budget*\n Which budget to book the runtime on [None]\n *startdir*\n Explicit path on the remote system (for run scripts that need to `cd`\n into this directory at the beginning of execution) [None]\n *mdrun_opts*\n String of additional options for :program:`mdrun`.\n *walltime*\n Maximum runtime of the job in hours. [1]\n *npme*\n number of PME nodes\n *jobarray_string*\n Multi-line string that is spliced in for job array functionality\n (see :func:`gromacs.qsub.generate_submit_array`; do not use manually)\n *kwargs*\n all other kwargs are ignored\n\n :Returns: list of generated run scripts"}
{"_id": "q_8341", "text": "Primitive queuing system detection; only looks at suffix at the moment."}
{"_id": "q_8342", "text": "Returns all dates from first to last included."}
{"_id": "q_8343", "text": "Fill missing rates of a currency.\n\n This is done by linear interpolation of the two closest available rates.\n\n :param str currency: The currency to fill missing rates for."}
{"_id": "q_8344", "text": "Convert amount from a currency to another one.\n\n :param float amount: The amount of `currency` to convert.\n :param str currency: The currency to convert from.\n :param str new_currency: The currency to convert to.\n :param datetime.date date: Use the conversion rate of this date. If this\n is not given, the most recent rate is used.\n\n :return: The value of `amount` in `new_currency`.\n :rtype: float\n\n >>> from datetime import date\n >>> c = CurrencyConverter()\n >>> c.convert(100, 'EUR', 'USD', date=date(2014, 3, 28))\n 137.5...\n >>> c.convert(100, 'USD', date=date(2014, 3, 28))\n 72.67...\n >>> c.convert(100, 'BGN', date=date(2010, 11, 21))\n Traceback (most recent call last):\n RateNotFoundError: BGN has no rate for 2010-11-21"}
{"_id": "q_8345", "text": "Animate given frame for set number of iterations.\n\n Parameters\n ----------\n frames : list\n Frames for animating\n interval : float\n Interval between two frames\n name : str\n Name of animation\n iterations : int, optional\n Number of loops for animations"}
{"_id": "q_8346", "text": "Compute the total number of unmasked regular pixels in a masks."}
{"_id": "q_8347", "text": "Compute an annular masks from an input inner and outer masks radius and regular shape."}
{"_id": "q_8348", "text": "Compute a blurring masks from an input masks and psf shape.\n\n The blurring masks corresponds to all pixels which are outside of the masks but will have a fraction of their \\\n light blur into the masked region due to PSF convolution."}
{"_id": "q_8349", "text": "Compute a 1D array listing all edge pixel indexes in the masks. An edge pixel is a pixel which is not fully \\\n surrounded by False masks values i.e. it is on an edge."}
{"_id": "q_8350", "text": "Output the figure, either as an image on the screen or to the hard-disk as a .png or .fits file.\n\n Parameters\n -----------\n array : ndarray\n The 2D array of image to be output, required for outputting the image as a fits file.\n as_subplot : bool\n Whether the figure is part of subplot, in which case the figure is not output so that the entire subplot can \\\n be output instead using the *output_subplot_array* function.\n output_path : str\n The path on the hard-disk where the figure is output.\n output_filename : str\n The filename of the figure that is output.\n output_format : str\n The format the figure is output:\n 'show' - display on computer screen.\n 'png' - output to hard-disk as a png.\n 'fits' - output to hard-disk as a fits file."}
{"_id": "q_8351", "text": "Output a figure which consists of a set of subplots, either as an image on the screen or to the hard-disk as a \\\n .png file.\n\n Parameters\n -----------\n output_path : str\n The path on the hard-disk where the figure is output.\n output_filename : str\n The filename of the figure that is output.\n output_format : str\n The format the figure is output:\n 'show' - display on computer screen.\n 'png' - output to hard-disk as a png."}
{"_id": "q_8352", "text": "Generate an image psf shape tag, to customize phase names based on size of the image PSF that the original PSF \\\n is trimmed to for faster run times.\n\n This changes the phase name 'phase_name' as follows:\n\n image_psf_shape = 1 -> phase_name\n image_psf_shape = 2 -> phase_name_image_psf_shape_2\n image_psf_shape = 2 -> phase_name_image_psf_shape_2"}
{"_id": "q_8353", "text": "Generate an inversion psf shape tag, to customize phase names based on size of the inversion PSF that the \\\n original PSF is trimmed to for faster run times.\n\n This changes the phase name 'phase_name' as follows:\n\n inversion_psf_shape = 1 -> phase_name\n inversion_psf_shape = 2 -> phase_name_inversion_psf_shape_2\n inversion_psf_shape = 2 -> phase_name_inversion_psf_shape_2"}
{"_id": "q_8354", "text": "This function determines whether the tracer should compute the deflections at the next plane.\n\n This is True if there is another plane after this plane, else it is False..\n\n Parameters\n -----------\n plane_index : int\n The index of the plane we are deciding if we should compute its deflections.\n total_planes : int\n The total number of planes."}
{"_id": "q_8355", "text": "Given a plane and scaling factor, compute a set of scaled deflections.\n\n Parameters\n -----------\n plane : plane.Plane\n The plane whose deflection stack is scaled.\n scaling_factor : float\n The factor the deflection angles are scaled by, which is typically the scaling factor between redshifts for \\\n multi-plane lensing."}
{"_id": "q_8356", "text": "From the pixel-neighbors, setup the regularization matrix using the constant regularization scheme.\n\n Parameters\n ----------\n coefficients : tuple\n The regularization coefficients which controls the degree of smoothing of the inversion reconstruction.\n pixel_neighbors : ndarray\n An array of length (total_pixels) which provides the index of all neighbors of every pixel in \\\n the Voronoi grid (entries of -1 correspond to no neighbor).\n pixel_neighbors_size : ndarray\n An array of length (total_pixels) which gives the number of neighbors of every pixel in the \\\n Voronoi grid."}
{"_id": "q_8357", "text": "Setup the colorbar of the figure, specifically its ticksize and the size it appears relative to the figure.\n\n Parameters\n -----------\n cb_ticksize : int\n The size of the tick labels on the colorbar.\n cb_fraction : float\n The fraction of the figure that the colorbar takes up, which resizes the colorbar relative to the figure.\n cb_pad : float\n Pads the color bar in the figure, which resizes the colorbar relative to the figure.\n cb_tick_values : [float]\n Manually specified values of where the colorbar tick labels appear on the colorbar.\n cb_tick_labels : [float]\n Manually specified labels of the color bar tick labels, which appear where specified by cb_tick_values."}
{"_id": "q_8358", "text": "Plot the mask of the array on the figure.\n\n Parameters\n -----------\n mask : ndarray of data.array.mask.Mask\n The mask applied to the array, the edge of which is plotted as a set of points over the plotted array.\n units : str\n The units of the y / x axis of the plots, in arc-seconds ('arcsec') or kiloparsecs ('kpc').\n kpc_per_arcsec : float or None\n The conversion factor between arc-seconds and kiloparsecs, required to plot the units in kpc.\n pointsize : int\n The size of the points plotted to show the mask."}
{"_id": "q_8359", "text": "Plot the borders of the mask or the array on the figure.\n\n Parameters\n -----------\n mask : ndarray of data.array.mask.Mask\n The mask applied to the array, the edge of which is plotted as a set of points over the plotted array.\n should_plot_border : bool\n If a mask is supplied, its border pixels (e.g. the exterior edge) are plotted if this is *True*.\n units : str\n The units of the y / x axis of the plots, in arc-seconds ('arcsec') or kiloparsecs ('kpc').\n kpc_per_arcsec : float or None\n The conversion factor between arc-seconds and kiloparsecs, required to plot the units in kpc.\n border_pointsize : int\n The size of the points plotted to show the borders."}
{"_id": "q_8360", "text": "Plot a grid of points over the array of data on the figure.\n\n Parameters\n -----------\n grid_arcsec : ndarray or data.array.grids.RegularGrid\n A grid of (y,x) coordinates in arc-seconds which may be plotted over the array.\n array : data.array.scaled_array.ScaledArray\n The 2D array of data which is plotted.\n units : str\n The units of the y / x axis of the plots, in arc-seconds ('arcsec') or kiloparsecs ('kpc').\n kpc_per_arcsec : float or None\n The conversion factor between arc-seconds and kiloparsecs, required to plot the units in kpc.\n grid_pointsize : int\n The size of the points plotted to show the grid."}
{"_id": "q_8361", "text": "The mapping matrix is a matrix representing the mapping between every unmasked pixel of a grid and \\\n the pixels of a pixelization. Non-zero entries signify a mapping, whereas zeros signify no mapping.\n\n For example, if the regular grid has 5 pixels and the pixelization 3 pixels, with the following mappings:\n\n regular pixel 0 -> pixelization pixel 0\n regular pixel 1 -> pixelization pixel 0\n regular pixel 2 -> pixelization pixel 1\n regular pixel 3 -> pixelization pixel 1\n regular pixel 4 -> pixelization pixel 2\n\n The mapping matrix (which is of dimensions regular_pixels x pixelization_pixels) would appear as follows:\n\n [1, 0, 0] [0->0]\n [1, 0, 0] [1->0]\n [0, 1, 0] [2->1]\n [0, 1, 0] [3->1]\n [0, 0, 1] [4->2]\n\n The mapping matrix is in fact built using the sub-grid of the grid-stack, whereby each regular-pixel is \\\n divided into a regular grid of sub-pixels which are all paired to pixels in the pixelization. The entries \\\n in the mapping matrix now become fractional values dependent on the sub-grid size. For example, for a 2x2 \\\n sub-grid in each pixel (which means the fraction value is 1.0/(2.0^2) = 0.25), if we have the following mappings:\n\n regular pixel 0 -> sub pixel 0 -> pixelization pixel 0\n regular pixel 0 -> sub pixel 1 -> pixelization pixel 1\n regular pixel 0 -> sub pixel 2 -> pixelization pixel 1\n regular pixel 0 -> sub pixel 3 -> pixelization pixel 1\n regular pixel 1 -> sub pixel 0 -> pixelization pixel 1\n regular pixel 1 -> sub pixel 1 -> pixelization pixel 1\n regular pixel 1 -> sub pixel 2 -> pixelization pixel 1\n regular pixel 1 -> sub pixel 3 -> pixelization pixel 1\n regular pixel 2 -> sub pixel 0 -> pixelization pixel 2\n regular pixel 2 -> sub pixel 1 -> pixelization pixel 2\n regular pixel 2 -> sub pixel 2 -> pixelization pixel 3\n regular pixel 2 -> sub pixel 3 -> pixelization pixel 3\n\n The mapping matrix (which is still of dimensions regular_pixels x source_pixels) would appear as follows:\n\n [0.25, 0.75, 0.0, 0.0] [1 sub-pixel maps to pixel 0, 3 map to pixel 1]\n [ 0.0, 1.0, 0.0, 0.0] [All sub-pixels map to pixel 1]\n [ 0.0, 0.0, 0.5, 0.5] [2 sub-pixels map to pixel 2, 2 map to pixel 3]"}
{"_id": "q_8362", "text": "Compute the mappings between a pixelization's pixels and the unmasked regular-grid pixels. These mappings \\\n are determined after the regular-grid is used to determine the pixelization.\n\n The pixelization's pixels map to different number of regular-grid pixels, thus a list of lists is used to \\\n represent these mappings"}
{"_id": "q_8363", "text": "The 1D index mappings between the regular pixels and Voronoi pixelization pixels."}
{"_id": "q_8364", "text": "The 1D index mappings between the sub pixels and Voronoi pixelization pixels."}
{"_id": "q_8365", "text": "Generate a two-dimensional poisson noise_maps-mappers from an image.\n\n Values are computed from a Poisson distribution using the image's input values in units of counts.\n\n Parameters\n ----------\n image : ndarray\n The 2D image, whose values in counts are used to draw Poisson noise_maps values.\n exposure_time_map : Union(ndarray, int)\n 2D array of the exposure time in each pixel used to convert to / from counts and electrons per second.\n seed : int\n The seed of the random number generator, used for the random noise_maps maps.\n\n Returns\n -------\n poisson_noise_map: ndarray\n An array describing simulated poisson noise_maps"}
{"_id": "q_8366", "text": "Factory for loading the background noise-map from a .fits file.\n\n This factory also includes a number of routines for converting the background noise-map from other units (e.g. \\\n a weight map).\n\n Parameters\n ----------\n background_noise_map_path : str\n The path to the background_noise_map .fits file containing the background noise-map \\\n (e.g. '/path/to/background_noise_map.fits')\n background_noise_map_hdu : int\n The hdu the background_noise_map is contained in the .fits file specified by *background_noise_map_path*.\n pixel_scale : float\n The size of each pixel in arc seconds.\n convert_background_noise_map_from_weight_map : bool\n If True, the background noise-map loaded from the .fits file is converted from a weight-map to a noise-map (see \\\n *NoiseMap.from_weight_map).\n convert_background_noise_map_from_inverse_noise_map : bool\n If True, the background noise-map loaded from the .fits file is converted from an inverse noise-map to a \\\n noise-map (see *NoiseMap.from_inverse_noise_map)."}
{"_id": "q_8367", "text": "Factory for loading the psf from a .fits file.\n\n Parameters\n ----------\n psf_path : str\n The path to the psf .fits file containing the psf (e.g. '/path/to/psf.fits')\n psf_hdu : int\n The hdu the psf is contained in the .fits file specified by *psf_path*.\n pixel_scale : float\n The size of each pixel in arc seconds.\n renormalize : bool\n If True, the PSF is renormalized such that all elements sum to 1.0."}
{"_id": "q_8368", "text": "Factory for loading the exposure time map from a .fits file.\n\n This factory also includes a number of routines for computing the exposure-time map from other unblurred_image_1d \\\n (e.g. the background noise-map).\n\n Parameters\n ----------\n exposure_time_map_path : str\n The path to the exposure_time_map .fits file containing the exposure time map \\\n (e.g. '/path/to/exposure_time_map.fits')\n exposure_time_map_hdu : int\n The hdu the exposure_time_map is contained in the .fits file specified by *exposure_time_map_path*.\n pixel_scale : float\n The size of each pixel in arc seconds.\n shape : (int, int)\n The shape of the image, required if a single value is used to calculate the exposure time map.\n exposure_time : float\n The exposure-time used to compute the exposure-time map if only a single value is used.\n exposure_time_map_from_inverse_noise_map : bool\n If True, the exposure-time map is computed from the background noise_map map \\\n (see *ExposureTimeMap.from_background_noise_map*)\n inverse_noise_map : ndarray\n The background noise-map, which the Poisson noise-map can be calculated using."}
{"_id": "q_8369", "text": "Factory for loading the background sky from a .fits file.\n\n Parameters\n ----------\n background_sky_map_path : str\n The path to the background_sky_map .fits file containing the background sky map \\\n (e.g. '/path/to/background_sky_map.fits').\n background_sky_map_hdu : int\n The hdu the background_sky_map is contained in the .fits file specified by *background_sky_map_path*.\n pixel_scale : float\n The size of each pixel in arc seconds."}
{"_id": "q_8370", "text": "The estimated absolute_signal-to-noise_maps mappers of the image."}
{"_id": "q_8371", "text": "Simulate the PSF as an elliptical Gaussian profile."}
{"_id": "q_8372", "text": "Loads a PSF from fits and renormalizes it\n\n Parameters\n ----------\n pixel_scale\n file_path: String\n The path to the file containing the PSF\n hdu : int\n The HDU the PSF is stored in the .fits file.\n\n Returns\n -------\n psf: PSF\n A renormalized PSF instance"}
{"_id": "q_8373", "text": "Loads the PSF from a .fits file.\n\n Parameters\n ----------\n pixel_scale\n file_path: String\n The path to the file containing the PSF\n hdu : int\n The HDU the PSF is stored in the .fits file."}
{"_id": "q_8374", "text": "Renormalize the PSF such that its data_vector values sum to unity."}
{"_id": "q_8375", "text": "Convolve an array with this PSF\n\n Parameters\n ----------\n image : ndarray\n An array representing the image the PSF is convolved with.\n\n Returns\n -------\n convolved_image : ndarray\n An array representing the image after convolution.\n\n Raises\n ------\n KernelException if either PSF psf dimension is odd"}
{"_id": "q_8376", "text": "Compute the Voronoi grid of the pixelization, using the pixel centers.\n\n Parameters\n ----------\n pixel_centers : ndarray\n The (y,x) centre of every Voronoi pixel."}
{"_id": "q_8377", "text": "Compute the neighbors of every Voronoi pixel as an ndarray of the pixel index's each pixel shares a \\\n vertex with.\n\n The ridge points of the Voronoi grid are used to derive this.\n\n Parameters\n ----------\n ridge_points : scipy.spatial.Voronoi.ridge_points\n Each Voronoi-ridge (two indexes representing a pixel mapping_matrix)."}
{"_id": "q_8378", "text": "Set the x and y labels of the figure, and set the fontsize of those labels.\n\n The x and y labels are always the distance scales, thus the labels are either arc-seconds or kpc and depend on the \\\n units the figure is plotted in.\n\n Parameters\n -----------\n units : str\n The units of the y / x axis of the plots, in arc-seconds ('arcsec') or kiloparsecs ('kpc').\n kpc_per_arcsec : float\n The conversion factor between arc-seconds and kiloparsecs, required to plot the units in kpc.\n xlabelsize : int\n The fontsize of the x axes label.\n ylabelsize : int\n The fontsize of the y axes label.\n xyticksize : int\n The font size of the x and y ticks on the figure axes."}
{"_id": "q_8379", "text": "Decorate a profile method that accepts a coordinate grid and returns a data grid.\n\n If an interpolator attribute is associated with the input grid then that interpolator is used to down sample the\n coordinate grid prior to calling the function and up sample the result of the function.\n\n If no interpolator attribute is associated with the input grid then the function is called as normal.\n\n Parameters\n ----------\n func\n Some method that accepts a grid\n\n Returns\n -------\n decorated_function\n The function with optional interpolation"}
{"_id": "q_8380", "text": "For a padded grid-stack and psf, compute an unmasked blurred image from an unmasked unblurred image.\n\n This relies on using the lens data's padded-grid, which is a grid of (y,x) coordinates which extends over the \\\n entire image as opposed to just the masked region.\n\n Parameters\n ----------\n psf : ccd.PSF\n The PSF of the image used for convolution.\n unmasked_image_1d : ndarray\n The 1D unmasked image which is blurred."}
{"_id": "q_8381", "text": "Setup a grid-stack of grid_stack from a 2D array shape, a pixel scale and a sub-grid size.\n \n This grid corresponds to a fully unmasked 2D array.\n\n Parameters\n -----------\n shape : (int, int)\n The 2D shape of the array, where all pixels are used to generate the grid-stack's grid_stack.\n pixel_scale : float\n The size of each pixel in arc seconds. \n sub_grid_size : int\n The size of a sub-pixel's sub-grid (sub_grid_size x sub_grid_size)."}
{"_id": "q_8382", "text": "Setup a grid-stack of masked grid_stack from a mask, sub-grid size and psf-shape.\n\n Parameters\n -----------\n mask : Mask\n The mask whose masked pixels the grid-stack are setup using.\n sub_grid_size : int\n The size of a sub-pixels sub-grid (sub_grid_size x sub_grid_size).\n psf_shape : (int, int)\n The shape of the PSF used in the analysis, which defines the mask's blurring-region."}
{"_id": "q_8383", "text": "Compute the xticks labels of this grid, used for plotting the x-axis ticks when visualizing a regular"}
{"_id": "q_8384", "text": "For an input sub-gridded array, map its hyper-values from the sub-gridded values to a 1D regular grid of \\\n values by summing each set of sub-pixel values and dividing by the total number of sub-pixels.\n\n Parameters\n -----------\n sub_array_1d : ndarray\n A 1D sub-gridded array of values (e.g. the intensities, surface-densities, potential) which is mapped to\n a 1d regular array."}
{"_id": "q_8385", "text": "The 1D index mappings between the regular-grid and masked sparse-grid."}
{"_id": "q_8386", "text": "Compute a 2D padded blurred image from a 1D padded image.\n\n Parameters\n ----------\n padded_image_1d : ndarray\n A 1D unmasked image which is blurred with the PSF.\n psf : ndarray\n An array describing the PSF kernel of the image."}
{"_id": "q_8387", "text": "Map a padded 1D array of values to its padded 2D array.\n\n Parameters\n -----------\n padded_array_1d : ndarray\n A 1D array of values which were computed using the *PaddedRegularGrid*."}
{"_id": "q_8388", "text": "Determine a set of relocated grids from an input set of grids, by relocating their pixels based on the \\\n borders.\n\n The blurring-grid does not have its coordinates relocated, as it is only used for computing analytic \\\n light-profiles and not inversion-grids.\n\n Parameters\n -----------\n grid_stack : GridStack\n The grid-stack, whose grids' coordinates are relocated."}
{"_id": "q_8389", "text": "Run a fit for each galaxy from the previous phase.\n\n Parameters\n ----------\n data: LensData\n results: ResultsCollection\n Results from all previous phases\n mask: Mask\n The mask\n positions\n\n Returns\n -------\n results: HyperGalaxyResults\n A collection of results, with one item per a galaxy"}
{"_id": "q_8390", "text": "Determine the mapping between every masked pixelization-grid pixel and unmasked pixelization-grid pixel. This is\n performed by checking whether each pixelization-grid pixel is within the regular-masks, and mapping the indexes.\n\n Parameters\n -----------\n total_sparse_pixels : int\n The total number of pixels in the pixelization grid which fall within the regular-masks.\n mask : ccd.masks.Mask\n The regular-masks within which pixelization pixels must be inside\n unmasked_sparse_grid_pixel_centres : ndarray\n The centres of the unmasked pixelization grid pixels."}
{"_id": "q_8391", "text": "Use the central arc-second coordinate of every unmasked pixelization grid's pixels and mapping between each\n pixelization pixel and unmasked pixelization pixel to compute the central arc-second coordinate of every masked\n pixelization grid pixel.\n\n Parameters\n -----------\n unmasked_sparse_grid : ndarray\n The (y,x) arc-second centre of every unmasked pixelization grid pixel.\n sparse_to_unmasked_sparse : ndarray\n The index mapping between every pixelization pixel and masked pixelization pixel."}
{"_id": "q_8392", "text": "Resize an array to a new size around a central pixel.\n\n If the origin (e.g. the central pixel) of the resized array is not specified, the central pixel of the array is \\\n calculated automatically. For example, a (5,5) array's central pixel is (2,2). For even dimensions the central \\\n pixel is assumed to be the lower indexed value, e.g. a (6,4) array's central pixel is calculated as (2,1).\n\n The default origin is (-1, -1) because numba requires that the function input is the same type throughout the \\\n function, thus a default 'None' value cannot be used.\n\n Parameters\n ----------\n array_2d : ndarray\n The 2D array that is resized.\n resized_shape : (int, int)\n The (y,x) new pixel dimension of the trimmed array.\n origin : (int, int)\n The origin of the resized array, e.g. the central pixel around which the array is extracted.\n\n Returns\n -------\n ndarray\n The resized 2D array from the input 2D array.\n\n Examples\n --------\n array_2d = np.ones((5,5))\n resize_array = resize_array_2d(array_2d=array_2d, resized_shape=(2,2), origin=(2, 2))"}
{"_id": "q_8393", "text": "Bin up an array to coarser resolution, by binning up groups of pixels and using their mean value to determine \\\n the value of the new pixel.\n\n If an array of shape (8,8) is input and the bin up size is 2, this would return a new array of size (4,4) where \\\n every pixel was the mean of each collection of 2x2 pixels on the (8,8) array.\n\n If binning up the array leads to an edge being cut (e.g. a (9,9) array binned up by 2), an array is first \\\n extracted around the centre of that array.\n\n\n Parameters\n ----------\n array_2d : ndarray\n The 2D array that is resized.\n new_shape : (int, int)\n The (y,x) new pixel dimension of the trimmed array.\n origin : (int, int)\n The origin of the resized array, e.g. the central pixel around which the array is extracted.\n\n Returns\n -------\n ndarray\n The resized 2D array from the input 2D array.\n\n Examples\n --------\n array_2d = np.ones((5,5))\n resize_array = resize_array_2d(array_2d=array_2d, new_shape=(2,2), origin=(2, 2))"}
{"_id": "q_8394", "text": "For a given inversion mapping matrix, convolve every pixel's mapped regular with the PSF kernel.\n\n A mapping matrix provides non-zero entries in all elements which map two pixels to one another\n (see *inversions.mappers*).\n\n For example, let's take a regular which is masked using a 'cross' of 5 pixels:\n\n [[ True, False, True]],\n [[False, False, False]],\n [[ True, False, True]]\n\n An example mapping matrix of this cross is as follows (5 regular pixels x 3 source pixels):\n\n [1, 0, 0] [0->0]\n [1, 0, 0] [1->0]\n [0, 1, 0] [2->1]\n [0, 1, 0] [3->1]\n [0, 0, 1] [4->2]\n\n For each source-pixel, we can create a regular of its unit-surface brightnesses by mapping the non-zero\n entries back to masks. For example, doing this for source pixel 1 gives:\n\n [[0.0, 1.0, 0.0]],\n [[1.0, 0.0, 0.0]]\n [[0.0, 0.0, 0.0]]\n\n And source pixel 2:\n\n [[0.0, 0.0, 0.0]],\n [[0.0, 1.0, 1.0]]\n [[0.0, 0.0, 0.0]]\n\n We then convolve each of these regular with our PSF kernel, in 2 dimensions, like we would a normal regular. For\n example, using the kernel below:\n\n kernel:\n\n [[0.0, 0.1, 0.0]]\n [[0.1, 0.6, 0.1]]\n [[0.0, 0.1, 0.0]]\n\n Blurred Source Pixel 1 (we don't need to perform the convolution into masked pixels):\n\n [[0.0, 0.6, 0.0]],\n [[0.6, 0.0, 0.0]],\n [[0.0, 0.0, 0.0]]\n\n Blurred Source pixel 2:\n\n [[0.0, 0.0, 0.0]],\n [[0.0, 0.7, 0.7]],\n [[0.0, 0.0, 0.0]]\n\n Finally, we map each of these blurred regular back to a blurred mapping matrix, which is analogous to the\n mapping matrix.\n\n [0.6, 0.0, 0.0] [0->0]\n [0.6, 0.0, 0.0] [1->0]\n [0.0, 0.7, 0.0] [2->1]\n [0.0, 0.7, 0.0] [3->1]\n [0.0, 0.0, 0.6] [4->2]\n\n If the mapping matrix is sub-gridded, we perform the convolution on the fractional surface brightnesses in an\n identical fashion to above.\n\n Parameters\n -----------\n mapping_matrix : ndarray\n The 2D mapping matrix describing how every inversion pixel maps to a datas_ pixel."}
{"_id": "q_8395", "text": "Integrate the mass profile's convergence profile to compute the total angular mass within an ellipse of \\\n specified major axis. This is centred on the mass profile.\n\n The following units for mass can be specified and output:\n\n - Dimensionless angular units (default) - 'angular'.\n - Solar masses - 'solMass' (multiplies the angular mass by the critical surface mass density)\n\n Parameters\n ----------\n major_axis : float\n The major-axis radius of the ellipse.\n unit_mass : str\n The units the mass is returned in (angular | solMass).\n critical_surface_density : float or None\n The critical surface mass density of the strong lens configuration, which converts mass from angular \\\n units to physical units (e.g. solar masses)."}
{"_id": "q_8396", "text": "Routine to integrate an elliptical light profiles - set axis ratio to 1 to compute the luminosity within a \\\n circle"}
{"_id": "q_8397", "text": "Calculate the mass between two circular annuli and compute the density by dividing by the annuli surface\n area.\n\n The value returned by the mass integral is dimensionless, therefore the density between annuli is returned in \\\n units of inverse radius squared. A conversion factor can be specified to convert this to a physical value \\\n (e.g. the critical surface mass density).\n\n Parameters\n -----------\n inner_annuli_radius : float\n The radius of the inner annulus outside of which the density is estimated.\n outer_annuli_radius : float\n The radius of the outer annulus inside of which the density is estimated."}
{"_id": "q_8398", "text": "Rescale the Einstein radius by slope and axis_ratio, to reduce its degeneracy with other mass-profile\n parameters"}
{"_id": "q_8399", "text": "Calculate the projected convergence at a given set of arc-second gridded coordinates.\n\n Parameters\n ----------\n grid : grids.RegularGrid\n The grid of (y,x) arc-second coordinates the surface density is computed on."}
{"_id": "q_8400", "text": "Tabulate an integral over the surface density of deflection potential of a mass profile. This is used in \\\n the GeneralizedNFW profile classes to speed up the integration procedure.\n\n Parameters\n -----------\n grid : grids.RegularGrid\n The grid of (y,x) arc-second coordinates the potential / deflection_stacks are computed on.\n tabulate_bins : int\n The number of bins to tabulate the inner integral of this profile."}
{"_id": "q_8401", "text": "Compute the intensity of the profile at a given radius.\n\n Parameters\n ----------\n radius : float\n The distance from the centre of the profile."}
{"_id": "q_8402", "text": "Compute the total luminosity of the galaxy's light profiles within a circle of specified radius.\n\n See *light_profiles.luminosity_within_circle* for details of how this is performed.\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the luminosity within.\n unit_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."}
{"_id": "q_8403", "text": "Compute the total angular mass of the galaxy's mass profiles within a circle of specified radius.\n\n See *profiles.mass_profiles.mass_within_circle* for details of how this is performed.\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the dimensionless mass within.\n unit_mass : str\n The units the mass is returned in (angular | solMass).\n critical_surface_density : float\n The critical surface mass density of the strong lens configuration, which converts mass from angular \\\n units to physical units (e.g. solar masses)."}
{"_id": "q_8404", "text": "The Einstein Mass of this galaxy, which is the sum of Einstein Radii of its mass profiles.\n\n If the galaxy is composed of multiple elliptical profiles with different axis-ratios, this Einstein Mass \\\n may be inaccurate. This is because the differently oriented ellipses of each mass profile"}
{"_id": "q_8405", "text": "Compute a scaled galaxy hyper noise-map from a baseline noise-map.\n\n This uses the galaxy contribution map and the *noise_factor* and *noise_power* hyper-parameters.\n\n Parameters\n -----------\n noise_map : ndarray\n The observed noise-map (before scaling).\n contributions : ndarray\n The galaxy contribution map."}
{"_id": "q_8406", "text": "For a given 1D regular array and blurring array, convolve the two using this convolver.\n\n Parameters\n -----------\n image_array : ndarray\n 1D array of the regular values which are to be blurred with the convolver's PSF.\n blurring_array : ndarray\n 1D array of the blurring regular values which blur into the regular-array after PSF convolution."}
{"_id": "q_8407", "text": "Compute the intensities of a list of galaxies from an input grid, by summing the individual intensities \\\n of each galaxy's light profile.\n\n If the input grid is a *grids.SubGrid*, the intensities are calculated on the sub-grid and binned-up to the \\\n original regular grid by taking the mean value of every set of sub-pixels.\n\n If no galaxies are entered into the function, an array of all zeros is returned.\n\n Parameters\n -----------\n grid : RegularGrid\n The grid (regular or sub) of (y,x) arc-second coordinates at the centre of every unmasked pixel which the \\\n intensities are calculated on.\n galaxies : [galaxy.Galaxy]\n The galaxies whose light profiles are used to compute the intensities."}
{"_id": "q_8408", "text": "Compute the convergence of a list of galaxies from an input grid, by summing the individual convergence \\\n of each galaxy's mass profile.\n\n If the input grid is a *grids.SubGrid*, the convergence is calculated on the sub-grid and binned-up to the \\\n original regular grid by taking the mean value of every set of sub-pixels.\n\n If no galaxies are entered into the function, an array of all zeros is returned.\n\n Parameters\n -----------\n grid : RegularGrid\n The grid (regular or sub) of (y,x) arc-second coordinates at the centre of every unmasked pixel which the \\\n convergence is calculated on.\n galaxies : [galaxy.Galaxy]\n The galaxies whose mass profiles are used to compute the convergence."}
{"_id": "q_8409", "text": "Compute the potential of a list of galaxies from an input grid, by summing the individual potential \\\n of each galaxy's mass profile.\n\n If the input grid is a *grids.SubGrid*, the potential is calculated on the sub-grid and binned-up to the \\\n original regular grid by taking the mean value of every set of sub-pixels.\n\n If no galaxies are entered into the function, an array of all zeros is returned.\n\n Parameters\n -----------\n grid : RegularGrid\n The grid (regular or sub) of (y,x) arc-second coordinates at the centre of every unmasked pixel which the \\\n potential is calculated on.\n galaxies : [galaxy.Galaxy]\n The galaxies whose mass profiles are used to compute the potential."}
{"_id": "q_8410", "text": "Compute the deflections of a list of galaxies from an input sub-grid, by summing the individual deflections \\\n of each galaxy's mass profile.\n\n The deflections are calculated on the sub-grid and binned-up to the original regular grid by taking the mean value \\\n of every set of sub-pixels.\n\n If no galaxies are entered into the function, an array of all zeros is returned.\n\n Parameters\n -----------\n sub_grid : RegularGrid\n The grid (regular or sub) of (y,x) arc-second coordinates at the centre of every unmasked pixel which the \\\n deflections are calculated on.\n galaxies : [galaxy.Galaxy]\n The galaxies whose mass profiles are used to compute the deflections."}
{"_id": "q_8411", "text": "For a fitting hyper_galaxy_image, hyper_galaxy model image, list of hyper galaxies images and model hyper galaxies, compute\n their contribution maps, which are used to compute a scaled-noise_map map. All quantities are masked 1D arrays.\n\n The reason this is separate from the *contributions_from_fitting_hyper_images_and_hyper_galaxies* function is that\n each hyper_galaxy image has a list of hyper galaxies images and associated hyper galaxies (one for each galaxy). Thus,\n this function breaks down the calculation of each 1D masked contribution map and returns them in the same datas\n structure (2 lists with indexes [image_index][contribution_map_index]).\n\n Parameters\n ----------\n hyper_model_image_1d : ndarray\n The best-fit model image to the datas (e.g. from a previous analysis phase).\n hyper_galaxy_images_1d : [ndarray]\n The best-fit model image of each hyper galaxy to the datas (e.g. from a previous analysis phase).\n hyper_galaxies : [galaxy.Galaxy]\n The hyper galaxies which represent the model components used to scale the noise_map, which correspond to\n individual galaxies in the image.\n hyper_minimum_values : [float]\n The minimum value of each hyper_galaxy-image contribution map, which ensures zeros don't impact the scaled noise-map."}
{"_id": "q_8412", "text": "For a contribution map and noise-map, use the model hyper galaxies to compute a scaled noise-map.\n\n Parameters\n -----------\n contribution_maps : ndarray\n The image's list of 1D masked contribution maps (e.g. one for each hyper galaxy)\n hyper_galaxies : [galaxy.Galaxy]\n The hyper galaxies which represent the model components used to scale the noise_map, which correspond to\n individual galaxies in the image.\n noise_map : ccd.NoiseMap or ndarray\n An array describing the RMS standard deviation error in each pixel, preferably in units of electrons per\n second."}
{"_id": "q_8413", "text": "Wrap the function in a function that checks whether the coordinates have been transformed. If they have not \\ \n been transformed then they are transformed.\n\n Parameters\n ----------\n func : (profiles, *args, **kwargs) -> Object\n A function that requires transformed coordinates\n\n Returns\n -------\n A function that can accept cartesian or transformed coordinates"}
{"_id": "q_8414", "text": "Caches results of a call to a grid function. If a grid that evaluates to the same byte value is passed into the same\n function of the same instance as previously then the cached result is returned.\n\n Parameters\n ----------\n func\n Some instance method that takes a grid as its argument\n\n Returns\n -------\n result\n Some result, either newly calculated or recovered from the cache"}
{"_id": "q_8415", "text": "Determine the sin and cosine of the angle between the profile's ellipse and the positive x-axis, \\\n counter-clockwise."}
{"_id": "q_8416", "text": "The angle between each angle theta on the grid and the profile, in radians.\n\n Parameters\n -----------\n grid_thetas : ndarray\n The angle theta counter-clockwise from the positive x-axis to each coordinate in radians."}
{"_id": "q_8417", "text": "Compute the mappings between a set of regular-grid pixels and pixelization pixels, using information on \\\n how regular pixels map to their closest pixelization pixel on the image-plane pix-grid and the pixelization's \\\n pixel centres.\n\n To determine the complete set of regular-pixel to pixelization pixel mappings, we must pair every regular-pixel to \\\n its nearest pixel. Using a full nearest neighbor search to do this is slow, thus the pixel neighbors (derived via \\\n the Voronoi grid) are used to localize each nearest neighbor search via a graph search.\n\n Parameters\n ----------\n regular_grid : RegularGrid\n The grid of (y,x) arc-second coordinates at the centre of every unmasked pixel, which has been traced \\\n to an irregular grid via lens.\n regular_to_nearest_pix : ndarray\n A 1D array that maps every regular-grid pixel to its nearest pix-grid pixel (as determined on the unlensed \\\n 2D array).\n pixel_centres : ndarray\n The (y,x) centre of every Voronoi pixel in arc-seconds.\n pixel_neighbors : ndarray\n An array of length (voronoi_pixels) which provides the index of all neighbors of every pixel in \\\n the Voronoi grid (entries of -1 correspond to no neighbor).\n pixel_neighbors_size : ndarray\n An array of length (voronoi_pixels) which gives the number of neighbors of every pixel in the \\\n Voronoi grid."}
{"_id": "q_8418", "text": "Compute the mappings between a set of sub-grid pixels and pixelization pixels, using information on \\\n how the regular pixels hosting each sub-pixel map to their closest pixelization pixel on the image-plane pix-grid \\\n and the pixelization's pixel centres.\n\n To determine the complete set of sub-pixel to pixelization pixel mappings, we must pair every sub-pixel to \\\n its nearest pixel. Using a full nearest neighbor search to do this is slow, thus the pixel neighbors (derived via \\\n the Voronoi grid) are used to localize each nearest neighbor search by using a graph search.\n\n Parameters\n ----------\n regular_grid : RegularGrid\n The grid of (y,x) arc-second coordinates at the centre of every unmasked pixel, which has been traced \\\n to an irregular grid via lens.\n regular_to_nearest_pix : ndarray\n A 1D array that maps every regular-grid pixel to its nearest pix-grid pixel (as determined on the unlensed \\\n 2D array).\n pixel_centres : (float, float)\n The (y,x) centre of every Voronoi pixel in arc-seconds.\n pixel_neighbors : ndarray\n An array of length (voronoi_pixels) which provides the index of all neighbors of every pixel in \\\n the Voronoi grid (entries of -1 correspond to no neighbor).\n pixel_neighbors_size : ndarray\n An array of length (voronoi_pixels) which gives the number of neighbors of every pixel in the \\\n Voronoi grid."}
{"_id": "q_8419", "text": "Integrate the light profile to compute the total luminosity within a circle of specified radius. This is \\\n centred on the light profile's centre.\n\n The following units for luminosity can be specified and output:\n\n - Electrons per second (default) - 'eps'.\n - Counts - 'counts' (multiplies the luminosity in electrons per second by the exposure time).\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the luminosity within.\n unit_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float or None\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."}
{"_id": "q_8420", "text": "Integrate the light profiles to compute the total luminosity within an ellipse of specified major axis. \\\n This is centred on the light profile's centre.\n\n The following units for luminosity can be specified and output:\n\n - Electrons per second (default) - 'eps'.\n - Counts - 'counts' (multiplies the luminosity in electrons per second by the exposure time).\n\n Parameters\n ----------\n major_axis : float\n The major-axis radius of the ellipse.\n unit_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float or None\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."}
{"_id": "q_8421", "text": "Routine to integrate the luminosity of an elliptical light profile.\n\n The axis ratio is set to 1.0 for computing the luminosity within a circle"}
{"_id": "q_8422", "text": "Calculate the intensity of the Gaussian light profile on a grid of radial coordinates.\n\n Parameters\n ----------\n grid_radii : float\n The radial distance from the centre of the profile, for each coordinate on the grid."}
{"_id": "q_8423", "text": "Compute the total luminosity of all galaxies in this plane within a circle of specified radius.\n\n See *galaxy.light_within_circle* and *light_profiles.light_within_circle* for details \\\n of how this is performed.\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the luminosity within.\n units_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."}
{"_id": "q_8424", "text": "Compute the total luminosity of all galaxies in this plane within an ellipse of specified major-axis.\n\n The value returned by this integral is dimensionless, and a conversion factor can be specified to convert it \\\n to a physical value (e.g. the photometric zeropoint).\n\n See *galaxy.light_within_ellipse* and *light_profiles.light_within_ellipse* for details\n of how this is performed.\n\n Parameters\n ----------\n major_axis : float\n The major-axis radius of the ellipse.\n units_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."}
{"_id": "q_8425", "text": "Compute the total mass of all galaxies in this plane within a circle of specified radius.\n\n See *galaxy.angular_mass_within_circle* and *mass_profiles.angular_mass_within_circle* for details\n of how this is performed.\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the dimensionless mass within.\n units_mass : str\n The units the mass is returned in (angular | solMass).\n critical_surface_density : float\n The critical surface mass density of the strong lens configuration, which converts mass from angular \\\n units to physical units (e.g. solar masses)."}
{"_id": "q_8426", "text": "Compute the total mass of all galaxies in this plane within an ellipse of specified major-axis.\n\n See *galaxy.angular_mass_within_ellipse* and *mass_profiles.angular_mass_within_ellipse* for details \\\n of how this is performed.\n\n Parameters\n ----------\n major_axis : float\n The major-axis radius of the ellipse.\n units_mass : str\n The units the mass is returned in (angular | solMass).\n critical_surface_density : float\n The critical surface mass density of the strong lens configuration, which converts mass from angular \\\n units to physical units (e.g. solar masses)."}
{"_id": "q_8427", "text": "Compute the xticks labels of this grid_stack, used for plotting the x-axis ticks when visualizing an \\\n image"}
{"_id": "q_8428", "text": "This is a utility function for the function above, which performs the iteration over each plane's galaxies \\\n and computes each galaxy's unmasked blurred image.\n\n Parameters\n ----------\n padded_grid_stack\n psf : ccd.PSF\n The PSF of the image used for convolution."}
{"_id": "q_8429", "text": "Trace the positions to the next plane."}
{"_id": "q_8430", "text": "Creates an instance of Array and fills it with a single value\n\n Parameters\n ----------\n value: float\n The value with which the array should be filled\n shape: (int, int)\n The shape of the array\n pixel_scale: float\n The scale of a pixel in arc seconds\n\n Returns\n -------\n array: ScaledSquarePixelArray\n An array filled with a single value"}
{"_id": "q_8431", "text": "Extract the 2D region of an array corresponding to the rectangle encompassing all unmasked values.\n\n This is used to extract and visualize only the region of an image that is used in an analysis.\n\n Parameters\n ----------\n mask : mask.Mask\n The mask around which the scaled array is extracted.\n buffer : int\n The buffer of pixels around the extraction."}
{"_id": "q_8432", "text": "Resize the array to a new shape and at a new origin.\n\n Parameters\n -----------\n new_shape : (int, int)\n The new two-dimensional shape of the array."}
{"_id": "q_8433", "text": "Fit lens data with a normal tracer and sensitivity tracer, to determine our sensitivity to a selection of \\ \n galaxy components. This factory automatically determines the type of fit based on the properties of the galaxies \\\n in the tracers.\n\n Parameters\n -----------\n lens_data : lens_data.LensData or lens_data.LensDataHyper\n The lens data that is fitted.\n tracer_normal : ray_tracing.AbstractTracer\n A tracer whose galaxies have the same model components (e.g. light profiles, mass profiles) as the \\\n lens data that we are fitting.\n tracer_sensitive : ray_tracing.AbstractTracerNonStack\n A tracer whose galaxies have the same model components (e.g. light profiles, mass profiles) as the \\\n lens data that we are fitting, but also additional components (e.g. mass clumps) to which we measure \\\n how sensitive we are."}
{"_id": "q_8434", "text": "Setup a mask where unmasked pixels are within a circle of an input arc second radius and centre.\n\n Parameters\n ----------\n shape: (int, int)\n The (y,x) shape of the mask in units of pixels.\n pixel_scale: float\n The arc-second to pixel conversion factor of each pixel.\n radius_arcsec : float\n The radius (in arc seconds) of the circle within which pixels are unmasked.\n centre: (float, float)\n The centre of the circle used to mask pixels."}
{"_id": "q_8435", "text": "Setup a mask where unmasked pixels are within an annulus of input inner and outer arc second radii and \\\n centre.\n\n Parameters\n ----------\n shape : (int, int)\n The (y,x) shape of the mask in units of pixels.\n pixel_scale: float\n The arc-second to pixel conversion factor of each pixel.\n inner_radius_arcsec : float\n The radius (in arc seconds) of the inner circle outside of which pixels are unmasked.\n outer_radius_arcsec : float\n The radius (in arc seconds) of the outer circle within which pixels are unmasked.\n centre: (float, float)\n The centre of the annulus used to mask pixels."}
{"_id": "q_8436", "text": "Setup a mask where unmasked pixels are within an ellipse of an input arc second major-axis and centre.\n\n Parameters\n ----------\n shape: (int, int)\n The (y,x) shape of the mask in units of pixels.\n pixel_scale: float\n The arc-second to pixel conversion factor of each pixel.\n major_axis_radius_arcsec : float\n The major-axis (in arc seconds) of the ellipse within which pixels are unmasked.\n axis_ratio : float\n The axis-ratio of the ellipse within which pixels are unmasked.\n phi : float\n The rotation angle of the ellipse within which pixels are unmasked, (counter-clockwise from the positive \\\n x-axis).\n centre: (float, float)\n The centre of the ellipse used to mask pixels."}
{"_id": "q_8437", "text": "Setup a mask where unmasked pixels are within an elliptical annulus of input inner and outer arc second \\\n major-axis and centre.\n\n Parameters\n ----------\n shape: (int, int)\n The (y,x) shape of the mask in units of pixels.\n pixel_scale: float\n The arc-second to pixel conversion factor of each pixel.\n inner_major_axis_radius_arcsec : float\n The major-axis (in arc seconds) of the inner ellipse within which pixels are masked.\n inner_axis_ratio : float\n The axis-ratio of the inner ellipse within which pixels are masked.\n inner_phi : float\n The rotation angle of the inner ellipse within which pixels are masked, (counter-clockwise from the \\\n positive x-axis).\n outer_major_axis_radius_arcsec : float\n The major-axis (in arc seconds) of the outer ellipse within which pixels are unmasked.\n outer_axis_ratio : float\n The axis-ratio of the outer ellipse within which pixels are unmasked.\n outer_phi : float\n The rotation angle of the outer ellipse within which pixels are unmasked, (counter-clockwise from the \\\n positive x-axis).\n centre: (float, float)\n The centre of the elliptical annuli used to mask pixels."}
{"_id": "q_8438", "text": "The zoomed rectangular region corresponding to the square encompassing all unmasked values.\n\n This is used to zoom in on the region of an image that is used in an analysis for visualization."}
{"_id": "q_8439", "text": "Create an instance of the associated class for a set of arguments\n\n Parameters\n ----------\n arguments: {Prior: value}\n Dictionary mapping priors to attribute name and value pairs\n\n Returns\n -------\n An instance of the class"}
{"_id": "q_8440", "text": "Create a new galaxy prior from a set of arguments, replacing the priors of some of this galaxy prior's prior\n models with new arguments.\n\n Parameters\n ----------\n arguments: dict\n A dictionary mapping between old priors and their replacements.\n\n Returns\n -------\n new_model: GalaxyModel\n A model with some or all priors replaced."}
{"_id": "q_8441", "text": "Plot the observed image of the ccd data.\n\n See *autolens.data.array.plotters.array_plotters* for a description of all input parameters not described below.\n\n Parameters\n -----------\n image : ScaledSquarePixelArray\n The image of the data.\n plot_origin : True\n If true, the origin of the data's coordinate system is plotted as an 'x'.\n image_plane_pix_grid : ndarray or data.array.grid_stacks.PixGrid\n If an adaptive pixelization whose pixels are formed by tracing pixels from the data, this plots those pixels \\\n over the image."}
{"_id": "q_8442", "text": "Write `data` to file record `n`; records are indexed from 1."}
{"_id": "q_8443", "text": "Return a memory-map of the elements `start` through `end`.\n\n The memory map will offer the 8-byte double-precision floats\n (\"elements\") in the file from index `start` through to the index\n `end`, inclusive, both counting the first float as element 1.\n Memory maps must begin on a page boundary, so `skip` returns the\n number of extra bytes at the beginning of the return value."}
{"_id": "q_8444", "text": "Return the text inside the comment area of the file."}
{"_id": "q_8445", "text": "Compute the component values for the time `tdb` plus `tdb2`."}
{"_id": "q_8446", "text": "Close this file."}
{"_id": "q_8447", "text": "Map the coefficients into memory using a NumPy array."}
{"_id": "q_8448", "text": "Generate angles and derivatives for time `tdb` plus `tdb2`.\n\n If ``derivative`` is true, return a tuple containing both the\n angle and its derivative; otherwise simply return the angles."}
{"_id": "q_8449", "text": "Normalise and check a backend path.\n\n Ensure that the requested backend path is specified as a relative path,\n and resolves to a location under the given source tree.\n\n Return an absolute version of the requested path."}
{"_id": "q_8450", "text": "Visit a function call.\n\n We expect every logging statement and string format to be a function call."}
{"_id": "q_8451", "text": "Process binary operations while processing the first logging argument."}
{"_id": "q_8452", "text": "Process keyword arguments."}
{"_id": "q_8453", "text": "Helper to get the exception name from an ExceptHandler node in both py2 and py3."}
{"_id": "q_8454", "text": "Check if value has id attribute and return it.\n\n :param value: The value to get id from.\n :return: The value.id."}
{"_id": "q_8455", "text": "Checks if the node is a bare exception name from an except block."}
{"_id": "q_8456", "text": "Reports a violation if exc_info keyword is used with logging.error or logging.exception."}
{"_id": "q_8457", "text": "Get a json dict of the attributes of this object."}
{"_id": "q_8458", "text": "Convenience method to create a file from a string.\n\n This file object's metadata will have the id 'inlined_input'.\n\n Inputs\n ------\n content -- the content of the file (a string).\n position -- (default 1) rank among all files of the model while parsing\n see FileMetadata\n file_id -- (default 'inlined_input') the file_id that will be used by\n kappa."}
{"_id": "q_8459", "text": "Convenience method to create a kappa file object from a file on disk\n\n Inputs\n ------\n fpath -- path to the file on disk\n position -- (default 1) rank among all files of the model while parsing\n see FileMetadata\n file_id -- (default = fpath) the file_id that will be used by kappa."}
{"_id": "q_8460", "text": "Add a kappa model given in a string to the project."}
{"_id": "q_8461", "text": "Add a kappa model from a file at given path to the project."}
{"_id": "q_8462", "text": "Delete file from database only if needed.\n\n When editing and the filefield is a new file,\n deletes the previous file (if any) from the database.\n Call this function immediately BEFORE saving the instance."}
{"_id": "q_8463", "text": "Edit the download-link inner text."}
{"_id": "q_8464", "text": "Checks the input and output to see if they are valid"}
{"_id": "q_8465", "text": "Append new samples to the data_capture array and increment the sample counter\r\n If length reaches Tcapture, then the newest samples will be kept. If Tcapture = 0 \r\n then new values are not appended to the data_capture array."}
{"_id": "q_8466", "text": "Append new samples to the data_capture_left array and the data_capture_right\r\n array and increment the sample counter. If length reaches Tcapture, then the \r\n newest samples will be kept. If Tcapture = 0 then new values are not appended \r\n to the data_capture array."}
{"_id": "q_8467", "text": "Add new tic time to the DSP_tic list. Will not be called if\r\n Tcapture = 0."}
{"_id": "q_8468", "text": "Add new toc time to the DSP_toc list. Will not be called if\r\n Tcapture = 0."}
{"_id": "q_8469", "text": "The average of the root values is used when multiplicity \r\n is greater than one.\r\n\r\n Mark Wickert October 2016"}
{"_id": "q_8470", "text": "Cruise control with PI controller and hill disturbance.\n\n This function returns various system function configurations\n for the cruise control Case Study example found in \n the supplementary article. The plant model is obtained by\n linearizing the equations of motion and the controller contains a\n proportional and integral gain term set via the closed-loop parameters\n natural frequency wn (rad/s) and damping zeta.\n\n Parameters\n ----------\n wn : closed-loop natural frequency in rad/s, nominally 0.1\n zeta : closed-loop damping factor, nominally 1.0\n T : vehicle time constant, nominally 10 s\n vcruise : cruise velocity set point, nominally 75 mph\n vmax : maximum vehicle velocity, nominally 120 mph\n tf_mode : 'H', 'HE', 'HVW', or 'HED' controls the system function returned by the function \n 'H' : closed-loop system function V(s)/R(s)\n 'HE' : closed-loop system function E(s)/R(s)\n 'HVW' : closed-loop system function V(s)/W(s)\n 'HED' : closed-loop system function E(s)/D(s), where D is the hill disturbance input\n\n Returns\n -------\n b : numerator coefficient ndarray\n a : denominator coefficient ndarray \n\n Examples\n --------\n >>> # return the closed-loop system function output/input velocity\n >>> b,a = cruise_control(wn,zeta,T,vcruise,vmax,tf_mode='H')\n >>> # return the closed-loop system function loop error/hill disturbance\n >>> b,a = cruise_control(wn,zeta,T,vcruise,vmax,tf_mode='HED')"}
{"_id": "q_8471", "text": "Stereo demod from complex baseband at sampling rate fs.\r\n Assume fs is 2400 ksps\r\n \r\n Mark Wickert July 2017"}
{"_id": "q_8472", "text": "Write IIR SOS Header Files\r\n File format is compatible with CMSIS-DSP IIR \r\n Directform II Filter Functions\r\n \r\n Mark Wickert March 2015-October 2016"}
{"_id": "q_8473", "text": "Eye pattern plot of a baseband digital communications waveform.\n\n The signal must be real, but can be multivalued in terms of the underlying\n modulation scheme. Used for BPSK eye plots in the Case Study article.\n\n Parameters\n ----------\n x : ndarray of the real input data vector/array\n L : display length in samples (usually two symbols)\n S : start index\n\n Returns\n -------\n None : A plot window opens containing the eye plot\n \n Notes\n -----\n Increase S to eliminate filter transients.\n \n Examples\n --------\n 1000 bits at 10 samples per bit with 'rc' shaping.\n\n >>> import matplotlib.pyplot as plt\n >>> from sk_dsp_comm import digitalcom as dc\n >>> x,b, data = dc.NRZ_bits(1000,10,'rc')\n >>> dc.eye_plot(x,20,60)\n >>> plt.show()"}
{"_id": "q_8474", "text": "Sample a baseband digital communications waveform at the symbol spacing.\n\n Parameters\n ----------\n x : ndarray of the input digital comm signal\n Ns : number of samples per symbol (bit)\n start : the array index to start the sampling\n\n Returns\n -------\n xI : ndarray of the real part of x following sampling\n xQ : ndarray of the imaginary part of x following sampling\n\n Notes\n -----\n Normally the signal is complex, so the scatter plot contains \n clusters at points in the complex plane. For a binary signal \n such as BPSK, the point centers are nominally +/-1 on the real\n axis. Start is used to eliminate transients from the FIR\n pulse shaping filters from appearing in the scatter plot.\n\n Examples\n --------\n >>> import matplotlib.pyplot as plt\n >>> from sk_dsp_comm import digitalcom as dc\n >>> x,b, data = dc.NRZ_bits(1000,10,'rc')\n\n Add some noise so points are now scattered about +/-1.\n\n >>> y = dc.cpx_AWGN(x,20,10)\n >>> yI,yQ = dc.scatter(y,10,60)\n >>> plt.plot(yI,yQ,'.')\n >>> plt.grid()\n >>> plt.xlabel('In-Phase')\n >>> plt.ylabel('Quadrature')\n >>> plt.axis('equal')\n >>> plt.show()"}
{"_id": "q_8475", "text": "This function generates"}
{"_id": "q_8476", "text": "A truncated square root raised cosine pulse used in digital communications.\n\n The pulse shaping factor :math:`0 < \\\\alpha < 1` is required as well as the\n truncation factor M which sets the pulse duration to be :math:`2*M*T_{symbol}`.\n \n\n Parameters\n ----------\n Ns : number of samples per symbol\n alpha : excess bandwidth factor on (0, 1), e.g., 0.35\n M : equals RC one-sided symbol truncation factor\n\n Returns\n -------\n b : ndarray containing the pulse shape\n\n Notes\n -----\n The pulse shape b is typically used as the FIR filter coefficients\n when forming a pulse shaped digital communications waveform. When \n square root raised cosine (SRC) pulse is used to generate Tx signals and\n at the receiver used as a matched filter (receiver FIR filter), the \n received signal is now raised cosine shaped, thus having zero\n intersymbol interference and the optimum removal of additive white \n noise if present at the receiver input.\n\n Examples\n --------\n Ten samples per symbol and :math:`\\\\alpha = 0.35`.\n\n >>> import matplotlib.pyplot as plt\n >>> from numpy import arange\n >>> from sk_dsp_comm.digitalcom import sqrt_rc_imp\n >>> b = sqrt_rc_imp(10,0.35)\n >>> n = arange(-10*6,10*6+1)\n >>> plt.stem(n,b)\n >>> plt.show()"}
{"_id": "q_8477", "text": "Convert an unsigned integer to a numpy binary array with the first\n element the MSB and the last element the LSB."}
{"_id": "q_8478", "text": "Convert a binary array back to a nonnegative integer. The array length is \n the bit width. The first input index holds the MSB and the last holds the LSB."}
{"_id": "q_8479", "text": "Filter the signal"}
{"_id": "q_8480", "text": "Filter the signal using second-order sections"}
{"_id": "q_8481", "text": "Celery task decorator. Forces the task to have only one running instance at a time.\n\n Use with bound tasks (@celery.task(bind=True)).\n\n Modeled after:\n http://loose-bits.com/2010/10/distributed-task-locking-in-celery.html\n http://blogs.it.ox.ac.uk/inapickle/2012/01/05/python-decorators-with-optional-arguments/\n\n Written by @Robpol86.\n\n :raise OtherInstanceError: If another instance is already running.\n\n :param function func: The function to decorate, must be also decorated by @celery.task.\n :param int lock_timeout: Lock timeout in seconds plus five more seconds, in case the task crashes and fails to\n release the lock. If not specified, the values of the task's soft/hard limits are used. If all else fails,\n timeout will be 5 minutes.\n :param bool include_args: Include the md5 checksum of the arguments passed to the task in the Redis key. This allows\n the same task to run with different arguments, only stopping a task from running if another instance of it is\n running with the same arguments."}
{"_id": "q_8482", "text": "Remove the lock regardless of timeout."}
{"_id": "q_8483", "text": "Iterator used to iterate in chunks over an array of size `num_samples`.\n At each iteration returns `chunksize` except for the last iteration."}
{"_id": "q_8484", "text": "Reduce with `func`, chunk by chunk, the passed pytable `array`."}
{"_id": "q_8485", "text": "Load the array `data` in the .mat file `fname`."}
{"_id": "q_8486", "text": "Check whether the git executable is found."}
{"_id": "q_8487", "text": "Get the Git version."}
{"_id": "q_8488", "text": "Returns whether there are uncommitted changes in the working dir."}
{"_id": "q_8489", "text": "Get one-line description of HEAD commit for repository in current dir."}
{"_id": "q_8490", "text": "Get the HEAD commit SHA1 of repository in current dir."}
{"_id": "q_8491", "text": "Print the last commit line and any uncommitted changes."}
{"_id": "q_8492", "text": "Store parameters in `params` in `h5file.root.parameters`.\n\n `nparams` (dict)\n A dict as returned by `get_params()` in `ParticlesSimulation()`\n The format is:\n keys:\n used as parameter name\n values: (2-elements tuple)\n first element is the parameter value\n second element is a string used as \"title\" (description)\n `attr_params` (dict)\n A dict whose items are stored as attributes in '/parameters'"}
{"_id": "q_8493", "text": "Return pathlib.Path for a data-file with given hash and prefix."}
{"_id": "q_8494", "text": "Return a RandomState, equal to the input unless rs is None.\n\n When rs is None, try to get the random state from the\n 'last_random_state' attribute in `group`. When not available,\n use `seed` to generate a random state. When seed is None the returned\n random state will have a random seed."}
{"_id": "q_8495", "text": "Compact representation of all simulation parameters"}
{"_id": "q_8496", "text": "A dict containing all the simulation numeric-parameters.\n\n The values are 2-element tuples: first element is the value and\n second element is a string describing the parameter (metadata)."}
{"_id": "q_8497", "text": "Print on-disk array sizes required for current set of parameters."}
{"_id": "q_8498", "text": "Simulate Brownian motion trajectories and emission rates.\n\n This method performs the Brownian motion simulation using the current\n set of parameters. Before running this method you can check the\n disk-space requirements using :method:`print_sizes`.\n\n Results are stored to disk in HDF5 format and are accessible\n in `self.emission`, `self.emission_tot` and `self.position` as\n pytables arrays.\n\n Arguments:\n save_pos (bool): if True, save the particles 3D trajectories\n total_emission (bool): if True, store only the total emission array\n containing the sum of emission of all the particles.\n rs (RandomState object): random state object used as random number\n generator. If None, use a random state initialized from seed.\n seed (uint): when `rs` is None, `seed` is used to initialize the\n random state, otherwise is ignored.\n wrap_func (function): the function used to apply the boundary\n condition (use :func:`wrap_periodic` or :func:`wrap_mirror`).\n path (string): a folder where simulation data is saved.\n verbose (bool): if False, prints no output."}
{"_id": "q_8499", "text": "Simulate timestamps from emission trajectories.\n\n Uses attributes: `.t_step`.\n\n Returns:\n A tuple of two arrays: timestamps and particles."}
{"_id": "q_8500", "text": "Compute one timestamps array for a mixture of N populations.\n\n Timestamp data are saved to disk and accessible as pytables arrays in\n `._timestamps` and `._tparticles`.\n The background generated timestamps are assigned a\n conventional particle number (last particle index + 1).\n\n Arguments:\n max_rates (list): list of the peak max emission rate for each\n population.\n populations (list of slices): slices to `self.particles`\n defining each population.\n bg_rate (float, cps): rate for a Poisson background process\n rs (RandomState object): random state object used as random number\n generator. If None, use a random state initialized from seed.\n seed (uint): when `rs` is None, `seed` is used to initialize the\n random state, otherwise is ignored.\n chunksize (int): chunk size used for the on-disk timestamp array\n comp_filter (tables.Filter or None): compression filter to use\n for the on-disk `timestamps` and `tparticles` arrays.\n If None use default compression.\n overwrite (bool): if True, overwrite any pre-existing timestamps\n array. If False, never overwrite. The outcome of simulating an\n existing array is controlled by `skip_existing` flag.\n skip_existing (bool): if True, skip simulation if the same\n timestamps array is already present.\n scale (int): `self.t_step` is multiplied by `scale` to obtain the\n timestamps units in seconds.\n path (string): folder where to save the data.\n timeslice (float or None): timestamps are simulated until\n `timeslice` seconds. If None, simulate until `self.t_max`."}
{"_id": "q_8501", "text": "Merge donor and acceptor timestamps and particle arrays.\n\n Parameters:\n ts_d (array): donor timestamp array\n ts_par_d (array): donor particles array\n ts_a (array): acceptor timestamp array\n ts_par_a (array): acceptor particles array\n\n Returns:\n Arrays: timestamps, acceptor bool mask, timestamp particle"}
{"_id": "q_8502", "text": "Diffusion coefficients of the two specified populations."}
{"_id": "q_8503", "text": "2-tuple of slices for selection of two populations."}
{"_id": "q_8504", "text": "Compute hash of D and A timestamps for single-step D+A case."}
{"_id": "q_8505", "text": "Merge donor and acceptor timestamps, computes `ts`, `a_ch`, `part`."}
{"_id": "q_8506", "text": "Create a smFRET Photon-HDF5 file with current timestamps."}
{"_id": "q_8507", "text": "Print the HDF5 attributes for `node_name`.\n\n Parameters:\n data_file (pytables HDF5 file object): the data file to print\n node_name (string): name of the path inside the file to be printed.\n Can be either a group or a leaf-node. Default: '/', the root node.\n which (string): Valid values are 'user' for user-defined attributes,\n 'sys' for pytables-specific attributes and 'all' to print both\n groups of attributes. Default 'user'.\n compress (bool): if True displays at most a line for each attribute.\n Default False."}
{"_id": "q_8508", "text": "Print all the sub-groups in `group` and leaf-nodes children of `group`.\n\n Parameters:\n data_file (pytables HDF5 file object): the data file to print\n group (string): path name of the group to be printed.\n Default: '/', the root node."}
{"_id": "q_8509", "text": "Train model on given training examples and return the list of costs after each minibatch is processed.\n\n Args:\n trX (list) -- Inputs\n trY (list) -- Outputs\n batch_size (int, optional) -- number of examples in a minibatch (default 64)\n n_epochs (int, optional) -- number of epochs to train for (default 1)\n len_filter (object, optional) -- object to filter training example by length (default LenFilter())\n snapshot_freq (int, optional) -- number of epochs between saving model snapshots (default 1)\n path (str, optional) -- prefix of path where model snapshots are saved.\n If None, no snapshots are saved (default None)\n\n Returns:\n list -- costs of model after processing each minibatch"}
{"_id": "q_8510", "text": "Sets defaults for ``class Meta`` declarations.\n\n Arguments can either be extracted from a `module` (in that case\n all attributes starting from `prefix` are used):\n\n >>> import foo\n >>> configure(foo)\n\n or passed explicitly as keyword arguments:\n\n >>> configure(database='foo')\n\n .. warning:: Current implementation is by no means thread-safe --\n use it wisely."}
{"_id": "q_8511", "text": "Converts a given string from CamelCase to under_score.\n\n >>> to_underscore('FooBar')\n 'foo_bar'"}
{"_id": "q_8512", "text": "Generates a plane on the xz axis of a specific size and resolution.\n Normals and texture coordinates are also included.\n\n Args:\n size: (x, y) tuple\n resolution: (x, y) tuple\n\n Returns:\n A :py:class:`demosys.opengl.vao.VAO` instance"}
{"_id": "q_8513", "text": "Deferred loading of the scene\n\n :param scene: The scene object\n :param file: Resolved path if changed by finder"}
{"_id": "q_8514", "text": "Loads a binary gltf file"}
{"_id": "q_8515", "text": "Pre-parse buffer mappings for each VBO to detect interleaved data for a primitive"}
{"_id": "q_8516", "text": "Does the buffer interleave with this one?"}
{"_id": "q_8517", "text": "Create the VBO"}
{"_id": "q_8518", "text": "Set the 3D position of the camera\n\n :param x: float\n :param y: float\n :param z: float"}
{"_id": "q_8519", "text": "Look at a specific point\n\n :param vec: Vector3 position\n :param pos: python list [x, y, z]\n :return: Camera matrix"}
{"_id": "q_8520", "text": "The standard lookAt method\n\n :param pos: current position\n :param target: target position to look at\n :param up: direction up"}
{"_id": "q_8521", "text": "Set the camera position move state\n\n :param direction: What direction to update\n :param activate: Start or stop moving in the direction"}
{"_id": "q_8522", "text": "Translate string into character texture positions"}
{"_id": "q_8523", "text": "Initialize, load and run\n\n :param manager: The effect manager to use"}
{"_id": "q_8524", "text": "Draw scene and mesh bounding boxes"}
{"_id": "q_8525", "text": "Applies mesh programs to meshes"}
{"_id": "q_8526", "text": "Calculate scene bbox"}
{"_id": "q_8527", "text": "Generates random positions inside a confined box.\n\n Args:\n count (int): Number of points to generate\n\n Keyword Args:\n range_x (tuple): min-max range for x axis: Example (-10.0, 10.0)\n range_y (tuple): min-max range for y axis: Example (-10.0, 10.0)\n range_z (tuple): min-max range for z axis: Example (-10.0, 10.0)\n seed (int): The random seed\n\n Returns:\n A :py:class:`demosys.opengl.vao.VAO` instance"}
{"_id": "q_8528", "text": "Play the music"}
{"_id": "q_8529", "text": "Draw framebuffers for debug purposes.\n We need to supply near and far plane so the depth buffer can be linearized when visualizing.\n\n :param near: Projection near value\n :param far: Projection far value"}
{"_id": "q_8530", "text": "Render light volumes"}
{"_id": "q_8531", "text": "Render outlines of light volumes"}
{"_id": "q_8532", "text": "Load a single shader"}
{"_id": "q_8533", "text": "Load a texture array"}
{"_id": "q_8534", "text": "Draw the mesh using the assigned mesh program\n\n :param projection_matrix: projection_matrix (bytes)\n :param view_matrix: view_matrix (bytes)\n :param camera_matrix: camera_matrix (bytes)"}
{"_id": "q_8535", "text": "Set the current time jumping in the timeline.\n\n Args:\n value (float): The new time"}
{"_id": "q_8536", "text": "Draw function called by the system every frame when the effect is active.\n This method raises ``NotImplementedError`` unless implemented.\n\n Args:\n time (float): The current time in seconds.\n frametime (float): The time the previous frame used to render in seconds.\n target (``moderngl.Framebuffer``): The target FBO for the effect."}
{"_id": "q_8537", "text": "Get a program by its label\n\n Args:\n label (str): The label for the program\n\n Returns: py:class:`moderngl.Program` instance"}
{"_id": "q_8538", "text": "Create a projection matrix with the following parameters.\n When ``aspect_ratio`` is not provided the configured aspect\n ratio for the window will be used.\n\n Args:\n fov (float): Field of view (float)\n near (float): Camera near value\n far (float): Camera far value\n\n Keyword Args:\n aspect_ratio (float): Aspect ratio of the viewport\n\n Returns:\n The projection matrix as a float32 :py:class:`numpy.array`"}
{"_id": "q_8539", "text": "Creates a transformation matrix with rotations and translation.\n\n Args:\n rotation: 3 component vector as a list, tuple, or :py:class:`pyrr.Vector3`\n translation: 3 component vector as a list, tuple, or :py:class:`pyrr.Vector3`\n\n Returns:\n A 4x4 matrix as a :py:class:`numpy.array`"}
{"_id": "q_8540", "text": "Creates a normal matrix from modelview matrix\n\n Args:\n modelview: The modelview matrix\n\n Returns:\n A 3x3 Normal matrix as a :py:class:`numpy.array`"}
{"_id": "q_8541", "text": "Scan for available templates in effect_templates"}
{"_id": "q_8542", "text": "Get the absolute path to the root of the demosys package"}
{"_id": "q_8543", "text": "Load a file in text mode"}
{"_id": "q_8544", "text": "Get a finder class from an import path.\n Raises ``demosys.core.exceptions.ImproperlyConfigured`` if the finder is not found.\n This function uses an lru cache.\n\n :param import_path: string representing an import path\n :return: An instance of the finder"}
{"_id": "q_8545", "text": "Find a file in the path. The file may exist in multiple\n paths. The last found file will be returned.\n\n :param path: The path to find\n :return: The absolute path to the file or None if not found"}
{"_id": "q_8546", "text": "Update the internal projection matrix based on current values\n or values passed in if specified.\n\n :param aspect_ratio: New aspect ratio\n :param fov: New field of view\n :param near: New near value\n :param far: New far value"}
{"_id": "q_8547", "text": "Swap buffers, increment the frame counter and pull events."}
{"_id": "q_8548", "text": "Ensure glfw library version is compatible"}
{"_id": "q_8549", "text": "Translate the buffer format"}
{"_id": "q_8550", "text": "Set the current time. This can be used to jump in the timeline.\n\n Args:\n value (float): The new time"}
{"_id": "q_8551", "text": "Resolve scene loader based on file extension"}
{"_id": "q_8552", "text": "Pyglet specific callback for window resize events."}
{"_id": "q_8553", "text": "Swap buffers, increment frame counter and pull events"}
{"_id": "q_8554", "text": "Creates a sphere.\n\n Keyword Args:\n radius (float): Radius of the sphere\n rings (int): number of horizontal rings\n sectors (int): number of vertical segments\n\n Returns:\n A :py:class:`demosys.opengl.vao.VAO` instance"}
{"_id": "q_8555", "text": "Attempts to assign a loader class to a resource description\n\n :param meta: The resource description instance"}
{"_id": "q_8556", "text": "Attempts to get a loader\n\n :param meta: The resource description instance\n :param raise_on_error: Raise ImproperlyConfigured if the loader cannot be resolved\n :returns: The requested loader class"}
{"_id": "q_8557", "text": "Pyqt specific resize callback."}
{"_id": "q_8558", "text": "Draws a frame. Internally it calls the\n configured timeline's draw method.\n\n Args:\n current_time (float): The current time (preferably always from the configured timer class)\n frame_time (float): The duration of the previous frame in seconds"}
{"_id": "q_8559", "text": "Sets the clear values for the window buffer.\n\n Args:\n red (float): red component\n green (float): green component\n blue (float): blue component\n alpha (float): alpha component\n depth (float): depth value"}
{"_id": "q_8560", "text": "Handles the standard keyboard events such as camera movements,\n taking a screenshot, closing the window etc.\n\n Can be overridden to add new keyboard events. Ensure this method\n is also called if you want to keep the standard features.\n\n Arguments:\n key: The key that was pressed or released\n action: The key action. Can be `ACTION_PRESS` or `ACTION_RELEASE`\n modifier: Modifiers such as holding shift or ctrl"}
{"_id": "q_8561", "text": "The standard mouse movement event method.\n Can be overridden to add new functionality.\n By default this feeds the system camera with new values.\n\n Args:\n x: The current mouse x position\n y: The current mouse y position\n dx: Delta x position (x position difference from the previous event)\n dy: Delta y position (y position difference from the previous event)"}
{"_id": "q_8562", "text": "Start the timer"}
{"_id": "q_8563", "text": "Toggle pause mode"}
{"_id": "q_8564", "text": "Check if the loader has a supported file extension"}
{"_id": "q_8565", "text": "Get or create a Track object.\n\n :param name: Name of the track\n :return: Track object"}
{"_id": "q_8566", "text": "Get all command names in a folder\n\n :return: List of command names"}
{"_id": "q_8567", "text": "Override settings values"}
{"_id": "q_8568", "text": "Hack in program directory"}
{"_id": "q_8569", "text": "Hack in texture directory"}
{"_id": "q_8570", "text": "Render the VAO.\n\n Args:\n program: The ``moderngl.Program``\n\n Keyword Args:\n mode: Override the draw mode (``TRIANGLES`` etc)\n vertices (int): The number of vertices to transform\n first (int): The index of the first vertex to start with\n instances (int): The number of instances"}
{"_id": "q_8571", "text": "Obtain the ``moderngl.VertexArray`` instance for the program.\n The instance is only created once and cached internally.\n\n Returns: ``moderngl.VertexArray`` instance"}
{"_id": "q_8572", "text": "Draw code for the mesh. Should be overridden.\n\n :param projection_matrix: projection_matrix (bytes)\n :param view_matrix: view_matrix (bytes)\n :param camera_matrix: camera_matrix (bytes)\n :param time: The current time"}
{"_id": "q_8573", "text": "Parse the effect package string.\n Can contain the package python path or path to effect class in an effect package.\n\n Examples::\n\n # Path to effect package\n examples.cubes\n\n # Path to effect class\n examples.cubes.Cubes\n\n Args:\n path: python path to effect package. May also include effect class name.\n\n Returns:\n tuple: (package_path, effect_class)"}
{"_id": "q_8574", "text": "Get all resources registed in effect packages.\n These are typically located in ``resources.py``"}
{"_id": "q_8575", "text": "Registers a single package\n\n :param name: (str) The effect package to add"}
{"_id": "q_8576", "text": "Get a package by python path. Can also contain path to an effect.\n\n Args:\n name (str): Path to effect package or effect\n\n Returns:\n The requested EffectPackage\n\n Raises:\n EffectError when no package is found"}
{"_id": "q_8577", "text": "Returns the runnable effect in the package"}
{"_id": "q_8578", "text": "Find the effect package"}
{"_id": "q_8579", "text": "Iterate the module attributes picking out effects"}
{"_id": "q_8580", "text": "Fetch the resource list"}
{"_id": "q_8581", "text": "Fetch track value for every runnable effect.\r\n If the value is > 0.5 we draw it."}
{"_id": "q_8582", "text": "Load a 2d texture"}
{"_id": "q_8583", "text": "Initialize a single glsl string containing all shaders"}
{"_id": "q_8584", "text": "Initialize multiple shader strings"}
{"_id": "q_8585", "text": "Loads this project instance"}
{"_id": "q_8586", "text": "Reload all shader programs with the reloadable flag set"}
{"_id": "q_8587", "text": "Get components and bytes for an image"}
{"_id": "q_8588", "text": "Write manage.py in the current directory"}
{"_id": "q_8589", "text": "Returns the absolute path to template directory"}
{"_id": "q_8590", "text": "Resolve program loader"}
{"_id": "q_8591", "text": "Encode a text using arithmetic coding with the provided probabilities.\n\n This is a wrapper for :py:meth:`Arithmetic.encode`.\n\n Parameters\n ----------\n text : str\n A string to encode\n probs : dict\n A probability statistics dictionary generated by\n :py:meth:`Arithmetic.train`\n\n Returns\n -------\n tuple\n The arithmetically coded text\n\n Example\n -------\n >>> pr = ac_train('the quick brown fox jumped over the lazy dog')\n >>> ac_encode('align', pr)\n (16720586181, 34)"}
{"_id": "q_8592", "text": "r\"\"\"Generate a probability dict from the provided text.\n\n Text to 0-order probability statistics as a dict\n\n Parameters\n ----------\n text : str\n The text data over which to calculate probability statistics. This\n must not contain the NUL (0x00) character because that is used to\n indicate the end of data.\n\n Example\n -------\n >>> ac = Arithmetic()\n >>> ac.train('the quick brown fox jumped over the lazy dog')\n >>> ac.get_probs()\n {' ': (Fraction(0, 1), Fraction(8, 45)),\n 'o': (Fraction(8, 45), Fraction(4, 15)),\n 'e': (Fraction(4, 15), Fraction(16, 45)),\n 'u': (Fraction(16, 45), Fraction(2, 5)),\n 't': (Fraction(2, 5), Fraction(4, 9)),\n 'r': (Fraction(4, 9), Fraction(22, 45)),\n 'h': (Fraction(22, 45), Fraction(8, 15)),\n 'd': (Fraction(8, 15), Fraction(26, 45)),\n 'z': (Fraction(26, 45), Fraction(3, 5)),\n 'y': (Fraction(3, 5), Fraction(28, 45)),\n 'x': (Fraction(28, 45), Fraction(29, 45)),\n 'w': (Fraction(29, 45), Fraction(2, 3)),\n 'v': (Fraction(2, 3), Fraction(31, 45)),\n 'q': (Fraction(31, 45), Fraction(32, 45)),\n 'p': (Fraction(32, 45), Fraction(11, 15)),\n 'n': (Fraction(11, 15), Fraction(34, 45)),\n 'm': (Fraction(34, 45), Fraction(7, 9)),\n 'l': (Fraction(7, 9), Fraction(4, 5)),\n 'k': (Fraction(4, 5), Fraction(37, 45)),\n 'j': (Fraction(37, 45), Fraction(38, 45)),\n 'i': (Fraction(38, 45), Fraction(13, 15)),\n 'g': (Fraction(13, 15), Fraction(8, 9)),\n 'f': (Fraction(8, 9), Fraction(41, 45)),\n 'c': (Fraction(41, 45), Fraction(14, 15)),\n 'b': (Fraction(14, 15), Fraction(43, 45)),\n 'a': (Fraction(43, 45), Fraction(44, 45)),\n '\\x00': (Fraction(44, 45), Fraction(1, 1))}"}
{"_id": "q_8593", "text": "r\"\"\"Fill in self.ngcorpus from a Corpus argument.\n\n Parameters\n ----------\n corpus :Corpus\n The Corpus from which to initialize the n-gram corpus\n n_val : int\n Maximum n value for n-grams\n bos : str\n String to insert as an indicator of beginning of sentence\n eos : str\n String to insert as an indicator of end of sentence\n\n Raises\n ------\n TypeError\n Corpus argument of the Corpus class required.\n\n Example\n -------\n >>> tqbf = 'The quick brown fox jumped over the lazy dog.\\n'\n >>> tqbf += 'And then it slept.\\n And the dog ran off.'\n >>> ngcorp = NGramCorpus()\n >>> ngcorp.corpus_importer(Corpus(tqbf))"}
{"_id": "q_8594", "text": "Build up a corpus entry recursively.\n\n Parameters\n ----------\n corpus : Corpus\n The corpus\n words : [str]\n Words to add to the corpus\n count : int\n Count of words"}
{"_id": "q_8595", "text": "Fill in self.ngcorpus from a Google NGram corpus file.\n\n Parameters\n ----------\n corpus_file : file\n The Google NGram file from which to initialize the n-gram corpus"}
{"_id": "q_8596", "text": "r\"\"\"Return term frequency.\n\n Parameters\n ----------\n term : str\n The term for which to calculate tf\n\n Returns\n -------\n float\n The term frequency (tf)\n\n Raises\n ------\n ValueError\n tf can only calculate the frequency of individual words\n\n Examples\n --------\n >>> tqbf = 'The quick brown fox jumped over the lazy dog.\\n'\n >>> tqbf += 'And then it slept.\\n And the dog ran off.'\n >>> ngcorp = NGramCorpus(Corpus(tqbf))\n >>> NGramCorpus(Corpus(tqbf)).tf('the')\n 1.3010299956639813\n >>> NGramCorpus(Corpus(tqbf)).tf('fox')\n 1.0"}
{"_id": "q_8597", "text": "r\"\"\"Return a word decoded from BWT form.\n\n Parameters\n ----------\n code : str\n The word to transform from BWT form\n terminator : str\n A character added to signal the end of the string\n\n Returns\n -------\n str\n Word decoded by BWT\n\n Raises\n ------\n ValueError\n Specified terminator absent from code.\n\n Examples\n --------\n >>> bwt = BWT()\n >>> bwt.decode('n\\x00ilag')\n 'align'\n >>> bwt.decode('annb\\x00aa')\n 'banana'\n >>> bwt.decode('annb@aa', '@')\n 'banana'"}
{"_id": "q_8598", "text": "Return the indel distance between two strings.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n int\n Indel distance\n\n Examples\n --------\n >>> cmp = Indel()\n >>> cmp.dist_abs('cat', 'hat')\n 2\n >>> cmp.dist_abs('Niall', 'Neil')\n 3\n >>> cmp.dist_abs('Colin', 'Cuilen')\n 5\n >>> cmp.dist_abs('ATCG', 'TAGC')\n 4"}
{"_id": "q_8599", "text": "Return the normalized indel distance between two strings.\n\n This is equivalent to normalized Levenshtein distance, when only\n inserts and deletes are possible.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Normalized indel distance\n\n Examples\n --------\n >>> cmp = Indel()\n >>> round(cmp.dist('cat', 'hat'), 12)\n 0.333333333333\n >>> round(cmp.dist('Niall', 'Neil'), 12)\n 0.333333333333\n >>> round(cmp.dist('Colin', 'Cuilen'), 12)\n 0.454545454545\n >>> cmp.dist('ATCG', 'TAGC')\n 0.5"}
{"_id": "q_8600", "text": "Return similarity.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n *args\n Variable length argument list.\n **kwargs\n Arbitrary keyword arguments.\n\n Returns\n -------\n float\n Similarity"}
{"_id": "q_8601", "text": "Return the Tversky distance between two strings.\n\n This is a wrapper for :py:meth:`Tversky.dist`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n alpha : float\n Tversky index parameter as described above\n beta : float\n Tversky index parameter as described above\n bias : float\n The symmetric Tversky index bias parameter\n\n Returns\n -------\n float\n Tversky distance\n\n Examples\n --------\n >>> dist_tversky('cat', 'hat')\n 0.6666666666666667\n >>> dist_tversky('Niall', 'Neil')\n 0.7777777777777778\n >>> dist_tversky('aluminum', 'Catalan')\n 0.9375\n >>> dist_tversky('ATCG', 'TAGC')\n 1.0"}
{"_id": "q_8602", "text": "Return the longest common subsequence of two strings.\n\n Based on the dynamic programming algorithm from\n http://rosettacode.org/wiki/Longest_common_subsequence\n :cite:`rosettacode:2018b`. This is licensed GFDL 1.2.\n\n Modifications include:\n conversion to a numpy array in place of a list of lists\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n str\n The longest common subsequence\n\n Examples\n --------\n >>> sseq = LCSseq()\n >>> sseq.lcsseq('cat', 'hat')\n 'at'\n >>> sseq.lcsseq('Niall', 'Neil')\n 'Nil'\n >>> sseq.lcsseq('aluminum', 'Catalan')\n 'aln'\n >>> sseq.lcsseq('ATCG', 'TAGC')\n 'AC'"}
{"_id": "q_8603", "text": "Return the prefix similarity of two strings.\n\n Prefix similarity is the ratio of the length of the shorter term that\n exactly matches the longer term to the length of the shorter term,\n beginning at the start of both terms.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Prefix similarity\n\n Examples\n --------\n >>> cmp = Prefix()\n >>> cmp.sim('cat', 'hat')\n 0.0\n >>> cmp.sim('Niall', 'Neil')\n 0.25\n >>> cmp.sim('aluminum', 'Catalan')\n 0.0\n >>> cmp.sim('ATCG', 'TAGC')\n 0.0"}
{"_id": "q_8604", "text": "r\"\"\"Return the raw corpus.\n\n This is reconstructed by joining sub-components with the corpus' split\n characters\n\n Returns\n -------\n str\n The raw corpus\n\n Example\n -------\n >>> tqbf = 'The quick brown fox jumped over the lazy dog.\\n'\n >>> tqbf += 'And then it slept.\\n And the dog ran off.'\n >>> corp = Corpus(tqbf)\n >>> print(corp.raw())\n The quick brown fox jumped over the lazy dog.\n And then it slept.\n And the dog ran off.\n >>> len(corp.raw())\n 85"}
{"_id": "q_8605", "text": "Return the best guess language ID for the word and language choices.\n\n Parameters\n ----------\n name : str\n The term to guess the language of\n name_mode : str\n The name mode of the algorithm: ``gen`` (default),\n ``ash`` (Ashkenazi), or ``sep`` (Sephardic)\n\n Returns\n -------\n int\n Language ID"}
{"_id": "q_8606", "text": "Reassess the language of the terms and call the phonetic encoder.\n\n Uses a split multi-word term.\n\n Parameters\n ----------\n term : str\n The term to encode via Beider-Morse\n name_mode : str\n The name mode of the algorithm: ``gen`` (default),\n ``ash`` (Ashkenazi), or ``sep`` (Sephardic)\n rules : tuple\n The set of initial phonetic transform regexps\n final_rules1 : tuple\n The common set of final phonetic transform regexps\n final_rules2 : tuple\n The specific set of final phonetic transform regexps\n concat : bool\n A flag to indicate concatenation\n\n Returns\n -------\n str\n A Beider-Morse phonetic code"}
{"_id": "q_8607", "text": "Apply a set of final rules to the phonetic encoding.\n\n Parameters\n ----------\n phonetic : str\n The term to which to apply the final rules\n final_rules : tuple\n The set of final phonetic transform regexps\n language_arg : int\n An integer representing the target language of the phonetic\n encoding\n strip : bool\n Flag to indicate whether to normalize the language attributes\n\n Returns\n -------\n str\n A Beider-Morse phonetic code"}
{"_id": "q_8608", "text": "Expand phonetic alternates separated by |s.\n\n Parameters\n ----------\n phonetic : str\n A Beider-Morse phonetic encoding\n\n Returns\n -------\n str\n A Beider-Morse phonetic code"}
{"_id": "q_8609", "text": "Remove duplicates from a phonetic encoding list.\n\n Parameters\n ----------\n phonetic : str\n A Beider-Morse phonetic encoding\n\n Returns\n -------\n str\n A Beider-Morse phonetic code"}
{"_id": "q_8610", "text": "Remove embedded bracketed attributes.\n\n This (potentially) bitwise-ands bracketed attributes together and adds\n to the end.\n This is applied to a single alternative at a time -- not to a\n parenthesized list.\n It removes all embedded bracketed attributes, logically-ands them\n together, and places them at the end.\n However if strip is true, this can indeed remove embedded bracketed\n attributes from a parenthesized list.\n\n Parameters\n ----------\n text : str\n A Beider-Morse phonetic encoding (in progress)\n strip : bool\n Remove the bracketed attributes (and throw away)\n\n Returns\n -------\n str\n A Beider-Morse phonetic code\n\n Raises\n ------\n ValueError\n No closing square bracket"}
{"_id": "q_8611", "text": "Apply a phonetic regex if compatible.\n\n Tests for compatible language rules. To do so, apply the rule, expand\n the results, and detect alternatives with incompatible attributes.\n Then drop each alternative that has incompatible attributes and keep\n those that are compatible. If there are no compatible alternatives\n left, return false; otherwise return the compatible alternatives.\n\n Parameters\n ----------\n phonetic : str\n The Beider-Morse phonetic encoding (so far)\n target : str\n A proposed addition to the phonetic encoding\n language_arg : int\n An integer representing the target language of the phonetic\n encoding\n\n Returns\n -------\n str\n A candidate encoding"}
{"_id": "q_8612", "text": "Return the index value for a language code.\n\n This returns l_any if more than one code is specified or the code is\n out of bounds.\n\n Parameters\n ----------\n code : int\n The language code to interpret\n name_mode : str\n The name mode of the algorithm: ``gen`` (default),\n ``ash`` (Ashkenazi), or ``sep`` (Sephardic)\n\n Returns\n -------\n int\n Language code index"}
{"_id": "q_8613", "text": "Return the strcmp95 distance between two strings.\n\n This is a wrapper for :py:meth:`Strcmp95.dist`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n long_strings : bool\n Set to True to increase the probability of a match when the number of\n matched characters is large. This option allows for a little more\n tolerance when the strings are large. It is not an appropriate test\n when comparing fixed length fields such as phone and social security\n numbers.\n\n Returns\n -------\n float\n Strcmp95 distance\n\n Examples\n --------\n >>> round(dist_strcmp95('cat', 'hat'), 12)\n 0.222222222222\n >>> round(dist_strcmp95('Niall', 'Neil'), 12)\n 0.1545\n >>> round(dist_strcmp95('aluminum', 'Catalan'), 12)\n 0.345238095238\n >>> round(dist_strcmp95('ATCG', 'TAGC'), 12)\n 0.166666666667"}
{"_id": "q_8614", "text": "Return the Naval Research Laboratory phonetic encoding of a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n\n Returns\n -------\n str\n The NRL phonetic encoding\n\n Examples\n --------\n >>> pe = NRL()\n >>> pe.encode('the')\n 'DHAX'\n >>> pe.encode('round')\n 'rAWnd'\n >>> pe.encode('quick')\n 'kwIHk'\n >>> pe.encode('eaten')\n 'IYtEHn'\n >>> pe.encode('Smith')\n 'smIHTH'\n >>> pe.encode('Larsen')\n 'lAArsEHn'"}
{"_id": "q_8615", "text": "Return the longest common substring of two strings.\n\n Longest common substring (LCSstr).\n\n Based on the code from\n https://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Longest_common_substring\n :cite:`Wikibooks:2018`.\n This is licensed Creative Commons: Attribution-ShareAlike 3.0.\n\n Modifications include:\n\n - conversion to a numpy array in place of a list of lists\n - conversion to Python 2/3-safe range from xrange via six\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n str\n The longest common substring\n\n Examples\n --------\n >>> sstr = LCSstr()\n >>> sstr.lcsstr('cat', 'hat')\n 'at'\n >>> sstr.lcsstr('Niall', 'Neil')\n 'N'\n >>> sstr.lcsstr('aluminum', 'Catalan')\n 'al'\n >>> sstr.lcsstr('ATCG', 'TAGC')\n 'A'"}
{"_id": "q_8616", "text": "r\"\"\"Return the longest common substring similarity of two strings.\n\n Longest common substring similarity (:math:`sim_{LCSstr}`).\n\n This employs the LCS function to derive a similarity metric:\n :math:`sim_{LCSstr}(s,t) = \\frac{|LCSstr(s,t)|}{max(|s|, |t|)}`\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n LCSstr similarity\n\n Examples\n --------\n >>> sim_lcsstr('cat', 'hat')\n 0.6666666666666666\n >>> sim_lcsstr('Niall', 'Neil')\n 0.2\n >>> sim_lcsstr('aluminum', 'Catalan')\n 0.25\n >>> sim_lcsstr('ATCG', 'TAGC')\n 0.25"}
{"_id": "q_8617", "text": "Return the Needleman-Wunsch score of two strings.\n\n This is a wrapper for :py:meth:`NeedlemanWunsch.dist_abs`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n gap_cost : float\n The cost of an alignment gap (1 by default)\n sim_func : function\n A function that returns the similarity of two characters (identity\n similarity by default)\n\n Returns\n -------\n float\n Needleman-Wunsch score\n\n Examples\n --------\n >>> needleman_wunsch('cat', 'hat')\n 2.0\n >>> needleman_wunsch('Niall', 'Neil')\n 1.0\n >>> needleman_wunsch('aluminum', 'Catalan')\n -1.0\n >>> needleman_wunsch('ATCG', 'TAGC')\n 0.0"}
{"_id": "q_8618", "text": "Return the matrix similarity of two strings.\n\n With the default parameters, this is identical to sim_ident.\n It is possible for sim_matrix to return values outside of the range\n :math:`[0, 1]`, if values outside that range are present in mat,\n mismatch_cost, or match_cost.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n mat : dict\n A dict mapping tuples to costs; the tuples are (src, tar) pairs of\n symbols from the alphabet parameter\n mismatch_cost : float\n The value returned if (src, tar) is absent from mat when src does\n not equal tar\n match_cost : float\n The value returned if (src, tar) is absent from mat when src equals\n tar\n symmetric : bool\n True if the cost of src not matching tar is identical to the cost\n of tar not matching src; in this case, the values in mat need only\n contain (src, tar) or (tar, src), not both\n alphabet : str\n A collection of tokens from which src and tar are drawn; if this is\n defined a ValueError is raised if either tar or src is not found in\n alphabet\n\n Returns\n -------\n float\n Matrix similarity\n\n Raises\n ------\n ValueError\n src value not in alphabet\n ValueError\n tar value not in alphabet\n\n Examples\n --------\n >>> NeedlemanWunsch.sim_matrix('cat', 'hat')\n 0\n >>> NeedlemanWunsch.sim_matrix('hat', 'hat')\n 1"}
{"_id": "q_8619", "text": "Return the NCD between two strings using BWT plus RLE.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Compression distance\n\n Examples\n --------\n >>> cmp = NCDbwtrle()\n >>> cmp.dist('cat', 'hat')\n 0.75\n >>> cmp.dist('Niall', 'Neil')\n 0.8333333333333334\n >>> cmp.dist('aluminum', 'Catalan')\n 1.0\n >>> cmp.dist('ATCG', 'TAGC')\n 0.8"}
{"_id": "q_8620", "text": "Cast to tuple.\n\n Returns\n -------\n tuple\n The confusion table as a 4-tuple (tp, tn, fp, fn)\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.to_tuple()\n (120, 60, 20, 30)"}
{"_id": "q_8621", "text": "Cast to dict.\n\n Returns\n -------\n dict\n The confusion table as a dict\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> import pprint\n >>> pprint.pprint(ct.to_dict())\n {'fn': 30, 'fp': 20, 'tn': 60, 'tp': 120}"}
{"_id": "q_8622", "text": "Return population, N.\n\n Returns\n -------\n int\n The population (N) of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.population()\n 230"}
{"_id": "q_8623", "text": "r\"\"\"Return precision.\n\n Precision is defined as :math:`\\frac{tp}{tp + fp}`\n\n AKA positive predictive value (PPV)\n\n Cf. https://en.wikipedia.org/wiki/Precision_and_recall\n\n Cf. https://en.wikipedia.org/wiki/Information_retrieval#Precision\n\n Returns\n -------\n float\n The precision of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.precision()\n 0.8571428571428571"}
{"_id": "q_8624", "text": "r\"\"\"Return gain in precision.\n\n The gain in precision is defined as:\n :math:`G(precision) = \\frac{precision}{random~ precision}`\n\n Cf. https://en.wikipedia.org/wiki/Gain_(information_retrieval)\n\n Returns\n -------\n float\n The gain in precision of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.precision_gain()\n 1.3142857142857143"}
{"_id": "q_8625", "text": "r\"\"\"Return recall.\n\n Recall is defined as :math:`\\frac{tp}{tp + fn}`\n\n AKA sensitivity\n\n AKA true positive rate (TPR)\n\n Cf. https://en.wikipedia.org/wiki/Precision_and_recall\n\n Cf. https://en.wikipedia.org/wiki/Sensitivity_(test)\n\n Cf. https://en.wikipedia.org/wiki/Information_retrieval#Recall\n\n Returns\n -------\n float\n The recall of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.recall()\n 0.8"}
{"_id": "q_8626", "text": "r\"\"\"Return accuracy.\n\n Accuracy is defined as :math:`\\frac{tp + tn}{population}`\n\n Cf. https://en.wikipedia.org/wiki/Accuracy\n\n Returns\n -------\n float\n The accuracy of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.accuracy()\n 0.782608695652174"}
{"_id": "q_8627", "text": "r\"\"\"Return gain in accuracy.\n\n The gain in accuracy is defined as:\n :math:`G(accuracy) = \\frac{accuracy}{random~ accuracy}`\n\n Cf. https://en.wikipedia.org/wiki/Gain_(information_retrieval)\n\n Returns\n -------\n float\n The gain in accuracy of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.accuracy_gain()\n 1.4325259515570934"}
{"_id": "q_8628", "text": "r\"\"\"Return logarithmic mean of precision & recall.\n\n The logarithmic mean is:\n 0 if either precision or recall is 0,\n the precision if they are equal,\n otherwise :math:`\\frac{precision - recall}\n {ln(precision) - ln(recall)}`\n\n Cf. https://en.wikipedia.org/wiki/Logarithmic_mean\n\n Returns\n -------\n float\n The logarithmic mean of the confusion table's precision & recall\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.pr_lmean()\n 0.8282429171492667"}
{"_id": "q_8629", "text": "Return CLEF German stem.\n\n Parameters\n ----------\n word : str\n The word to stem\n\n Returns\n -------\n str\n Word stem\n\n Examples\n --------\n >>> stmr = CLEFGerman()\n >>> stmr.stem('lesen')\n 'lese'\n >>> stmr.stem('graues')\n 'grau'\n >>> stmr.stem('buchstabieren')\n 'buchstabier'"}
{"_id": "q_8630", "text": "Return the \"simplest\" Sift4 distance between two terms.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n max_offset : int\n The number of characters to search for matching letters\n\n Returns\n -------\n int\n The Sift4 distance according to the simplest formula\n\n Examples\n --------\n >>> cmp = Sift4Simplest()\n >>> cmp.dist_abs('cat', 'hat')\n 1\n >>> cmp.dist_abs('Niall', 'Neil')\n 2\n >>> cmp.dist_abs('Colin', 'Cuilen')\n 3\n >>> cmp.dist_abs('ATCG', 'TAGC')\n 2"}
{"_id": "q_8631", "text": "Return the normalized typo similarity between two strings.\n\n This is a wrapper for :py:meth:`Typo.sim`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n metric : str\n Supported values include: ``euclidean``, ``manhattan``,\n ``log-euclidean``, and ``log-manhattan``\n cost : tuple\n A 4-tuple representing the cost of the four possible edits: inserts,\n deletes, substitutions, and shift, respectively (by default:\n (1, 1, 0.5, 0.5)) The substitution & shift costs should be\n significantly less than the cost of an insertion & deletion unless a\n log metric is used.\n layout : str\n Name of the keyboard layout to use (Currently supported:\n ``QWERTY``, ``Dvorak``, ``AZERTY``, ``QWERTZ``)\n\n Returns\n -------\n float\n Normalized typo similarity\n\n Examples\n --------\n >>> round(sim_typo('cat', 'hat'), 12)\n 0.472953716914\n >>> round(sim_typo('Niall', 'Neil'), 12)\n 0.434971857071\n >>> round(sim_typo('Colin', 'Cuilen'), 12)\n 0.430964390437\n >>> sim_typo('ATCG', 'TAGC')\n 0.375"}
{"_id": "q_8632", "text": "Return the normalized Manhattan distance between two strings.\n\n This is a wrapper for :py:meth:`Manhattan.dist`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n alphabet : collection or int\n The values or size of the alphabet\n\n Returns\n -------\n float\n The normalized Manhattan distance\n\n Examples\n --------\n >>> dist_manhattan('cat', 'hat')\n 0.5\n >>> round(dist_manhattan('Niall', 'Neil'), 12)\n 0.636363636364\n >>> round(dist_manhattan('Colin', 'Cuilen'), 12)\n 0.692307692308\n >>> dist_manhattan('ATCG', 'TAGC')\n 1.0"}
{"_id": "q_8633", "text": "Return the normalized Manhattan similarity of two strings.\n\n This is a wrapper for :py:meth:`Manhattan.sim`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n alphabet : collection or int\n The values or size of the alphabet\n\n Returns\n -------\n float\n The normalized Manhattan similarity\n\n Examples\n --------\n >>> sim_manhattan('cat', 'hat')\n 0.5\n >>> round(sim_manhattan('Niall', 'Neil'), 12)\n 0.363636363636\n >>> round(sim_manhattan('Colin', 'Cuilen'), 12)\n 0.307692307692\n >>> sim_manhattan('ATCG', 'TAGC')\n 0.0"}
{"_id": "q_8634", "text": "Return the skeleton key.\n\n Parameters\n ----------\n word : str\n The word to transform into its skeleton key\n\n Returns\n -------\n str\n The skeleton key\n\n Examples\n --------\n >>> sk = SkeletonKey()\n >>> sk.fingerprint('The quick brown fox jumped over the lazy dog.')\n 'THQCKBRWNFXJMPDVLZYGEUIOA'\n >>> sk.fingerprint('Christopher')\n 'CHRSTPIOE'\n >>> sk.fingerprint('Niall')\n 'NLIA'"}
{"_id": "q_8635", "text": "Calculate the pairwise similarity statistics of a collection of strings.\n\n Calculate pairwise similarities among members of two collections,\n returning the maximum, minimum, mean (according to a supplied function,\n arithmetic mean, by default), and (population) standard deviation\n of those similarities.\n\n Parameters\n ----------\n src_collection : list\n A collection of terms or a string that can be split\n tar_collection : list\n A collection of terms or a string that can be split\n metric : function\n A similarity metric function\n mean_func : function\n A mean function that takes a list of values and returns a float\n symmetric : bool\n Set to True if all pairwise similarities should be calculated in both\n directions\n\n Returns\n -------\n tuple\n The max, min, mean, and standard deviation of similarities\n\n Raises\n ------\n ValueError\n mean_func must be a function\n ValueError\n metric must be a function\n ValueError\n src_collection is neither a string nor iterable\n ValueError\n tar_collection is neither a string nor iterable\n\n Example\n -------\n >>> tuple(round(_, 12) for _ in pairwise_similarity_statistics(\n ... ['Christopher', 'Kristof', 'Christobal'], ['Niall', 'Neal', 'Neil']))\n (0.2, 0.0, 0.118614718615, 0.075070477184)"}
{"_id": "q_8636", "text": "Return the R2 region, as defined in the Porter2 specification.\n\n Parameters\n ----------\n term : str\n The term to examine\n r1_prefixes : set\n Prefixes to consider\n\n Returns\n -------\n int\n Length of the R2 region"}
{"_id": "q_8637", "text": "Return True iff term ends in a short syllable.\n\n (...according to the Porter2 specification.)\n\n NB: This is akin to the CVC test from the Porter stemmer. The\n description is unfortunately poor/ambiguous.\n\n Parameters\n ----------\n term : str\n The term to examine\n\n Returns\n -------\n bool\n True iff term ends in a short syllable"}
{"_id": "q_8638", "text": "Return True iff term is a short word.\n\n (...according to the Porter2 specification.)\n\n Parameters\n ----------\n term : str\n The term to examine\n r1_prefixes : set\n Prefixes to consider\n\n Returns\n -------\n bool\n True iff term is a short word"}
{"_id": "q_8639", "text": "Return the eudex phonetic hash of a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n max_length : int\n The length in bits of the code returned (default 8)\n\n Returns\n -------\n int\n The eudex hash\n\n Examples\n --------\n >>> pe = Eudex()\n >>> pe.encode('Colin')\n 432345564238053650\n >>> pe.encode('Christopher')\n 433648490138894409\n >>> pe.encode('Niall')\n 648518346341351840\n >>> pe.encode('Smith')\n 720575940412906756\n >>> pe.encode('Schmidt')\n 720589151732307997"}
{"_id": "q_8640", "text": "Return the Q-Grams in src & tar.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n skip : int\n The number of characters to skip (only works when src and tar are\n strings)\n\n Returns\n -------\n tuple of Counters\n Q-Grams\n\n Examples\n --------\n >>> pe = _TokenDistance()\n >>> pe._get_qgrams('AT', 'TT', qval=2)\n (QGrams({'$A': 1, 'AT': 1, 'T#': 1}),\n QGrams({'$T': 1, 'TT': 1, 'T#': 1}))"}
{"_id": "q_8641", "text": "Return the Levenshtein similarity of two strings.\n\n This is a wrapper of :py:meth:`Levenshtein.sim`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n mode : str\n Specifies a mode for computing the Levenshtein distance:\n\n - ``lev`` (default) computes the ordinary Levenshtein distance, in\n which edits may include inserts, deletes, and substitutions\n - ``osa`` computes the Optimal String Alignment distance, in which\n edits may include inserts, deletes, substitutions, and\n transpositions but substrings may only be edited once\n\n cost : tuple\n A 4-tuple representing the cost of the four possible edits: inserts,\n deletes, substitutions, and transpositions, respectively (by default:\n (1, 1, 1, 1))\n\n Returns\n -------\n float\n The Levenshtein similarity between src & tar\n\n Examples\n --------\n >>> round(sim_levenshtein('cat', 'hat'), 12)\n 0.666666666667\n >>> round(sim_levenshtein('Niall', 'Neil'), 12)\n 0.4\n >>> sim_levenshtein('aluminum', 'Catalan')\n 0.125\n >>> sim_levenshtein('ATCG', 'TAGC')\n 0.25"}
{"_id": "q_8642", "text": "Return the omission key.\n\n Parameters\n ----------\n word : str\n The word to transform into its omission key\n\n Returns\n -------\n str\n The omission key\n\n Examples\n --------\n >>> ok = OmissionKey()\n >>> ok.fingerprint('The quick brown fox jumped over the lazy dog.')\n 'JKQXZVWYBFMGPDHCLNTREUIOA'\n >>> ok.fingerprint('Christopher')\n 'PHCTSRIOE'\n >>> ok.fingerprint('Niall')\n 'LNIA'"}
{"_id": "q_8643", "text": "Return the Monge-Elkan distance between two strings.\n\n This is a wrapper for :py:meth:`MongeElkan.dist`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n sim_func : function\n The internal similarity metric to employ\n symmetric : bool\n Return a symmetric similarity measure\n\n Returns\n -------\n float\n Monge-Elkan distance\n\n Examples\n --------\n >>> dist_monge_elkan('cat', 'hat')\n 0.25\n >>> round(dist_monge_elkan('Niall', 'Neil'), 12)\n 0.333333333333\n >>> round(dist_monge_elkan('aluminum', 'Catalan'), 12)\n 0.611111111111\n >>> dist_monge_elkan('ATCG', 'TAGC')\n 0.5"}
{"_id": "q_8644", "text": "Return the Phonem code for a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n\n Returns\n -------\n str\n The Phonem value\n\n Examples\n --------\n >>> pe = Phonem()\n >>> pe.encode('Christopher')\n 'CRYSDOVR'\n >>> pe.encode('Niall')\n 'NYAL'\n >>> pe.encode('Smith')\n 'SMYD'\n >>> pe.encode('Schmidt')\n 'CMYD'"}
{"_id": "q_8645", "text": "Return CLEF Swedish stem.\n\n Parameters\n ----------\n word : str\n The word to stem\n\n Returns\n -------\n str\n Word stem\n\n Examples\n --------\n >>> clef_swedish('undervisa')\n 'undervis'\n >>> clef_swedish('suspension')\n 'suspensio'\n >>> clef_swedish('visshet')\n 'viss'"}
{"_id": "q_8646", "text": "Undouble endings -kk, -dd, and -tt.\n\n Parameters\n ----------\n word : str\n The word to stem\n\n Returns\n -------\n str\n The word with doubled endings undoubled"}
{"_id": "q_8647", "text": "Convert IPA to features.\n\n This translates an IPA string of one or more phones to a list of ints\n representing the features of the string.\n\n Parameters\n ----------\n ipa : str\n The IPA representation of a phone or series of phones\n\n Returns\n -------\n list of ints\n A representation of the features of the input string\n\n Examples\n --------\n >>> ipa_to_features('mut')\n [2709662981243185770, 1825831513894594986, 2783230754502126250]\n >>> ipa_to_features('fon')\n [2781702983095331242, 1825831531074464170, 2711173160463936106]\n >>> ipa_to_features('telz')\n [2783230754502126250, 1826957430176000426, 2693158761954453926,\n 2783230754501863834]"}
{"_id": "q_8648", "text": "Get a feature vector.\n\n This returns a list of ints, equal in length to the vector input,\n representing presence/absence/neutrality with respect to a particular\n phonetic feature.\n\n Parameters\n ----------\n vector : list\n A tuple or list of ints representing the phonetic features of a phone\n or series of phones (such as is returned by the ipa_to_features\n function)\n feature : str\n A feature name from the set:\n\n - ``consonantal``\n - ``sonorant``\n - ``syllabic``\n - ``labial``\n - ``round``\n - ``coronal``\n - ``anterior``\n - ``distributed``\n - ``dorsal``\n - ``high``\n - ``low``\n - ``back``\n - ``tense``\n - ``pharyngeal``\n - ``ATR``\n - ``voice``\n - ``spread_glottis``\n - ``constricted_glottis``\n - ``continuant``\n - ``strident``\n - ``lateral``\n - ``delayed_release``\n - ``nasal``\n\n Returns\n -------\n list of ints\n A list indicating presence/absence/neutrality with respect to the\n feature\n\n Raises\n ------\n AttributeError\n feature must be one of ...\n\n Examples\n --------\n >>> tails = ipa_to_features('telz')\n >>> get_feature(tails, 'consonantal')\n [1, -1, 1, 1]\n >>> get_feature(tails, 'sonorant')\n [-1, 1, 1, -1]\n >>> get_feature(tails, 'nasal')\n [-1, -1, -1, -1]\n >>> get_feature(tails, 'coronal')\n [1, -1, 1, 1]"}
{"_id": "q_8649", "text": "Compare features.\n\n This returns a number in the range [0, 1] representing a comparison of two\n feature bundles.\n\n If one of the bundles is negative, -1 is returned (for unknown values)\n\n If the bundles are identical, 1 is returned.\n\n If they are inverses of one another, 0 is returned.\n\n Otherwise, a float representing their similarity is returned.\n\n Parameters\n ----------\n feat1 : int\n A feature bundle\n feat2 : int\n A feature bundle\n\n Returns\n -------\n float\n A comparison of the feature bundles\n\n Examples\n --------\n >>> cmp_features(ipa_to_features('l')[0], ipa_to_features('l')[0])\n 1.0\n >>> cmp_features(ipa_to_features('l')[0], ipa_to_features('n')[0])\n 0.8709677419354839\n >>> cmp_features(ipa_to_features('l')[0], ipa_to_features('z')[0])\n 0.8709677419354839\n >>> cmp_features(ipa_to_features('l')[0], ipa_to_features('i')[0])\n 0.564516129032258"}
{"_id": "q_8650", "text": "Return the length similarity of two strings.\n\n Length similarity is the ratio of the length of the shorter string to\n the longer.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Length similarity\n\n Examples\n --------\n >>> cmp = Length()\n >>> cmp.sim('cat', 'hat')\n 1.0\n >>> cmp.sim('Niall', 'Neil')\n 0.8\n >>> cmp.sim('aluminum', 'Catalan')\n 0.875\n >>> cmp.sim('ATCG', 'TAGC')\n 1.0"}
{"_id": "q_8651", "text": "r\"\"\"Return harmonic mean.\n\n The harmonic mean is defined as:\n :math:`\\frac{|nums|}{\\sum\\limits_{i}\\frac{1}{nums_i}}`\n\n Following the behavior of Wolfram|Alpha:\n - If one of the values in nums is 0, return 0.\n - If more than one value in nums is 0, return NaN.\n\n Cf. https://en.wikipedia.org/wiki/Harmonic_mean\n\n Parameters\n ----------\n nums : list\n A series of numbers\n\n Returns\n -------\n float\n The harmonic mean of nums\n\n Raises\n ------\n AttributeError\n hmean requires at least one value\n\n Examples\n --------\n >>> hmean([1, 2, 3, 4])\n 1.9200000000000004\n >>> hmean([1, 2])\n 1.3333333333333333\n >>> hmean([0, 5, 1000])\n 0"}
{"_id": "q_8652", "text": "r\"\"\"Return Seiffert's mean.\n\n Seiffert's mean of two numbers x and y is:\n :math:`\\frac{x - y}{4 \\cdot \\arctan \\sqrt{\\frac{x}{y}} - \\pi}`\n\n It is defined in :cite:`Seiffert:1993`.\n\n Parameters\n ----------\n nums : list\n A series of numbers\n\n Returns\n -------\n float\n Seiffert's mean of nums\n\n Raises\n ------\n AttributeError\n seiffert_mean supports no more than two values\n\n Examples\n --------\n >>> seiffert_mean([1, 2])\n 1.4712939827611637\n >>> seiffert_mean([1, 0])\n 0.3183098861837907\n >>> seiffert_mean([2, 4])\n 2.9425879655223275\n >>> seiffert_mean([2, 1000])\n 336.84053300118825"}
{"_id": "q_8653", "text": "r\"\"\"Return Lehmer mean.\n\n The Lehmer mean is:\n :math:`\\frac{\\sum\\limits_i{x_i^p}}{\\sum\\limits_i{x_i^{p-1}}}`\n\n Cf. https://en.wikipedia.org/wiki/Lehmer_mean\n\n Parameters\n ----------\n nums : list\n A series of numbers\n exp : numeric\n The exponent of the Lehmer mean\n\n Returns\n -------\n float\n The Lehmer mean of nums for the given exponent\n\n Examples\n --------\n >>> lehmer_mean([1, 2, 3, 4])\n 3.0\n >>> lehmer_mean([1, 2])\n 1.6666666666666667\n >>> lehmer_mean([0, 5, 1000])\n 995.0497512437811"}
{"_id": "q_8654", "text": "Return geometric-harmonic mean.\n\n Iterates between geometric & harmonic means until they converge to\n a single value (rounded to 12 digits).\n\n Cf. https://en.wikipedia.org/wiki/Geometric-harmonic_mean\n\n Parameters\n ----------\n nums : list\n A series of numbers\n\n Returns\n -------\n float\n The geometric-harmonic mean of nums\n\n Examples\n --------\n >>> ghmean([1, 2, 3, 4])\n 2.058868154613003\n >>> ghmean([1, 2])\n 1.3728805006183502\n >>> ghmean([0, 5, 1000])\n 0.0\n\n >>> ghmean([0, 0])\n 0.0\n >>> ghmean([0, 0, 5])\n nan"}
{"_id": "q_8655", "text": "Return arithmetic-geometric-harmonic mean.\n\n Iterates over arithmetic, geometric, & harmonic means until they\n converge to a single value (rounded to 12 digits), following the\n method described in :cite:`Raissouli:2009`.\n\n Parameters\n ----------\n nums : list\n A series of numbers\n\n Returns\n -------\n float\n The arithmetic-geometric-harmonic mean of nums\n\n Examples\n --------\n >>> aghmean([1, 2, 3, 4])\n 2.198327159900212\n >>> aghmean([1, 2])\n 1.4142135623731884\n >>> aghmean([0, 5, 1000])\n 335.0"}
{"_id": "q_8656", "text": "Return a word with punctuation stripped out.\n\n Parameters\n ----------\n word : str\n A word to strip punctuation from\n\n Returns\n -------\n str\n The word stripped of punctuation\n\n Examples\n --------\n >>> pe = Synoname()\n >>> pe._synoname_strip_punct('AB;CD EF-GH$IJ')\n 'ABCD EFGHIJ'"}
{"_id": "q_8657", "text": "Return the normalized Synoname distance between two words.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n word_approx_min : float\n The minimum word approximation value to signal a 'word_approx'\n match\n char_approx_min : float\n The minimum character approximation value to signal a 'char_approx'\n match\n tests : int or Iterable\n Either an integer indicating tests to perform or a list of test\n names to perform (defaults to performing all tests)\n\n Returns\n -------\n float\n Normalized Synoname distance"}
{"_id": "q_8658", "text": "Return the NCD between two strings using bzip2 compression.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Compression distance\n\n Examples\n --------\n >>> cmp = NCDbz2()\n >>> cmp.dist('cat', 'hat')\n 0.06666666666666667\n >>> cmp.dist('Niall', 'Neil')\n 0.03125\n >>> cmp.dist('aluminum', 'Catalan')\n 0.17647058823529413\n >>> cmp.dist('ATCG', 'TAGC')\n 0.03125"}
{"_id": "q_8659", "text": "Return the MetaSoundex code for a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n lang : str\n Either ``en`` for English or ``es`` for Spanish\n\n Returns\n -------\n str\n The MetaSoundex code\n\n Examples\n --------\n >>> pe = MetaSoundex()\n >>> pe.encode('Smith')\n '4500'\n >>> pe.encode('Waters')\n '7362'\n >>> pe.encode('James')\n '1520'\n >>> pe.encode('Schmidt')\n '4530'\n >>> pe.encode('Ashcroft')\n '0261'\n >>> pe.encode('Perez', lang='es')\n '094'\n >>> pe.encode('Martinez', lang='es')\n '69364'\n >>> pe.encode('Gutierrez', lang='es')\n '83994'\n >>> pe.encode('Santiago', lang='es')\n '4638'\n >>> pe.encode('Nicol\u00e1s', lang='es')\n '6754'"}
{"_id": "q_8660", "text": "Return the Ratcliff-Obershelp similarity of two strings.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Ratcliff-Obershelp similarity\n\n Examples\n --------\n >>> cmp = RatcliffObershelp()\n >>> round(cmp.sim('cat', 'hat'), 12)\n 0.666666666667\n >>> round(cmp.sim('Niall', 'Neil'), 12)\n 0.666666666667\n >>> round(cmp.sim('aluminum', 'Catalan'), 12)\n 0.4\n >>> cmp.sim('ATCG', 'TAGC')\n 0.5"}
{"_id": "q_8661", "text": "Return the Parmar-Kumbharana encoding of a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n\n Returns\n -------\n str\n The Parmar-Kumbharana encoding\n\n Examples\n --------\n >>> pe = ParmarKumbharana()\n >>> pe.encode('Gough')\n 'GF'\n >>> pe.encode('pneuma')\n 'NM'\n >>> pe.encode('knight')\n 'NT'\n >>> pe.encode('trice')\n 'TRS'\n >>> pe.encode('judge')\n 'JJ'"}
{"_id": "q_8662", "text": "Calculate the Hamming distance between the Eudex hashes of two terms.\n\n This is a wrapper for :py:meth:`Eudex.eudex_hamming`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n weights : str, iterable, or generator function\n The weights or weights generator function\n max_length : int\n The number of characters to encode as a eudex hash\n normalized : bool\n Normalizes to [0, 1] if True\n\n Returns\n -------\n int\n The Eudex Hamming distance\n\n Examples\n --------\n >>> eudex_hamming('cat', 'hat')\n 128\n >>> eudex_hamming('Niall', 'Neil')\n 2\n >>> eudex_hamming('Colin', 'Cuilen')\n 10\n >>> eudex_hamming('ATCG', 'TAGC')\n 403\n\n >>> eudex_hamming('cat', 'hat', weights='fibonacci')\n 34\n >>> eudex_hamming('Niall', 'Neil', weights='fibonacci')\n 2\n >>> eudex_hamming('Colin', 'Cuilen', weights='fibonacci')\n 7\n >>> eudex_hamming('ATCG', 'TAGC', weights='fibonacci')\n 117\n\n >>> eudex_hamming('cat', 'hat', weights=None)\n 1\n >>> eudex_hamming('Niall', 'Neil', weights=None)\n 1\n >>> eudex_hamming('Colin', 'Cuilen', weights=None)\n 2\n >>> eudex_hamming('ATCG', 'TAGC', weights=None)\n 9\n\n >>> # Using the OEIS A000142:\n >>> eudex_hamming('cat', 'hat', [1, 1, 2, 6, 24, 120, 720, 5040])\n 1\n >>> eudex_hamming('Niall', 'Neil', [1, 1, 2, 6, 24, 120, 720, 5040])\n 720\n >>> eudex_hamming('Colin', 'Cuilen', [1, 1, 2, 6, 24, 120, 720, 5040])\n 744\n >>> eudex_hamming('ATCG', 'TAGC', [1, 1, 2, 6, 24, 120, 720, 5040])\n 6243"}
{"_id": "q_8663", "text": "Return normalized Hamming distance between Eudex hashes of two terms.\n\n This is a wrapper for :py:meth:`Eudex.dist`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n weights : str, iterable, or generator function\n The weights or weights generator function\n max_length : int\n The number of characters to encode as a eudex hash\n\n Returns\n -------\n int\n The normalized Eudex Hamming distance\n\n Examples\n --------\n >>> round(dist_eudex('cat', 'hat'), 12)\n 0.062745098039\n >>> round(dist_eudex('Niall', 'Neil'), 12)\n 0.000980392157\n >>> round(dist_eudex('Colin', 'Cuilen'), 12)\n 0.004901960784\n >>> round(dist_eudex('ATCG', 'TAGC'), 12)\n 0.197549019608"}
{"_id": "q_8664", "text": "Yield the next Fibonacci number.\n\n Based on https://www.python-course.eu/generators.php\n Starts at Fibonacci number 3 (the second 1)\n\n Yields\n ------\n int\n The next Fibonacci number"}
{"_id": "q_8665", "text": "Return the Euclidean distance between two strings.\n\n This is a wrapper for :py:meth:`Euclidean.dist_abs`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n normalized : bool\n Normalizes to [0, 1] if True\n alphabet : collection or int\n The values or size of the alphabet\n\n Returns\n -------\n float\n The Euclidean distance\n\n Examples\n --------\n >>> euclidean('cat', 'hat')\n 2.0\n >>> round(euclidean('Niall', 'Neil'), 12)\n 2.645751311065\n >>> euclidean('Colin', 'Cuilen')\n 3.0\n >>> round(euclidean('ATCG', 'TAGC'), 12)\n 3.162277660168"}
{"_id": "q_8666", "text": "Return the normalized Euclidean distance between two strings.\n\n This is a wrapper for :py:meth:`Euclidean.dist`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n alphabet : collection or int\n The values or size of the alphabet\n\n Returns\n -------\n float\n The normalized Euclidean distance\n\n Examples\n --------\n >>> round(dist_euclidean('cat', 'hat'), 12)\n 0.57735026919\n >>> round(dist_euclidean('Niall', 'Neil'), 12)\n 0.683130051064\n >>> round(dist_euclidean('Colin', 'Cuilen'), 12)\n 0.727606875109\n >>> dist_euclidean('ATCG', 'TAGC')\n 1.0"}
{"_id": "q_8667", "text": "Return Lovins' condition N.\n\n Parameters\n ----------\n word : str\n Word to check\n suffix_len : int\n Suffix length\n\n Returns\n -------\n bool\n True if condition is met"}
{"_id": "q_8668", "text": "Return Lovins' condition S.\n\n Parameters\n ----------\n word : str\n Word to check\n suffix_len : int\n Suffix length\n\n Returns\n -------\n bool\n True if condition is met"}
{"_id": "q_8669", "text": "Return Lovins' condition X.\n\n Parameters\n ----------\n word : str\n Word to check\n suffix_len : int\n Suffix length\n\n Returns\n -------\n bool\n True if condition is met"}
{"_id": "q_8670", "text": "Return Lovins' condition BB.\n\n Parameters\n ----------\n word : str\n Word to check\n suffix_len : int\n Suffix length\n\n Returns\n -------\n bool\n True if condition is met"}
{"_id": "q_8671", "text": "Return Lovins stem.\n\n Parameters\n ----------\n word : str\n The word to stem\n\n Returns\n -------\n str\n Word stem\n\n Examples\n --------\n >>> stmr = Lovins()\n >>> stmr.stem('reading')\n 'read'\n >>> stmr.stem('suspension')\n 'suspens'\n >>> stmr.stem('elusiveness')\n 'elus'"}
{"_id": "q_8672", "text": "Return the NCD between two strings using zlib compression.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Compression distance\n\n Examples\n --------\n >>> cmp = NCDzlib()\n >>> cmp.dist('cat', 'hat')\n 0.3333333333333333\n >>> cmp.dist('Niall', 'Neil')\n 0.45454545454545453\n >>> cmp.dist('aluminum', 'Catalan')\n 0.5714285714285714\n >>> cmp.dist('ATCG', 'TAGC')\n 0.4"}
{"_id": "q_8673", "text": "Return Pylint badge color.\n\n Parameters\n ----------\n score : float\n A Pylint score\n\n Returns\n -------\n str\n Badge color"}
{"_id": "q_8674", "text": "Return pydocstyle badge color.\n\n Parameters\n ----------\n score : float\n A pydocstyle score\n\n Returns\n -------\n str\n Badge color"}
{"_id": "q_8675", "text": "Return the bag distance between two strings.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n int\n Bag distance\n\n Examples\n --------\n >>> cmp = Bag()\n >>> cmp.dist_abs('cat', 'hat')\n 1\n >>> cmp.dist_abs('Niall', 'Neil')\n 2\n >>> cmp.dist_abs('aluminum', 'Catalan')\n 5\n >>> cmp.dist_abs('ATCG', 'TAGC')\n 0\n >>> cmp.dist_abs('abcdefg', 'hijklm')\n 7\n >>> cmp.dist_abs('abcdefg', 'hijklmno')\n 8"}
{"_id": "q_8676", "text": "Return the normalized bag distance between two strings.\n\n Bag distance is normalized by dividing by :math:`max( |src|, |tar| )`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Normalized bag distance\n\n Examples\n --------\n >>> cmp = Bag()\n >>> cmp.dist('cat', 'hat')\n 0.3333333333333333\n >>> cmp.dist('Niall', 'Neil')\n 0.4\n >>> cmp.dist('aluminum', 'Catalan')\n 0.625\n >>> cmp.dist('ATCG', 'TAGC')\n 0.0"}
{"_id": "q_8677", "text": "Return the MLIPNS distance between two strings.\n\n This is a wrapper for :py:meth:`MLIPNS.dist`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n threshold : float\n A number [0, 1] indicating the maximum similarity score, below which\n the strings are considered 'similar' (0.25 by default)\n max_mismatches : int\n A number indicating the allowable number of mismatches to remove before\n declaring two strings not similar (2 by default)\n\n Returns\n -------\n float\n MLIPNS distance\n\n Examples\n --------\n >>> dist_mlipns('cat', 'hat')\n 0.0\n >>> dist_mlipns('Niall', 'Neil')\n 1.0\n >>> dist_mlipns('aluminum', 'Catalan')\n 1.0\n >>> dist_mlipns('ATCG', 'TAGC')\n 1.0"}
{"_id": "q_8678", "text": "Return a similarity of two strings.\n\n This is a generalized function for calling other similarity functions.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n method : function\n Specifies the similarity metric (:py:func:`sim_levenshtein` by default)\n\n Returns\n -------\n float\n Similarity according to the specified function\n\n Raises\n ------\n AttributeError\n Unknown distance function\n\n Examples\n --------\n >>> round(sim('cat', 'hat'), 12)\n 0.666666666667\n >>> round(sim('Niall', 'Neil'), 12)\n 0.4\n >>> sim('aluminum', 'Catalan')\n 0.125\n >>> sim('ATCG', 'TAGC')\n 0.25"}
{"_id": "q_8679", "text": "Return Porter helper function _m_degree value.\n\n m-degree is equal to the number of V to C transitions\n\n Parameters\n ----------\n term : str\n The word for which to calculate the m-degree\n\n Returns\n -------\n int\n The m-degree as defined in the Porter stemmer definition"}
{"_id": "q_8680", "text": "Return Porter helper function _has_vowel value.\n\n Parameters\n ----------\n term : str\n The word to scan for vowels\n\n Returns\n -------\n bool\n True iff a vowel exists in the term (as defined in the Porter\n stemmer definition)"}
{"_id": "q_8681", "text": "Return Porter helper function _ends_in_doubled_cons value.\n\n Parameters\n ----------\n term : str\n The word to check for a final doubled consonant\n\n Returns\n -------\n bool\n True iff the stem ends in a doubled consonant (as defined in the\n Porter stemmer definition)"}
{"_id": "q_8682", "text": "Return Porter helper function _ends_in_cvc value.\n\n Parameters\n ----------\n term : str\n The word to scan for cvc\n\n Returns\n -------\n bool\n True iff the stem ends in cvc (as defined in the Porter stemmer\n definition)"}
{"_id": "q_8683", "text": "Symmetrical logarithmic scale.\n\n Optional arguments:\n\n *base*:\n The base of the logarithm."}
{"_id": "q_8684", "text": "Show usage and available curve functions."}
{"_id": "q_8685", "text": "Get the current terminal size."}
{"_id": "q_8686", "text": "Return the escape sequence for the selected Control Sequence."}
{"_id": "q_8687", "text": "Return a value wrapped in the selected CSI and does a reset."}
{"_id": "q_8688", "text": "Read points from istream and output to ostream."}
{"_id": "q_8689", "text": "Consume data from a line."}
{"_id": "q_8690", "text": "Add a set of data points."}
{"_id": "q_8691", "text": "Generate a color ramp for the current screen height."}
{"_id": "q_8692", "text": "Run the filter function on the provided points."}
{"_id": "q_8693", "text": "Resolve the points to make a line between two points."}
{"_id": "q_8694", "text": "Set a text value in the screen canvas."}
{"_id": "q_8695", "text": "Normalised data points using numpy."}
{"_id": "q_8696", "text": "Loads the content of the text file"}
{"_id": "q_8697", "text": "Translate the incoming symbol into the locally-used one"}
{"_id": "q_8698", "text": "Loads all symbol maps from db"}
{"_id": "q_8699", "text": "Add individual price"}
{"_id": "q_8700", "text": "Import prices from CSV file"}
{"_id": "q_8701", "text": "displays last price, for symbol if provided"}
{"_id": "q_8702", "text": "Display all prices"}
{"_id": "q_8703", "text": "Download the latest prices"}
{"_id": "q_8704", "text": "Return the default session. The path is read from the default config."}
{"_id": "q_8705", "text": "Creates a symbol mapping"}
{"_id": "q_8706", "text": "Displays all symbol maps"}
{"_id": "q_8707", "text": "Finds the map by in-symbol"}
{"_id": "q_8708", "text": "Read text lines from a file"}
{"_id": "q_8709", "text": "Parse into the Price entity, ready for saving"}
{"_id": "q_8710", "text": "Read the config file"}
{"_id": "q_8711", "text": "gets the default config path from resources"}
{"_id": "q_8712", "text": "Copy the config template into user's directory"}
{"_id": "q_8713", "text": "Returns the path where the active config file is expected.\n This is the user's profile folder."}
{"_id": "q_8714", "text": "Sets a value in config"}
{"_id": "q_8715", "text": "Retrieves a config value"}
{"_id": "q_8716", "text": "Splits the symbol into namespace, symbol tuple"}
{"_id": "q_8717", "text": "Returns the current db session"}
{"_id": "q_8718", "text": "Fetches all the prices for the given arguments"}
{"_id": "q_8719", "text": "Returns the latest price on the date"}
{"_id": "q_8720", "text": "Prune historical prices for all symbols, leaving only the latest.\n Returns the number of items removed."}
{"_id": "q_8721", "text": "Delete all but the latest available price for the given symbol.\n Returns the number of items removed."}
{"_id": "q_8722", "text": "Downloads and parses the price"}
{"_id": "q_8723", "text": "Fetches the securities that match the given filters"}
{"_id": "q_8724", "text": "Return partial of original function call"}
{"_id": "q_8725", "text": "Verify that a part that is zoomed in on has equal length.\n\n Typically used in the context of ``check_function_def()``\n\n Arguments:\n name (str): name of the part whose length should be checked against the corresponding part in the solution.\n unequal_msg (str): Message in case the lengths do not match.\n state (State): state as passed by the SCT chain. Don't specify this explicitly.\n\n :Examples:\n\n Student and solution code::\n\n def shout(word):\n return word + '!!!'\n\n SCT that checks number of arguments::\n\n Ex().check_function_def('shout').has_equal_part_len('args', 'not enough args!')"}
{"_id": "q_8726", "text": "Checks whether student imported a package or function correctly.\n\n Python features many ways to import packages.\n All of these different methods revolve around the ``import``, ``from`` and ``as`` keywords.\n ``has_import()`` provides a robust way to check whether a student correctly imported a certain package.\n\n By default, ``has_import()`` allows for different ways of aliasing the imported package or function.\n If you want to make sure the correct alias was used to refer to the package or function that was imported,\n set ``same_as=True``.\n\n Args:\n name (str): the name of the package that has to be checked.\n same_as (bool): if True, the alias of the package or function has to be the same. Defaults to False.\n not_imported_msg (str): feedback message when the package is not imported.\n incorrect_as_msg (str): feedback message if the alias is wrong.\n\n :Example:\n\n Example 1, where aliases don't matter (default): ::\n\n # solution\n import matplotlib.pyplot as plt\n\n # sct\n Ex().has_import(\"matplotlib.pyplot\")\n\n # passing submissions\n import matplotlib.pyplot as plt\n from matplotlib import pyplot as plt\n import matplotlib.pyplot as pltttt\n\n # failing submissions\n import matplotlib as mpl\n\n Example 2, where the SCT is coded so aliases do matter: ::\n\n # solution\n import matplotlib.pyplot as plt\n\n # sct\n Ex().has_import(\"matplotlib.pyplot\", same_as=True)\n\n # passing submissions\n import matplotlib.pyplot as plt\n from matplotlib import pyplot as plt\n\n # failing submissions\n import matplotlib.pyplot as pltttt"}
{"_id": "q_8727", "text": "Search student output for a pattern.\n\n Among the student and solution process, the student submission and solution code as a string,\n the ``Ex()`` state also contains the output that a student generated with his or her submission.\n\n With ``has_output()``, you can access this output and match it against a regular or fixed expression.\n\n Args:\n text (str): the text that is searched for\n pattern (bool): if True (default), the text is treated as a pattern. If False, it is treated as plain text.\n no_output_msg (str): feedback message to be displayed if the output is not found.\n\n :Example:\n\n As an example, suppose we want a student to print out a sentence: ::\n\n # Print the \"This is some ... stuff\"\n print(\"This is some weird stuff\")\n\n The following SCT tests whether the student prints out ``This is some weird stuff``: ::\n\n # Using exact string matching\n Ex().has_output(\"This is some weird stuff\", pattern = False)\n\n # Using a regular expression (more robust)\n # pattern = True is the default\n msg = \"Print out ``This is some ... stuff`` to the output, \" + \\\\\n \"fill in ``...`` with a word you like.\"\n Ex().has_output(r\"This is some \\w* stuff\", no_output_msg = msg)"}
{"_id": "q_8728", "text": "Check if the right printouts happened.\n\n ``has_printout()`` will look for the printout in the solution code that you specified with ``index`` (0 in this case), rerun the ``print()`` call in\n the solution process, capture its output, and verify whether the output is present in the output of the student.\n\n This is more robust than ``Ex().check_function('print')``-initiated chains, as students can use as many\n printouts as they want, as long as they do the correct one somewhere.\n\n Args:\n index (int): index of the ``print()`` call in the solution whose output you want to search for in the student output.\n not_printed_msg (str): if specified, this overrides the default message that is generated when the output\n is not found in the student output.\n pre_code (str): Python code as a string that is executed before running the targeted student call.\n This is the ideal place to set a random seed, for example.\n copy (bool): whether to try to deep copy objects in the environment, such as lists, that could\n accidentally be mutated. Disabled by default, which speeds up SCTs.\n state (State): state as passed by the SCT chain. Don't specify this explicitly.\n\n :Example:\n\n Suppose you want somebody to print out 4: ::\n\n print(1, 2, 3, 4)\n\n The following SCT would check that: ::\n\n Ex().has_printout(0)\n\n All of the following SCTs would pass: ::\n\n print(1, 2, 3, 4)\n print('1 2 3 4')\n print(1, 2, '3 4')\n print(\"random\"); print(1, 2, 3, 4)\n\n :Example:\n\n Watch out: ``has_printout()`` will effectively **rerun** the ``print()`` call in the solution process after the entire solution script was executed.\n If your solution script updates the value of `x` after executing it, ``has_printout()`` will not work.\n\n Suppose you have the following solution: ::\n\n x = 4\n print(x)\n x = 6\n\n The following SCT will not work: ::\n\n Ex().has_printout(0)\n\n Why? When the ``print(x)`` call is executed, the value of ``x`` will be 6, and pythonwhat will look for the output ``'6'`` in the output the student generated.\n In cases like these, ``has_printout()`` cannot be used.\n\n :Example:\n\n Using ``has_printout()`` inside a for loop.\n\n Suppose you have the following solution: ::\n\n for i in range(5):\n print(i)\n\n The following SCT will not work: ::\n\n Ex().check_for_loop().check_body().has_printout(0)\n\n The reason is that ``has_printout()`` can only be called from the root state, ``Ex()``.\n If you want to check printouts done in e.g. a for loop, you have to use a ``check_function('print')`` chain instead: ::\n\n Ex().check_for_loop().check_body().\\\\\n set_context(0).check_function('print').\\\\\n check_args(0).has_equal_value()"}
{"_id": "q_8729", "text": "Test multiple choice exercise.\n\n Test for a MultipleChoiceExercise. The correct answer (as an integer) and feedback messages\n are passed to this function.\n\n Args:\n correct (int): the index of the correct answer (should be an instruction). Starts at 1.\n msgs (list(str)): a list containing all feedback messages belonging to each choice of the\n student. The list should have the same length as the number of options."}
{"_id": "q_8730", "text": "Get a value from process, return tuple of value, res if successful"}
{"_id": "q_8731", "text": "Override the solution code with something arbitrary.\n\n There might be cases in which you want to temporarily override the solution code\n so you can allow for alternative ways of solving an exercise.\n When you use ``override()`` in an SCT chain, the remainder of that SCT chain will\n run as if the solution code you specified is the only code that was in the solution.\n\n Check the glossary for an example (pandas plotting)\n\n Args:\n solution: solution code as a string that overrides the original solution code.\n state: State instance describing student and solution code. Can be omitted if used with Ex()."}
{"_id": "q_8732", "text": "Check whether an object is an instance of a certain class.\n\n ``is_instance()`` can currently only be used when chained from ``check_object()``, the function that is\n used to 'zoom in' on the object of interest.\n\n Args:\n inst (class): The class that the object should have.\n not_instance_msg (str): When specified, this overrides the automatically generated message in case\n the object does not have the expected class.\n state (State): The state that is passed in through the SCT chain (don't specify this).\n\n :Example:\n\n Student code and solution code::\n\n import numpy as np\n arr = np.array([1, 2, 3, 4, 5])\n\n SCT::\n\n # Verify the class of arr\n import numpy\n Ex().check_object('arr').is_instance(numpy.ndarray)"}
{"_id": "q_8733", "text": "Return copy of instance, omitting entries that are EMPTY"}
{"_id": "q_8734", "text": "getter for Parser outputs"}
{"_id": "q_8735", "text": "When dispatched on loops, has_context the target vars are the attribute _target_vars.\n\n Note: This is to allow people to call has_context on a node (e.g. for_loop) rather than\n one of its attributes (e.g. body). Purely for convenience."}
{"_id": "q_8736", "text": "Return child state with name part as its ast tree"}
{"_id": "q_8737", "text": "Return child state with indexed name part as its ast tree.\n\n ``index`` can be:\n\n - an integer, in which case the student/solution_parts are indexed by position.\n - a string, in which case the student/solution_parts are expected to be a dictionary.\n - a list of indices (which can be integer or string), in which case the student parts are indexed step by step."}
{"_id": "q_8738", "text": "Return the true anomaly at each time"}
{"_id": "q_8739", "text": "Loads the class from the class_path string"}
{"_id": "q_8740", "text": "process pagination requests from request parameter"}
{"_id": "q_8741", "text": "Create separate dictionary of supported filter values provided"}
{"_id": "q_8742", "text": "Search for courses\n\n Args:\n request (required) - django request object\n\n Returns:\n http json response with the following fields\n \"took\" - how many seconds the operation took\n \"total\" - how many results were found\n \"max_score\" - maximum score from these results\n \"results\" - json array of result documents\n\n or\n\n \"error\" - displayable information about an error that occurred on the server\n\n POST Params:\n \"search_string\" (optional) - text with which to search for courses\n \"page_size\" (optional)- how many results to return per page (defaults to 20, with maximum cutoff at 100)\n \"page_index\" (optional) - for which page (zero-indexed) to include results (defaults to 0)"}
{"_id": "q_8743", "text": "Return field to apply into filter, if an array then use a range, otherwise look for a term match"}
{"_id": "q_8744", "text": "We have a field_dictionary - we want to match the values for an elasticsearch \"match\" query\n This is only potentially useful when trying to tune certain search operations"}
{"_id": "q_8745", "text": "We have a filter_dictionary - this means that if the field is included\n and matches, then we can include, OR if the field is undefined, then we\n assume it is safe to include"}
{"_id": "q_8746", "text": "We have a list of terms with which we return facets"}
{"_id": "q_8747", "text": "fetch mapped-items structure from cache"}
{"_id": "q_8748", "text": "Logs indexing errors and raises a general ElasticSearch Exception"}
{"_id": "q_8749", "text": "Interfaces with the elasticsearch mappings for the index\n prevents multiple loading of the same mappings from ES when called more than once\n\n Mappings format in elasticsearch is as follows:\n {\n \"doc_type\": {\n \"properties\": {\n \"nested_property\": {\n \"properties\": {\n \"an_analysed_property\": {\n \"type\": \"string\"\n },\n \"another_analysed_property\": {\n \"type\": \"string\"\n }\n }\n },\n \"a_not_analysed_property\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"a_date_property\": {\n \"type\": \"date\"\n }\n }\n }\n }\n\n We cache the properties of each doc_type, if they are not available, we'll load them again from Elasticsearch"}
{"_id": "q_8750", "text": "Implements call to add documents to the ES index\n Note the call to _check_mappings which will setup fields with the desired mappings"}
{"_id": "q_8751", "text": "Implements call to search the index for the desired content.\n\n Args:\n query_string (str): the string of values upon which to search within the\n content of the objects within the index\n\n field_dictionary (dict): dictionary of values which _must_ exist and\n _must_ match in order for the documents to be included in the results\n\n filter_dictionary (dict): dictionary of values which _must_ match if the\n field exists in order for the documents to be included in the results;\n documents for which the field does not exist may be included in the\n results if they are not otherwise filtered out\n\n exclude_dictionary(dict): dictionary of values all of which must\n not match in order for the documents to be included in the results;\n documents which have any of these fields and for which the value matches\n one of the specified values shall be filtered out of the result set\n\n facet_terms (dict): dictionary of terms to include within search\n facets list - key is the term desired to facet upon, and the value is a\n dictionary of extended information to include. Supported right now is a\n size specification for a cap upon how many facet results to return (can\n be an empty dictionary to use default size for underlying engine):\n\n e.g.\n {\n \"org\": {\"size\": 10}, # only show top 10 organizations\n \"modes\": {}\n }\n\n use_field_match (bool): flag to indicate whether to use elastic\n filtering or elastic matching for field matches - this is nothing but a\n potential performance tune for certain queries\n\n (deprecated) exclude_ids (list): list of id values to exclude from the results -\n useful for finding matches that aren't \"one of these\"\n\n Returns:\n dict object with results in the desired format\n {\n \"took\": 3,\n \"total\": 4,\n \"max_score\": 2.0123,\n \"results\": [\n {\n \"score\": 2.0123,\n \"data\": {\n ...\n }\n },\n {\n \"score\": 0.0983,\n \"data\": {\n ...\n }\n }\n ],\n \"facets\": {\n \"org\": {\n \"total\": total_count,\n \"other\": 1,\n \"terms\": {\n \"MITx\": 25,\n \"HarvardX\": 18\n }\n },\n \"modes\": {\n \"total\": modes_count,\n \"other\": 15,\n \"terms\": {\n \"honor\": 58,\n \"verified\": 44,\n }\n }\n }\n }\n\n Raises:\n ElasticsearchException when there is a problem with the response from elasticsearch\n\n Example usage:\n .search(\n \"find the words within this string\",\n {\n \"must_have_field\": \"must_have_value for must_have_field\"\n },\n {\n\n }\n )"}
{"_id": "q_8752", "text": "Call the search engine with the appropriate parameters"}
{"_id": "q_8753", "text": "Course Discovery activities against the search engine index of course details"}
{"_id": "q_8754", "text": "Used by default implementation for finding excerpt"}
{"_id": "q_8755", "text": "Used by default property excerpt"}
{"_id": "q_8756", "text": "decorate the matches within the excerpt"}
{"_id": "q_8757", "text": "Called during post processing of result\n Any properties defined in your subclass will get exposed as members of the result json from the search"}
{"_id": "q_8758", "text": "Called from within search handler. Finds desired subclass and decides if the\n result should be removed and adds properties derived from the result information"}
{"_id": "q_8759", "text": "Property to display a useful excerpt representing the matches within the results"}
{"_id": "q_8760", "text": "Called from within search handler\n Finds desired subclass and adds filter information based upon user information"}
{"_id": "q_8761", "text": "Called from within search handler\n Finds desired subclass and calls initialize method"}
{"_id": "q_8762", "text": "Opens data file and for each line, calls _eat_name_line"}
{"_id": "q_8763", "text": "Parses one line of data file"}
{"_id": "q_8764", "text": "Finds the most popular gender for the given name counting by given counter"}
{"_id": "q_8765", "text": "Returns best gender for the given name and country pair"}
{"_id": "q_8766", "text": "Executes the suite of TidyPy tools upon the project and returns the\n issues that are found.\n\n :param config: the TidyPy configuration to use\n :type config: dict\n :param path: that path to the project to analyze\n :type path: str\n :param progress:\n the progress reporter object that will receive callbacks during the\n execution of the tool suite. If not specified, not progress\n notifications will occur.\n :type progress: tidypy.Progress\n :rtype: tidypy.Collector"}
{"_id": "q_8767", "text": "Executes the configured suite of issue reports.\n\n :param config: the TidyPy configuration to use\n :type config: dict\n :param path: that path to the project that was analyzed\n :type path: str\n :param collector: the issues to report\n :type collector: tidypy.Collector"}
{"_id": "q_8768", "text": "Determines whether or not the specified file is excluded by the\n project's configuration.\n\n :param path: the path to check\n :type path: pathlib.Path\n :rtype: bool"}
{"_id": "q_8769", "text": "Determines whether or not the specified directory is excluded by the\n project's configuration.\n\n :param path: the path to check\n :type path: pathlib.Path\n :rtype: bool"}
{"_id": "q_8770", "text": "A generator that produces a sequence of paths to files in the project\n that matches the specified filters.\n\n :param filters:\n the regular expressions to use when finding files in the project.\n If not specified, all files are returned.\n :type filters: list(str)"}
{"_id": "q_8771", "text": "A generator that produces a sequence of paths to directories in the\n project that matches the specified filters.\n\n :param filters:\n the regular expressions to use when finding directories in the\n project. If not specified, all directories are returned.\n :type filters: list(str)\n :param containing:\n if a directory passes through the specified filters, it is checked\n for the presence of a file that matches one of the regular\n expressions in this parameter.\n :type containing: list(str)"}
{"_id": "q_8772", "text": "Adds an issue to the collection.\n\n :param issues: the issue(s) to add\n :type issues: tidypy.Issue or list(tidypy.Issue)"}
{"_id": "q_8773", "text": "Returns the number of issues in the collection.\n\n :param include_unclean:\n whether or not to include issues that are being ignored due to\n being a duplicate, excluded, etc.\n :type include_unclean: bool\n :rtype: int"}
{"_id": "q_8774", "text": "Retrieves the issues in the collection.\n\n :param sortby: the properties to sort the issues by\n :type sortby: list(str)\n :rtype: list(tidypy.Issue)"}
{"_id": "q_8775", "text": "A convenience method for parsing a TOML-serialized configuration.\n\n :param content: a TOML string containing a TidyPy configuration\n :type content: str\n :param is_pyproject:\n whether or not the content is (or resembles) a ``pyproject.toml``\n file, where the TidyPy configuration is located within a key named\n ``tool``.\n :type is_pyproject: bool\n :rtype: dict"}
{"_id": "q_8776", "text": "Retrieves the TidyPy tools that are available in the current Python\n environment.\n\n The returned dictionary has keys that are the tool names and values are the\n tool classes.\n\n :rtype: dict"}
{"_id": "q_8777", "text": "Retrieves the TidyPy configuration extenders that are available in the\n current Python environment.\n\n The returned dictionary has keys are the extender names and values are the\n extender classes.\n\n :rtype: dict"}
{"_id": "q_8778", "text": "Clears out the cache of TidyPy configurations that were retrieved from\n outside the normal locations."}
{"_id": "q_8779", "text": "Prints the specified string to ``stderr``.\n\n :param msg: the message to print\n :type msg: str"}
{"_id": "q_8780", "text": "A context manager that will append the specified paths to Python's\n ``sys.path`` during the execution of the block.\n\n :param paths: the paths to append\n :type paths: list(str)"}
{"_id": "q_8781", "text": "Compiles a list of regular expressions.\n\n :param masks: the regular expressions to compile\n :type masks: list(str) or str\n :returns: list(regular expression object)"}
{"_id": "q_8782", "text": "Retrieves the AST of the specified file.\n\n This function performs simple caching so that the same file isn't read or\n parsed more than once per process.\n\n :param filepath: the file to parse\n :type filepath: str\n :returns: ast.AST"}
{"_id": "q_8783", "text": "Called when an individual tool completes execution.\n\n :param tool: the name of the tool that completed\n :type tool: str"}
{"_id": "q_8784", "text": "Execute an x3270 command\n\n `cmdstr` gets sent directly to the x3270 subprocess on it's stdin."}
{"_id": "q_8785", "text": "Connect to a host"}
{"_id": "q_8786", "text": "Wait until the screen is ready, the cursor has been positioned\n on a modifiable field, and the keyboard is unlocked.\n\n Sometimes the server will \"unlock\" the keyboard but the screen will\n not yet be ready. In that case, an attempt to read or write to the\n screen will result in a 'E' keyboard status because we tried to\n read from a screen that is not yet ready.\n\n Using this method tells the client to wait until a field is\n detected and the cursor has been positioned on it."}
{"_id": "q_8787", "text": "move the cursor to the given co-ordinates. Co-ordinates are 1\n based, as listed in the status area of the terminal."}
{"_id": "q_8788", "text": "clears the field at the position given and inserts the string\n `tosend`\n\n tosend: the string to insert\n length: the length of the field\n\n Co-ordinates are 1 based, as listed in the status area of the\n terminal.\n\n raises: FieldTruncateError if `tosend` is longer than\n `length`."}
{"_id": "q_8789", "text": "Configures this extension with a given configuration dictionary.\n This allows use of this extension without a flask app.\n\n Args:\n config (dict): A dictionary with configuration keys"}
{"_id": "q_8790", "text": "Cleanup after a request. Close any open connections."}
{"_id": "q_8791", "text": "An abstracted authentication method. Decides whether to perform a\n direct bind or a search bind based upon the login attribute configured\n in the config.\n\n Args:\n username (str): Username of the user to bind\n password (str): User's password to bind with.\n\n Returns:\n AuthenticationResponse"}
{"_id": "q_8792", "text": "Performs a search bind to authenticate a user. This is\n required when a the login attribute is not the same\n as the RDN, since we cannot string together their DN on\n the fly, instead we have to find it in the LDAP, then attempt\n to bind with their credentials.\n\n Args:\n username (str): Username of the user to bind (the field specified\n as LDAP_BIND_LOGIN_ATTR)\n password (str): User's password to bind with when we find their dn.\n\n Returns:\n AuthenticationResponse"}
{"_id": "q_8793", "text": "Gets a list of groups a user at dn is a member of\n\n Args:\n dn (str): The dn of the user to find memberships for.\n _connection (ldap3.Connection): A connection object to use when\n searching. If not given, a temporary connection will be\n created, and destroyed after use.\n group_search_dn (str): The search dn for groups. Defaults to\n ``'{LDAP_GROUP_DN},{LDAP_BASE_DN}'``.\n\n Returns:\n list: A list of LDAP groups the user is a member of."}
{"_id": "q_8794", "text": "Gets info about a user specified at dn.\n\n Args:\n dn (str): The dn of the user to find\n _connection (ldap3.Connection): A connection object to use when\n searching. If not given, a temporary connection will be\n created, and destroyed after use.\n\n Returns:\n dict: A dictionary of the user info from LDAP"}
{"_id": "q_8795", "text": "Gets info about a user at a specified username by searching the\n Users DN. Username attribute is the same as specified as\n LDAP_USER_LOGIN_ATTR.\n\n\n Args:\n username (str): Username of the user to search for.\n _connection (ldap3.Connection): A connection object to use when\n searching. If not given, a temporary connection will be\n created, and destroyed after use.\n Returns:\n dict: A dictionary of the user info from LDAP"}
{"_id": "q_8796", "text": "Gets an object at the specified dn and returns it.\n\n Args:\n dn (str): The dn of the object to find.\n filter (str): The LDAP syntax search filter.\n attributes (list): A list of LDAP attributes to get when searching.\n _connection (ldap3.Connection): A connection object to use when\n searching. If not given, a temporary connection will be created,\n and destroyed after use.\n\n Returns:\n dict: A dictionary of the object info from LDAP"}
{"_id": "q_8797", "text": "Convenience property for externally accessing an authenticated\n connection to the server. This connection is automatically\n handled by the appcontext, so you do not have to perform an unbind.\n\n Returns:\n ldap3.Connection: A bound ldap3.Connection\n Raises:\n ldap3.core.exceptions.LDAPException: Since this method is performing\n a bind on behalf of the caller. You should handle this case\n occuring, such as invalid service credentials."}
{"_id": "q_8798", "text": "Make a connection.\n\n Args:\n bind_user (str): User to bind with. If `None`, AUTH_ANONYMOUS is\n used, otherwise authentication specified with\n config['LDAP_BIND_AUTHENTICATION_TYPE'] is used.\n bind_password (str): Password to bind to the directory with\n contextualise (bool): If true (default), will add this connection to the\n appcontext so it can be unbound upon app_teardown.\n\n Returns:\n ldap3.Connection: An unbound ldap3.Connection. You should handle exceptions\n upon bind if you use this internal method."}
{"_id": "q_8799", "text": "Destroys a connection. Removes the connection from the appcontext, and\n unbinds it.\n\n Args:\n connection (ldap3.Connection): The connnection to destroy"}
{"_id": "q_8800", "text": "query a s3 endpoint for an image based on a string\n\n EXAMPLE QUERIES:\n\n [empty] list all container collections\n vsoch/dinosaur look for containers with name vsoch/dinosaur"}
{"_id": "q_8801", "text": "search across labels"}
{"_id": "q_8802", "text": "query a GitLab artifacts folder for a list of images. \n If query is None, collections are listed."}
{"_id": "q_8803", "text": "a \"show all\" search that doesn't require a query\n the user is shown URLs to"}
{"_id": "q_8804", "text": "update headers with a token & other fields"}
{"_id": "q_8805", "text": "require secrets ensures that the client has the secrets file, and\n specifically has one or more parameters defined. If params is None,\n only a check is done for the file.\n\n Parameters\n ==========\n params: a list of keys to lookup in the client secrets, eg:\n \n secrets[client_name][params1] should not be in [None,''] or not set"}
{"_id": "q_8806", "text": "stream to a temporary file, rename on successful completion\n\n Parameters\n ==========\n file_name: the file name to stream to\n url: the url to stream from\n headers: additional headers to add"}
{"_id": "q_8807", "text": "update_token uses HTTP basic authentication to attempt to authenticate\n given a 401 response. We take as input previous headers, and update \n them.\n\n Parameters\n ==========\n response: the http request response to parse for the challenge."}
{"_id": "q_8808", "text": "attempt to read the detail provided by the response. If none, \n default to using the reason"}
{"_id": "q_8809", "text": "given a bucket name and a client that is initialized, get or\n create the bucket."}
{"_id": "q_8810", "text": "init_ cliends will obtain the tranfer and access tokens, and then\n use them to create a transfer client."}
{"_id": "q_8811", "text": "return logs for a particular container. The logs file is equivalent to\n the name, but with extension .log. If there is no name, the most recent\n log is returned.\n\n Parameters\n ==========\n name: the container name to print logs for."}
{"_id": "q_8812", "text": "return a list of logs. We return any file that ends in .log"}
{"_id": "q_8813", "text": "create an endpoint folder, catching the error if it exists.\n\n Parameters\n ==========\n endpoint_id: the endpoint id parameters\n folder: the relative path of the folder to create"}
{"_id": "q_8814", "text": "return a transfer client for the user"}
{"_id": "q_8815", "text": "print the status for all or one of the backends."}
{"_id": "q_8816", "text": "add the variable to the config"}
{"_id": "q_8817", "text": "remove a variable from the config, if found."}
{"_id": "q_8818", "text": "generate a base64 encoded header to ask for a token. This means\n base64 encoding a username and password and adding to the\n Authorization header to identify the client.\n\n Parameters\n ==========\n username: the username\n password: the password"}
{"_id": "q_8819", "text": "Authorize a client based on encrypting the payload with the client\n secret, timestamp, and other metadata"}
{"_id": "q_8820", "text": "head request, typically used for status code retrieval, etc."}
{"_id": "q_8821", "text": "paginate_call is a wrapper for get to paginate results"}
{"_id": "q_8822", "text": "verify will return a True or False to determine to verify the\n requests call or not. If False, we should the user a warning message,\n as this should not be done in production!"}
{"_id": "q_8823", "text": "delete an image to Singularity Registry"}
{"_id": "q_8824", "text": "get version by way of sregistry.version, returns a \n lookup dictionary with several global variables without\n needing to import singularity"}
{"_id": "q_8825", "text": "get requirements, mean reading in requirements and versions from\n the lookup obtained with get_lookup"}
{"_id": "q_8826", "text": "get_singularity_version will determine the singularity version for a\n build first, an environmental variable is looked at, followed by \n using the system version.\n\n Parameters\n ==========\n singularity_version: if not defined, look for in environment. If still\n not find, try finding via executing --version to Singularity. Only return\n None if not set in environment or installed."}
{"_id": "q_8827", "text": "get_installdir returns the installation directory of the application"}
{"_id": "q_8828", "text": "return the robot.png thumbnail from the database folder.\n if the user has exported a different image, use that instead."}
{"_id": "q_8829", "text": "run_command uses subprocess to send a command to the terminal.\n\n Parameters\n ==========\n cmd: the command to send, should be a list for subprocess\n error_message: the error message to give to user if fails,\n if none specified, will alert that command failed."}
{"_id": "q_8830", "text": "this is a wrapper around the main client.get_metadata to first parse\n a Dropbox FileMetadata into a dicionary, then pass it on to the \n primary get_metadata function.\n\n Parameters\n ==========\n image_file: the full path to the image file that had metadata\n extracted\n metadata: the Dropbox FileMetadata to parse."}
{"_id": "q_8831", "text": "print the output to the console for the user. If the user wants the content\n also printed to an output file, do that.\n\n Parameters\n ==========\n response: the response from the builder, with metadata added\n output_file: if defined, write output also to file"}
{"_id": "q_8832", "text": "list a specific log for a builder, or the latest log if none provided\n\n Parameters\n ==========\n args: the argparse object to look for a container name\n container_name: a default container name set to be None (show latest log)"}
{"_id": "q_8833", "text": "get a listing of collections that the user has access to."}
{"_id": "q_8834", "text": "update secrets will look for a user and token in the environment\n If we find the values, cache and continue. Otherwise, exit with error"}
{"_id": "q_8835", "text": "The user is required to have an application secrets file in his\n or her environment. The information isn't saved to the secrets\n file, but the client exists with error if the variable isn't found."}
{"_id": "q_8836", "text": "get the correct client depending on the driver of interest. The\n selected client can be chosen based on the environment variable\n SREGISTRY_CLIENT, and later changed based on the image uri parsed\n If there is no preference, the default is to load the singularity \n hub client.\n\n Parameters\n ==========\n image: if provided, we derive the correct client based on the uri\n of an image. If not provided, we default to environment, then hub.\n quiet: if True, suppress most output about the client (e.g. speak)"}
{"_id": "q_8837", "text": "get_manifests calls get_manifest for each of the schema versions,\n including v2 and v1. Version 1 includes image layers and metadata,\n and version 2 must be parsed for a specific manifest, and the 2nd\n call includes the layers. If a digest is not provided\n latest is used.\n\n Parameters\n ==========\n repo_name: reference to the <username>/<repository>:<tag> to obtain\n digest: a tag or shasum version"}
{"_id": "q_8838", "text": "get_manifest should return an image manifest\n for a particular repo and tag. The image details\n are extracted when the client is generated.\n\n Parameters\n ==========\n repo_name: reference to the <username>/<repository>:<tag> to obtain\n digest: a tag or shasum version\n version: one of v1, v2, and config (for image config)"}
{"_id": "q_8839", "text": "determine the user preference for atomic download of layers. If\n the user has set a singularity cache directory, honor it. Otherwise,\n use the Singularity default."}
{"_id": "q_8840", "text": "extract the environment from the manifest, or return None.\n Used by functions env_extract_image, and env_extract_tar"}
{"_id": "q_8841", "text": "get all settings, either for a particular client if a name is provided,\n or across clients.\n\n Parameters\n ==========\n client_name: the client name to return settings for (optional)"}
{"_id": "q_8842", "text": "a wrapper to get_and_update, but if not successful, will print an\n error and exit."}
{"_id": "q_8843", "text": "Just update a setting, doesn't need to be returned."}
{"_id": "q_8844", "text": "Authorize a client based on encrypting the payload with the client\n token, which should be matched on the receiving server"}
{"_id": "q_8845", "text": "load a particular template based on a name. We look for a name IN data,\n so the query name can be a partial string of the full name.\n\n Parameters\n ==========\n name: the name of a template to look up"}
{"_id": "q_8846", "text": "run a build, meaning inserting an instance. Retry if there is failure\n\n Parameters\n ==========\n config: the configuration dictionary generated by setup_build"}
{"_id": "q_8847", "text": "return a list of containers, determined by finding the metadata field\n \"type\" with value \"container.\" We alert the user to no containers \n if results is empty, and exit\n\n {'metadata': {'items': \n [\n {'key': 'type', 'value': 'container'}, ... \n ]\n }\n }"}
{"_id": "q_8848", "text": "a \"list all\" search that doesn't require a query. Here we return to\n the user all objects that have custom metadata value of \"container\"\n\n IMPORTANT: the upload function adds this metadata. For a container to\n be found by the client, it must have the type as container in metadata."}
{"_id": "q_8849", "text": "sharing an image means sending a remote share from an image you\n control to a contact, usually an email."}
{"_id": "q_8850", "text": "initialize the database, with the default database path or custom of\n\n the format sqlite:////scif/data/expfactory.db\n\n The custom path can be set with the environment variable SREGISTRY_DATABASE\n when a user creates the client, we must initialize this db\n the database should use the .singularity cache folder to cache\n layers and images, and .singularity/sregistry.db as a database"}
{"_id": "q_8851", "text": "get default build template."}
{"_id": "q_8852", "text": "query will show images determined by the extension of img\n or simg.\n\n Parameters\n ==========\n query: the container name (path) or uri to search for\n args.endpoint: can be an endpoint id and optional path, e.g.:\n\n --endpoint 6881ae2e-db26-11e5-9772-22000b9da45e:.singularity'\n --endpoint 6881ae2e-db26-11e5-9772-22000b9da45e'\n\n if not defined, we show the user endpoints to choose from\n\n Usage\n =====\n If endpoint is defined with a query, then we search the given endpoint\n for a container of interested (designated by ending in .img or .simg\n\n If no endpoint is provided but instead just a query, we use the query\n to search endpoints."}
{"_id": "q_8853", "text": "list all endpoints, providing a list of endpoints to the user to\n better filter the search. This function takes no arguments,\n as the user has not provided an endpoint id or query."}
{"_id": "q_8854", "text": "An endpoint is required here to list files within. Optionally, we can\n take a path relative to the endpoint root.\n\n Parameters\n ==========\n endpoint: a single endpoint ID or an endpoint id and relative path.\n If no path is provided, we use '', which defaults to scratch.\n\n query: if defined, limit files to those that have query match"}
{"_id": "q_8855", "text": "share will use the client to get a shareable link for an image of choice.\n the functions returns a url of choice to send to a recipient."}
{"_id": "q_8856", "text": "for private or protected registries, a client secrets file is required\n to be located at .sregistry. If no secrets are found, we use default\n of Singularity Hub, and return a dummy secrets."}
{"_id": "q_8857", "text": "delete object will delete a file from a bucket\n\n Parameters\n ==========\n storage_service: the service obtained with get_storage_service\n bucket_name: the name of the bucket\n object_name: the \"name\" parameter of the object."}
{"_id": "q_8858", "text": "delete an image from Google Storage.\n\n Parameters\n ==========\n name: the name of the file (or image) to delete"}
{"_id": "q_8859", "text": "get_subparser will get a dictionary of subparsers, to help with printing help"}
{"_id": "q_8860", "text": "Generate a robot name. Inspiration from Haikunator, but much more\n poorly implemented ;)\n\n Parameters\n ==========\n delim: Delimiter\n length: TokenLength\n chars: TokenChars"}
{"_id": "q_8861", "text": "get a temporary directory for an operation. If SREGISTRY_TMPDIR\n is set, return that. Otherwise, return the output of tempfile.mkdtemp\n\n Parameters\n ==========\n requested_tmpdir: an optional requested temporary directory, first\n priority as is coming from calling function.\n prefix: Given a need for a sandbox (or similar), we will need to \n create a subfolder *within* the SREGISTRY_TMPDIR.\n create: boolean to determine if we should create folder (True)"}
{"_id": "q_8862", "text": "extract a tar archive to a specified output folder\n\n Parameters\n ==========\n archive: the archive file to extract\n output_folder: the output folder to extract to\n handle_whiteout: use docker2oci variation to handle whiteout files"}
{"_id": "q_8863", "text": "use blob2oci to handle whiteout files for extraction. Credit for this\n script goes to docker2oci by Olivier Freyermouth, and see script\n folder for license.\n\n Parameters\n ==========\n archive: the archive to extract\n output_folder the output folder (sandbox) to extract to"}
{"_id": "q_8864", "text": "read_json reads in a json file and returns\n the data structure as dict."}
{"_id": "q_8865", "text": "clean up will delete a list of files, only if they exist"}
{"_id": "q_8866", "text": "push an image to an S3 endpoint"}
{"_id": "q_8867", "text": "get a collection if it exists. If it doesn't exist, create it first.\n\n Parameters\n ==========\n name: the collection name, usually parsed from get_image_names()['name']"}
{"_id": "q_8868", "text": "get a container, otherwise return None."}
{"_id": "q_8869", "text": "List local images in the database, optionally with a query.\n\n Paramters\n =========\n query: a string to search for in the container or collection name|tag|uri"}
{"_id": "q_8870", "text": "Inspect a local image in the database, which typically includes the\n basic fields in the model."}
{"_id": "q_8871", "text": "rename performs a move, but ensures the path is maintained in storage\n\n Parameters\n ==========\n image_name: the image name (uri) to rename to.\n path: the name to rename (basename is taken)"}
{"_id": "q_8872", "text": "Move an image from it's current location to a new path.\n Removing the image from organized storage is not the recommended approach\n however is still a function wanted by some.\n\n Parameters\n ==========\n image_name: the parsed image name.\n path: the location to move the image to"}
{"_id": "q_8873", "text": "Remove an image from the database and filesystem."}
{"_id": "q_8874", "text": "get or create a container, including the collection to add it to.\n This function can be used from a file on the local system, or via a URL\n that has been downloaded. Either way, if one of url, version, or image_file\n is not provided, the model is created without it. If a version is not\n provided but a file path is, then the file hash is used.\n\n Parameters\n ==========\n image_path: full path to image file\n image_name: if defined, the user wants a custom name (and not based on uri)\n metadata: any extra metadata to keep for the image (dict)\n save: if True, move the image to the cache if it's not there\n copy: If True, copy the image instead of moving it.\n\n image_name: a uri that gets parsed into a names object that looks like:\n\n {'collection': 'vsoch',\n 'image': 'hello-world',\n 'storage': 'vsoch/hello-world-latest.img',\n 'tag': 'latest',\n 'version': '12345'\n 'uri': 'vsoch/hello-world:latest@12345'}\n\n After running add, the user will take some image in a working\n directory, add it to the database, and have it available for search\n and use under SREGISTRY_STORAGE/<collection>/<container>\n\n If the container was retrieved from a webby place, it should have version\n If no version is found, the file hash is used."}
{"_id": "q_8875", "text": "push an image to Singularity Registry"}
{"_id": "q_8876", "text": "take a recipe, and return the complete header, line. If\n remove_header is True, only return the value.\n\n Parameters\n ==========\n recipe: the recipe file\n headers: the header key to find and parse\n remove_header: if true, remove the key"}
{"_id": "q_8877", "text": "find_single_recipe will parse a single file, and if valid,\n return an updated manifest\n\n Parameters\n ==========\n filename: the filename to assess for a recipe\n pattern: a default pattern to search for\n manifest: an already started manifest"}
{"_id": "q_8878", "text": "given a list of files, copy them to a temporary folder,\n compress into a .tar.gz, and rename based on the file hash.\n Return the full path to the .tar.gz in the temporary folder.\n\n Parameters\n ==========\n package_files: a list of files to include in the tar.gz"}
{"_id": "q_8879", "text": "format_container_name will take a name supplied by the user,\n remove all special characters (except for those defined by \"special-characters\"\n and return the new image name."}
{"_id": "q_8880", "text": "useColor will determine if color should be added\n to a print. Will check if being run in a terminal, and\n if has support for asci"}
{"_id": "q_8881", "text": "determine if a level should print to\n stderr, includes all levels but INFO and QUIET"}
{"_id": "q_8882", "text": "write will write a message to a stream,\n first checking the encoding"}
{"_id": "q_8883", "text": "table will print a table of entries. If the rows is \n a dictionary, the keys are interpreted as column names. if\n not, a numbered list is used."}
{"_id": "q_8884", "text": "return a default template for some function in sregistry\n If there is no template, None is returned.\n\n Parameters\n ==========\n name: the name of the template to retrieve"}
{"_id": "q_8885", "text": "return the image manifest via the aws client, saved in self.manifest"}
{"_id": "q_8886", "text": "update secrets will take a secrets credential file\n either located at .sregistry or the environment variable\n SREGISTRY_CLIENT_SECRETS and update the current client \n secrets as well as the associated API base. This is where you\n should do any customization of the secrets flie, or using\n it to update your client, if needed."}
{"_id": "q_8887", "text": "Translate S3 errors to FSErrors."}
{"_id": "q_8888", "text": "Create a S3File backed with a temporary file."}
{"_id": "q_8889", "text": "Builds a url to a gravatar from an email address.\n\n :param email: The email to fetch the gravatar for\n :param size: The size (in pixels) of the gravatar to fetch\n :param default: What type of default image to use if the gravatar does not exist\n :param rating: Used to filter the allowed gravatar ratings\n :param secure: If True use https, otherwise plain http"}
{"_id": "q_8890", "text": "Returns True if the user has a gravatar, False if otherwise"}
{"_id": "q_8891", "text": "Generator for blocks for a chimera block quotient"}
{"_id": "q_8892", "text": "Extract the blocks from a graph, and returns a\n block-quotient graph according to the acceptability\n functions block_good and eblock_good\n\n Inputs:\n G: a networkx graph\n blocks: a tuple of tuples"}
{"_id": "q_8893", "text": "Return a set of resonance forms as SMILES strings, given a SMILES string.\n\n :param smiles: A SMILES string.\n :returns: A set containing SMILES strings for every possible resonance form.\n :rtype: set of strings."}
{"_id": "q_8894", "text": "Repeatedly apply normalization transform to molecule until no changes occur.\n\n It is possible for multiple products to be produced when a rule is applied. The rule is applied repeatedly to\n each of the products, until no further changes occur or after 20 attempts. If there are multiple unique products\n after the final application, the first product (sorted alphabetically by SMILES) is chosen."}
{"_id": "q_8895", "text": "Return a canonical tautomer by enumerating and scoring all possible tautomers.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The canonical tautomer.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8896", "text": "Break covalent bonds between metals and organic atoms under certain conditions.\n\n The algorithm works as follows:\n\n - Disconnect N, O, F from any metal.\n - Disconnect other non-metals from transition metals + Al (but not Hg, Ga, Ge, In, Sn, As, Tl, Pb, Bi, Po).\n - For every bond broken, adjust the charges of the begin and end atoms accordingly.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The molecule with metals disconnected.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8897", "text": "Return a standardized canonical SMILES string given a SMILES string.\n\n Note: This is a convenience function for quickly standardizing a single SMILES string. It is more efficient to use\n the :class:`~molvs.standardize.Standardizer` class directly when working with many molecules or when custom options\n are needed.\n\n :param string smiles: The SMILES for the molecule.\n :returns: The SMILES for the standardized molecule.\n :rtype: string."}
{"_id": "q_8898", "text": "Return a set of tautomers as SMILES strings, given a SMILES string.\n\n :param smiles: A SMILES string.\n :returns: A set containing SMILES strings for every possible tautomer.\n :rtype: set of strings."}
{"_id": "q_8899", "text": "Return a standardized version the given molecule.\n\n The standardization process consists of the following stages: RDKit\n :py:func:`~rdkit.Chem.rdmolops.RemoveHs`, RDKit :py:func:`~rdkit.Chem.rdmolops.SanitizeMol`,\n :class:`~molvs.metal.MetalDisconnector`, :class:`~molvs.normalize.Normalizer`,\n :class:`~molvs.charge.Reionizer`, RDKit :py:func:`~rdkit.Chem.rdmolops.AssignStereochemistry`.\n\n :param mol: The molecule to standardize.\n :type mol: rdkit.Chem.rdchem.Mol\n :returns: The standardized molecule.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8900", "text": "Return the tautomer parent of a given molecule.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The tautomer parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8901", "text": "Return the fragment parent of a given molecule.\n\n The fragment parent is the largest organic covalent unit in the molecule.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The fragment parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8902", "text": "Return the stereo parent of a given molecule.\n\n The stereo parent has all stereochemistry information removed from tetrahedral centers and double bonds.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The stereo parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8903", "text": "Return the isotope parent of a given molecule.\n\n The isotope parent has all atoms replaced with the most abundant isotope for that element.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The isotope parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8904", "text": "Return the super parent of a given molecule.\n\n THe super parent is fragment, charge, isotope, stereochemistry and tautomer insensitive. From the input\n molecule, the largest fragment is taken. This is uncharged and then isotope and stereochemistry information is\n discarded. Finally, the canonical tautomer is determined and returned.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The super parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8905", "text": "Main function for molvs command line interface."}
{"_id": "q_8906", "text": "Return the molecule with specified fragments removed.\n\n :param mol: The molecule to remove fragments from.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The molecule with fragments removed.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8907", "text": "Return the largest covalent unit.\n\n The largest fragment is determined by number of atoms (including hydrogens). Ties are broken by taking the\n fragment with the higher molecular weight, and then by taking the first alphabetically by SMILES if needed.\n\n :param mol: The molecule to choose the largest fragment from.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The largest fragment.\n :rtype: rdkit.Chem.rdchem.Mol"}
{"_id": "q_8908", "text": "Construct a constraint from a validation function.\n\n Args:\n func (function):\n Function that evaluates True when the variables satisfy the constraint.\n\n variables (iterable):\n Iterable of variable labels.\n\n vartype (:class:`~dimod.Vartype`/str/set):\n Variable type for the constraint. Accepted input values:\n\n * :attr:`~dimod.Vartype.SPIN`, ``'SPIN'``, ``{-1, 1}``\n * :attr:`~dimod.Vartype.BINARY`, ``'BINARY'``, ``{0, 1}``\n\n name (string, optional, default='Constraint'):\n Name for the constraint.\n\n Examples:\n This example creates a constraint that binary variables `a` and `b`\n are not equal.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> const = dwavebinarycsp.Constraint.from_func(operator.ne, ['a', 'b'], 'BINARY')\n >>> print(const.name)\n Constraint\n >>> (0, 1) in const.configurations\n True\n\n This example creates a constraint that :math:`out = NOT(x)`\n for spin variables.\n\n >>> import dwavebinarycsp\n >>> def not_(y, x): # y=NOT(x) for spin variables\n ... return (y == -x)\n ...\n >>> const = dwavebinarycsp.Constraint.from_func(\n ... not_,\n ... ['out', 'in'],\n ... {1, -1},\n ... name='not_spin')\n >>> print(const.name)\n not_spin\n >>> (1, -1) in const.configurations\n True"}
{"_id": "q_8909", "text": "Construct a constraint from valid configurations.\n\n Args:\n configurations (iterable[tuple]):\n Valid configurations of the variables. Each configuration is a tuple of variable\n assignments ordered by :attr:`~Constraint.variables`.\n\n variables (iterable):\n Iterable of variable labels.\n\n vartype (:class:`~dimod.Vartype`/str/set):\n Variable type for the constraint. Accepted input values:\n\n * :attr:`~dimod.Vartype.SPIN`, ``'SPIN'``, ``{-1, 1}``\n * :attr:`~dimod.Vartype.BINARY`, ``'BINARY'``, ``{0, 1}``\n\n name (string, optional, default='Constraint'):\n Name for the constraint.\n\n Examples:\n\n This example creates a constraint that variables `a` and `b` are not equal.\n\n >>> import dwavebinarycsp\n >>> const = dwavebinarycsp.Constraint.from_configurations([(0, 1), (1, 0)],\n ... ['a', 'b'], dwavebinarycsp.BINARY)\n >>> print(const.name)\n Constraint\n >>> (0, 0) in const.configurations # Order matches variables: a,b\n False\n\n This example creates a constraint based on specified valid configurations\n that represents an OR gate for spin variables.\n\n >>> import dwavebinarycsp\n >>> const = dwavebinarycsp.Constraint.from_configurations(\n ... [(-1, -1, -1), (1, -1, 1), (1, 1, -1), (1, 1, 1)],\n ... ['y', 'x1', 'x2'],\n ... dwavebinarycsp.SPIN, name='or_spin')\n >>> print(const.name)\n or_spin\n >>> (1, 1, -1) in const.configurations # Order matches variables: y,x1,x2\n True"}
{"_id": "q_8910", "text": "Check that a solution satisfies the constraint.\n\n Args:\n solution (container):\n An assignment for the variables in the constraint.\n\n Returns:\n bool: True if the solution satisfies the constraint; otherwise False.\n\n Examples:\n This example creates a constraint that :math:`a \\\\ne b` on binary variables\n and tests it for two candidate solutions, with additional unconstrained\n variable c.\n\n >>> import dwavebinarycsp\n >>> const = dwavebinarycsp.Constraint.from_configurations([(0, 1), (1, 0)],\n ... ['a', 'b'], dwavebinarycsp.BINARY)\n >>> solution = {'a': 1, 'b': 1, 'c': 0}\n >>> const.check(solution)\n False\n >>> solution = {'a': 1, 'b': 0, 'c': 0}\n >>> const.check(solution)\n True"}
{"_id": "q_8911", "text": "Flip a variable in the constraint.\n\n Args:\n v (variable):\n Variable in the constraint to take the complementary value of its\n construction value.\n\n Examples:\n This example creates a constraint that :math:`a = b` on binary variables\n and flips variable a.\n\n >>> import dwavebinarycsp\n >>> const = dwavebinarycsp.Constraint.from_func(operator.eq,\n ... ['a', 'b'], dwavebinarycsp.BINARY)\n >>> const.check({'a': 0, 'b': 0})\n True\n >>> const.flip_variable('a')\n >>> const.check({'a': 1, 'b': 0})\n True\n >>> const.check({'a': 0, 'b': 0})\n False"}
{"_id": "q_8912", "text": "Add a constraint.\n\n Args:\n constraint (function/iterable/:obj:`.Constraint`):\n Constraint definition in one of the supported formats:\n\n 1. Function, with input arguments matching the order and\n :attr:`~.ConstraintSatisfactionProblem.vartype` type of the `variables`\n argument, that evaluates True when the constraint is satisfied.\n 2. List explicitly specifying each allowed configuration as a tuple.\n 3. :obj:`.Constraint` object built either explicitly or by :mod:`dwavebinarycsp.factories`.\n\n variables(iterable):\n Variables associated with the constraint. Not required when `constraint` is\n a :obj:`.Constraint` object.\n\n Examples:\n This example defines a function that evaluates True when the constraint is satisfied.\n The function's input arguments match the order and type of the `variables` argument.\n\n >>> import dwavebinarycsp\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> def all_equal(a, b, c): # works for both dwavebinarycsp.BINARY and dwavebinarycsp.SPIN\n ... 
return (a == b) and (b == c)\n >>> csp.add_constraint(all_equal, ['a', 'b', 'c'])\n >>> csp.check({'a': 0, 'b': 0, 'c': 0})\n True\n >>> csp.check({'a': 0, 'b': 0, 'c': 1})\n False\n\n This example explicitly lists allowed configurations.\n\n >>> import dwavebinarycsp\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.SPIN)\n >>> eq_configurations = {(-1, -1), (1, 1)}\n >>> csp.add_constraint(eq_configurations, ['v0', 'v1'])\n >>> csp.check({'v0': -1, 'v1': +1})\n False\n >>> csp.check({'v0': -1, 'v1': -1})\n True\n\n This example uses a :obj:`.Constraint` object built by :mod:`dwavebinarycsp.factories`.\n\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories.constraint.gates as gates\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(gates.and_gate(['a', 'b', 'c'])) # add an AND gate\n >>> csp.add_constraint(gates.xor_gate(['a', 'c', 'd'])) # add an XOR gate\n >>> csp.check({'a': 1, 'b': 0, 'c': 0, 'd': 1})\n True"}
{"_id": "q_8913", "text": "Build a binary quadratic model with minimal energy levels at solutions to the specified constraint satisfaction\n problem.\n\n Args:\n csp (:obj:`.ConstraintSatisfactionProblem`):\n Constraint satisfaction problem.\n\n min_classical_gap (float, optional, default=2.0):\n Minimum energy gap from ground. Each constraint violated by the solution increases\n the energy level of the binary quadratic model by at least this much relative\n to ground energy.\n\n max_graph_size (int, optional, default=8):\n Maximum number of variables in the binary quadratic model that can be used to\n represent a single constraint.\n\n Returns:\n :class:`~dimod.BinaryQuadraticModel`\n\n Notes:\n For a `min_classical_gap` > 2 or constraints with more than two variables, requires\n access to factories from the penaltymodel_ ecosystem to construct the binary quadratic\n model.\n\n .. _penaltymodel: https://github.com/dwavesystems/penaltymodel\n\n Examples:\n This example creates a binary-valued constraint satisfaction problem\n with two constraints, :math:`a = b` and :math:`b \\\\ne c`, and builds\n a binary quadratic model with a minimum energy level of -2 such that\n each constraint violation by a solution adds the default minimum energy gap.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(operator.eq, ['a', 'b']) # a == b\n >>> csp.add_constraint(operator.ne, ['b', 'c']) # b != c\n >>> bqm = dwavebinarycsp.stitch(csp)\n >>> bqm.energy({'a': 0, 'b': 0, 'c': 1}) # satisfies csp\n -2.0\n >>> bqm.energy({'a': 0, 'b': 0, 'c': 0}) # violates one constraint\n 0.0\n >>> bqm.energy({'a': 1, 'b': 0, 'c': 0}) # violates two constraints\n 2.0\n\n This example creates a binary-valued constraint satisfaction problem\n with two constraints, :math:`a = b` and :math:`b \\\\ne c`, and builds\n a binary quadratic model with a minimum energy gap of 4.\n Note that in this case 
the conversion to binary quadratic model adds two\n ancillary variables that must be minimized over when solving.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> import itertools\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(operator.eq, ['a', 'b']) # a == b\n >>> csp.add_constraint(operator.ne, ['b', 'c']) # b != c\n >>> bqm = dwavebinarycsp.stitch(csp, min_classical_gap=4.0)\n >>> list(bqm) # # doctest: +SKIP\n ['a', 'aux1', 'aux0', 'b', 'c']\n >>> min([bqm.energy({'a': 0, 'b': 0, 'c': 1, 'aux0': aux0, 'aux1': aux1}) for\n ... aux0, aux1 in list(itertools.product([0, 1], repeat=2))]) # satisfies csp\n -6.0\n >>> min([bqm.energy({'a': 0, 'b': 0, 'c': 0, 'aux0': aux0, 'aux1': aux1}) for\n ... aux0, aux1 in list(itertools.product([0, 1], repeat=2))]) # violates one constraint\n -2.0\n >>> min([bqm.energy({'a': 1, 'b': 0, 'c': 0, 'aux0': aux0, 'aux1': aux1}) for\n ... aux0, aux1 in list(itertools.product([0, 1], repeat=2))]) # violates two constraints\n 2.0\n\n This example finds for the previous example the minimum graph size.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(operator.eq, ['a', 'b']) # a == b\n >>> csp.add_constraint(operator.ne, ['b', 'c']) # b != c\n >>> for n in range(8, 1, -1):\n ... try:\n ... bqm = dwavebinarycsp.stitch(csp, min_classical_gap=4.0, max_graph_size=n)\n ... except dwavebinarycsp.exceptions.ImpossibleBQM:\n ... print(n+1)\n ...\n 3"}
{"_id": "q_8914", "text": "create a bqm for a constraint with two variables.\n\n bqm will have exactly classical gap 2."}
{"_id": "q_8915", "text": "AND gate.\n\n Args:\n variables (list): Variable labels for the and gate as `[in1, in2, out]`,\n where `in1, in2` are inputs and `out` the gate's output.\n vartype (Vartype, optional, default='BINARY'): Variable type. Accepted\n input values:\n\n * Vartype.SPIN, 'SPIN', {-1, 1}\n * Vartype.BINARY, 'BINARY', {0, 1}\n name (str, optional, default='AND'): Name for the constraint.\n\n Returns:\n Constraint(:obj:`.Constraint`): Constraint that is satisfied when its variables are\n assigned values that match the valid states of an AND gate.\n\n Examples:\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories.constraint.gates as gates\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(gates.and_gate(['a', 'b', 'c'], name='AND1'))\n >>> csp.check({'a': 1, 'b': 0, 'c': 0})\n True"}
{"_id": "q_8916", "text": "XOR gate.\n\n Args:\n variables (list): Variable labels for the and gate as `[in1, in2, out]`,\n where `in1, in2` are inputs and `out` the gate's output.\n vartype (Vartype, optional, default='BINARY'): Variable type. Accepted\n input values:\n\n * Vartype.SPIN, 'SPIN', {-1, 1}\n * Vartype.BINARY, 'BINARY', {0, 1}\n name (str, optional, default='XOR'): Name for the constraint.\n\n Returns:\n Constraint(:obj:`.Constraint`): Constraint that is satisfied when its variables are\n assigned values that match the valid states of an XOR gate.\n\n Examples:\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories.constraint.gates as gates\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(gates.xor_gate(['x', 'y', 'z'], name='XOR1'))\n >>> csp.check({'x': 1, 'y': 1, 'z': 1})\n False"}
{"_id": "q_8917", "text": "Half adder.\n\n Args:\n variables (list): Variable labels for the and gate as `[in1, in2, sum, carry]`,\n where `in1, in2` are inputs to be added and `sum` and 'carry' the resultant\n outputs.\n vartype (Vartype, optional, default='BINARY'): Variable type. Accepted\n input values:\n\n * Vartype.SPIN, 'SPIN', {-1, 1}\n * Vartype.BINARY, 'BINARY', {0, 1}\n name (str, optional, default='HALF_ADDER'): Name for the constraint.\n\n Returns:\n Constraint(:obj:`.Constraint`): Constraint that is satisfied when its variables are\n assigned values that match the valid states of a Boolean half adder.\n\n Examples:\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories.constraint.gates as gates\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(gates.halfadder_gate(['a', 'b', 'total', 'carry'], name='HA1'))\n >>> csp.check({'a': 1, 'b': 1, 'total': 0, 'carry': 1})\n True"}
{"_id": "q_8918", "text": "Random XOR constraint satisfaction problem.\n\n Args:\n num_variables (integer): Number of variables (at least three).\n num_clauses (integer): Number of constraints that together constitute the\n constraint satisfaction problem.\n vartype (Vartype, optional, default='BINARY'): Variable type. Accepted\n input values:\n\n * Vartype.SPIN, 'SPIN', {-1, 1}\n * Vartype.BINARY, 'BINARY', {0, 1}\n satisfiable (bool, optional, default=True): True if the CSP can be satisfied.\n\n Returns:\n CSP (:obj:`.ConstraintSatisfactionProblem`): CSP that is satisfied when its variables\n are assigned values that satisfy a XOR satisfiability problem.\n\n Examples:\n This example creates a CSP with 5 variables and two random constraints and checks\n whether a particular assignment of variables satisifies it.\n\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories as sat\n >>> csp = sat.random_xorsat(5, 2)\n >>> csp.constraints # doctest: +SKIP\n [Constraint.from_configurations(frozenset({(1, 0, 0), (1, 1, 1), (0, 1, 0), (0, 0, 1)}), (4, 3, 0),\n Vartype.BINARY, name='XOR (0 flipped)'),\n Constraint.from_configurations(frozenset({(1, 1, 0), (0, 1, 1), (0, 0, 0), (1, 0, 1)}), (2, 0, 4),\n Vartype.BINARY, name='XOR (2 flipped) (0 flipped)')]\n >>> csp.check({0: 1, 1: 0, 2: 0, 3: 1, 4: 1}) # doctest: +SKIP\n True"}
{"_id": "q_8919", "text": "Generates a model chooser definition from a model, and adds it to the\n registry."}
{"_id": "q_8920", "text": "Parse additional url fields and map them to inputs\n\n Attempt to create a dictionary with keys being user input, and\n response being the returned URL"}
{"_id": "q_8921", "text": "Convert a list of JSON values to a list of models"}
{"_id": "q_8922", "text": "Populate all fields of a model with data\n\n Given a model with a PandoraModel superclass will enumerate all\n declared fields on that model and populate the values of their Field\n and SyntheticField classes. All declared fields will have a value after\n this function runs even if they are missing from the incoming JSON."}
{"_id": "q_8923", "text": "Convert one JSON value to a model object"}
{"_id": "q_8924", "text": "Common repr logic for subclasses to hook"}
{"_id": "q_8925", "text": "Write command to remote process"}
{"_id": "q_8926", "text": "Ensure player backing process is started"}
{"_id": "q_8927", "text": "Play a new song from a Pandora model\n\n Returns once the stream starts but does not shut down the remote audio\n output backend process. Calls the input callback when the user has\n input."}
{"_id": "q_8928", "text": "Play the station until something ends it\n\n This function will run forever until termintated by calling\n end_station."}
{"_id": "q_8929", "text": "Set stdout to non-blocking\n\n VLC does not always return a newline when reading status so in order to\n be lazy and still use the read API without caring about how much output\n there is we switch stdout to nonblocking mode and just read a large\n chunk of datin order to be lazy and still use the read API without\n caring about how much output there is we switch stdout to nonblocking\n mode and just read a large chunk of data."}
{"_id": "q_8930", "text": "Format a station menu and make the user select a station"}
{"_id": "q_8931", "text": "Input callback, handles key presses"}
{"_id": "q_8932", "text": "Function decorator implementing retrying logic.\n\n exceptions: A tuple of exception classes; default (Exception,)\n\n The decorator will call the function up to max_tries times if it raises\n an exception.\n\n By default it catches instances of the Exception class and subclasses.\n This will recover after all but the most fatal errors. You may specify a\n custom tuple of exception classes with the 'exceptions' argument; the\n function will only be retried if it raises one of the specified\n exceptions."}
{"_id": "q_8933", "text": "Iterate over a finite iterator forever\n\n When the iterator is exhausted will call the function again to generate a\n new iterator and keep iterating."}
{"_id": "q_8934", "text": "Gather user input and convert it to an integer\n\n Will keep trying till the user enters an interger or until they ^C the\n program."}
{"_id": "q_8935", "text": "Example program integrating an IVP problem of van der Pol oscillator"}
{"_id": "q_8936", "text": "open the drop box\n\n You need to call this method before starting putting packages.\n\n Returns\n -------\n None"}
{"_id": "q_8937", "text": "put a task\n\n This method places a task in the working area and have the\n dispatcher execute it.\n\n If you need to put multiple tasks, it can be much faster to\n use `put_multiple()` than to use this method multiple times\n depending of the dispatcher.\n\n Parameters\n ----------\n package : callable\n A task\n\n Returns\n -------\n int\n A package index assigned by the working area"}
{"_id": "q_8938", "text": "return pairs of package indices and results of finished tasks\n\n This method does not wait for tasks to finish.\n\n Returns\n -------\n list\n A list of pairs of package indices and results"}
{"_id": "q_8939", "text": "return a pair of a package index and result of a task\n\n This method waits until a tasks finishes. It returns `None` if\n no task is running.\n\n Returns\n -------\n tuple or None\n A pair of a package index and result. `None` if no tasks\n is running."}
{"_id": "q_8940", "text": "run the event loops in the background.\n\n Args:\n eventLoops (list): a list of event loops to run"}
{"_id": "q_8941", "text": "Return a pair of a run id and a result.\n\n This method waits until an event loop finishes.\n This method returns None if no loop is running."}
{"_id": "q_8942", "text": "wait until all event loops end and returns the results."}
{"_id": "q_8943", "text": "Convert ``key_vals_dict`` to `tuple_list``.\n\n Args:\n key_vals_dict (dict): The first parameter.\n fill: a value to fill missing data\n\n Returns:\n A list of tuples"}
{"_id": "q_8944", "text": "Open the working area\n\n Returns\n -------\n None"}
{"_id": "q_8945", "text": "Collect the result of a task\n\n Parameters\n ----------\n package_index :\n a package index\n\n Returns\n -------\n obj\n The result of the task"}
{"_id": "q_8946", "text": "Returns the relative path of the result\n\n This method returns the path to the result relative to the\n top dir of the working area. This method simply constructs the\n path based on the convention and doesn't check if the result\n actually exists.\n\n Parameters\n ----------\n package_index :\n a package index\n\n Returns\n -------\n str\n the relative path to the result"}
{"_id": "q_8947", "text": "Submit multiple jobs\n\n Parameters\n ----------\n workingArea :\n A workingArea\n package_indices : list(int)\n A list of package indices\n\n Returns\n -------\n list(str)\n The list of the run IDs of the jobs"}
{"_id": "q_8948", "text": "Return the run IDs of the finished jobs\n\n Returns\n -------\n list(str)\n The list of the run IDs of the finished jobs"}
{"_id": "q_8949", "text": "Wait until all jobs finish and return the run IDs of the finished jobs\n\n Returns\n -------\n list(str)\n The list of the run IDs of the finished jobs"}
{"_id": "q_8950", "text": "put a task and its arguments\n\n If you need to put multiple tasks, it can be faster to put\n multiple tasks with `put_multiple()` than to use this method\n multiple times.\n\n Parameters\n ----------\n task : a function\n A function to be executed\n args : list\n A list of positional arguments to the `task`\n kwargs : dict\n A dict with keyword arguments to the `task`\n\n Returns\n -------\n int, str, or any hashable and sortable\n A task ID. IDs are sortable in the order in which the\n corresponding tasks are put."}
{"_id": "q_8951", "text": "return a list of pairs of IDs and results of finished tasks.\n\n This method doesn't wait for tasks to finish. It returns IDs\n and results which have already finished.\n\n Returns\n -------\n list\n A list of pairs of IDs and results"}
{"_id": "q_8952", "text": "return a pair of an ID and a result of a task.\n\n This method waits for a task to finish.\n\n Returns\n -------\n An ID and a result of a task. `None` if no task is running."}
{"_id": "q_8953", "text": "return a list of pairs of IDs and results of all tasks.\n\n This method waits for all tasks to finish.\n\n Returns\n -------\n list\n A list of pairs of IDs and results"}
{"_id": "q_8954", "text": "return a list results of all tasks.\n\n This method waits for all tasks to finish.\n\n Returns\n -------\n list\n A list of results of the tasks. The results are sorted in\n the order in which the tasks are put."}
{"_id": "q_8955", "text": "expand a path config\n\n Args:\n path_cfg (str, tuple, dict): a config for path\n alias_dict (dict): a dict for aliases\n overriding_kargs (dict): to be used for recursive call"}
{"_id": "q_8956", "text": "check if the jobs are running and return a list of pids for\n finished jobs"}
{"_id": "q_8957", "text": "wait until all jobs finish and return a list of pids"}
{"_id": "q_8958", "text": "return the ROOT.vector object for the branch."}
{"_id": "q_8959", "text": "Ensure all config-time files have been generated. Return a\n dictionary of generated items."}
{"_id": "q_8960", "text": "unpack the specified tarball url into the specified directory"}
{"_id": "q_8961", "text": "return a list of Version objects, each with a tarball URL set"}
{"_id": "q_8962", "text": "return a list of GithubComponentVersion objects for the tip of each branch"}
{"_id": "q_8963", "text": "Try to unpublish a recently published version. Return any errors that\n occur."}
{"_id": "q_8964", "text": "Read a list of files. Their configuration values are merged, with\n preference to values from files earlier in the list."}
{"_id": "q_8965", "text": "return a configuration value\n\n usage:\n get('section.property')\n\n Note that currently array indexes are not supported. You must\n get the whole array.\n\n returns None if any path element or the property is missing"}
{"_id": "q_8966", "text": "Set a configuration value. If no filename is specified, the\n property is set in the first configuration file. Note that if a\n filename is specified and the property path is present in an\n earlier filename then set property will be hidden.\n\n usage:\n set('section.property', value='somevalue')\n\n Note that currently array indexes are not supported. You must\n set the whole array."}
{"_id": "q_8967", "text": "indicate whether the current item is the last one in a generator"}
{"_id": "q_8968", "text": "Publish to the appropriate registry, return a description of any\n errors that occured, or None if successful.\n No VCS tagging is performed."}
{"_id": "q_8969", "text": "Try to un-publish the current version. Return a description of any\n errors that occured, or None if successful."}
{"_id": "q_8970", "text": "Return the specified script command. If the first part of the\n command is a .py file, then the current python interpreter is\n prepended.\n\n If the script is a single string, rather than an array, it is\n shlex-split."}
{"_id": "q_8971", "text": "Check if this module has any dependencies with the specified name\n in its dependencies list, or in target dependencies for the\n specified target"}
{"_id": "q_8972", "text": "Check if this module, or any of its dependencies, have a\n dependencies with the specified name in their dependencies, or in\n their targetDependencies corresponding to the specified target.\n\n Note that if recursive dependencies are not installed, this test\n may return a false-negative."}
{"_id": "q_8973", "text": "Retrieve and install all the dependencies of this component and its\n dependencies, recursively, or satisfy them from a collection of\n available_components or from disk.\n\n Returns\n =======\n (components, errors)\n\n components: dictionary of name:Component\n errors: sequence of errors\n\n Parameters\n ==========\n\n available_components:\n None (default) or a dictionary of name:component. This is\n searched before searching directories or fetching remote\n components\n\n search_dirs:\n None (default), or sequence of directories to search for\n already installed, (but not yet loaded) components. Used so\n that manually installed or linked components higher up the\n dependency tree are found by their users lower down.\n\n These directories are searched in order, and finally the\n current directory is checked.\n\n update_installed:\n False (default), True, or set(): whether to check the\n available versions of installed components, and update if a\n newer version is available. If this is a set(), only update\n things in the specified set.\n\n traverse_links:\n False (default) or True: whether to recurse into linked\n dependencies when updating/installing.\n\n target:\n None (default), or a Target object. If specified the target\n name and it's similarTo list will be used in resolving\n dependencies. If None, then only target-independent\n dependencies will be installed\n\n test:\n True, False, or 'toplevel: should test-only dependencies be\n installed? (yes, no, or only for this module, not its\n dependencies)."}
{"_id": "q_8974", "text": "Some components must export whole directories full of headers into\n the search path. This is really really bad, and they shouldn't do\n it, but support is provided as a concession to compatibility."}
{"_id": "q_8975", "text": "merge dictionaries of dictionaries recursively, with elements from\n dictionaries earlier in the argument sequence taking precedence"}
{"_id": "q_8976", "text": "create a new nested dictionary object with the same structure as\n 'dictionary', but with all scalar values replaced with 'value'"}
{"_id": "q_8977", "text": "returns pack.DependencySpec for the base target of this target (or\n None if this target does not inherit from another target."}
{"_id": "q_8978", "text": "Return true if this target inherits from the named target (directly\n or indirectly. Also returns true if this target is the named\n target. Otherwise return false."}
{"_id": "q_8979", "text": "Execute the given command, returning an error message if an error occured\n or None if the command was succesful."}
{"_id": "q_8980", "text": "Execute the commands necessary to build this component, and all of\n its dependencies."}
{"_id": "q_8981", "text": "return decorator to prune cache after calling fn with a probability of p"}
{"_id": "q_8982", "text": "Calibrate noisy variance estimates with empirical Bayes.\n\n Parameters\n ----------\n vars: ndarray\n List of variance estimates.\n sigma2: int\n Estimate of the Monte Carlo noise in vars.\n\n Returns\n -------\n An array of the calibrated variance estimates"}
{"_id": "q_8983", "text": "Derive samples used to create trees in scikit-learn RandomForest objects.\n\n Recovers the samples in each tree from the random state of that tree using\n :func:`forest._generate_sample_indices`.\n\n Parameters\n ----------\n n_samples : int\n The number of samples used to fit the scikit-learn RandomForest object.\n\n forest : RandomForest\n Regressor or Classifier object that is already fit by scikit-learn.\n\n Returns\n -------\n Array that records how many times a data point was placed in a tree.\n Columns are individual trees. Rows are the number of times a sample was\n used in a tree."}
{"_id": "q_8984", "text": "Retrieves the number of contributors to a repo in the organization.\n Also adds to unique contributor list."}
{"_id": "q_8985", "text": "Retrieves the number of closed issues."}
{"_id": "q_8986", "text": "Checks to see if the given repo has a top level LICENSE file."}
{"_id": "q_8987", "text": "Writes stats from the organization to JSON."}
{"_id": "q_8988", "text": "Updates the total.csv file with current data."}
{"_id": "q_8989", "text": "Updates languages.csv file with current data."}
{"_id": "q_8990", "text": "Checks if a directory exists. If not, it creates one with the specified\n file_path."}
{"_id": "q_8991", "text": "Removes all rows of the associated date from the given csv file.\n Defaults to today."}
{"_id": "q_8992", "text": "Create a github3.py session for a GitHub Enterprise instance\n\n If token is not provided, will attempt to use the GITHUB_API_TOKEN\n environment variable if present."}
{"_id": "q_8993", "text": "Simplified check for API limits\n\n If necessary, spin in place waiting for API to reset before returning.\n\n See: https://developer.github.com/v3/#rate-limiting"}
{"_id": "q_8994", "text": "Create a GitHub session for making requests"}
{"_id": "q_8995", "text": "Yields GitHub3.py repo objects for provided orgs and repo names\n\n If orgs and repos are BOTH empty, execute special mode of getting ALL\n repositories from the GitHub Server.\n\n If public_only is True, will return only those repos that are marked as\n public. Set this to false to return all organizations that the session has\n permissions to access."}
{"_id": "q_8996", "text": "Retrieves an organization via given org name. If given\n empty string, prompts user for an org name."}
{"_id": "q_8997", "text": "Create CodeGovProject object from GitLab Repository"}
{"_id": "q_8998", "text": "Create CodeGovProject object from DOE CODE record\n\n Handles crafting Code.gov Project"}
{"_id": "q_8999", "text": "Retrieves the traffic for the repositories of the given organization."}
{"_id": "q_9000", "text": "Retrieves the total referrers and unique referrers of all repos in json\n and then stores it in a dict."}
{"_id": "q_9001", "text": "Retrieves data from json and stores it in the supplied dict. Accepts\n 'clones' or 'views' as type."}
{"_id": "q_9002", "text": "Writes all traffic data to file."}
{"_id": "q_9003", "text": "Checks the given csv file against the json data scraped for the given\n dict. It will remove all data retrieved that has already been recorded\n so we don't write redundant data to file. Returns count of rows from\n file."}
{"_id": "q_9004", "text": "Writes given dict to file."}
{"_id": "q_9005", "text": "Converts a DOE CODE .json file into DOE CODE projects\n Yields DOE CODE records from a DOE CODE .json file"}
{"_id": "q_9006", "text": "Yields DOE CODE records based on provided input sources\n\n param:\n filename (str): Path to a DOE CODE .json file\n url (str): URL for a DOE CODE server json file\n key (str): API Key for connecting to DOE CODE server"}
{"_id": "q_9007", "text": "Performs a login and sets the Github object via given credentials. If\n credentials are empty or incorrect then prompts user for credentials.\n Stores the authentication token in a CREDENTIALS_FILE used for future\n logins. Handles Two Factor Authentication."}
{"_id": "q_9008", "text": "Retrieves the emails of the members of the organization. Note this Only\n gets public emails. Private emails would need authentication for each\n user."}
{"_id": "q_9009", "text": "Writes the user emails to file."}
{"_id": "q_9010", "text": "Return a connected Bitbucket session"}
{"_id": "q_9011", "text": "Return a connected GitLab session\n\n ``token`` should be a ``private_token`` from Gitlab"}
{"_id": "q_9012", "text": "Yields Gitlab project objects for all projects in Bitbucket"}
{"_id": "q_9013", "text": "Given a Git repository URL, returns number of lines of code based on cloc\n\n Reference:\n - cloc: https://github.com/AlDanial/cloc\n - https://www.omg.org/spec/AFP/\n - Another potential way to calculation effort\n\n Sample cloc output:\n {\n \"header\": {\n \"cloc_url\": \"github.com/AlDanial/cloc\",\n \"cloc_version\": \"1.74\",\n \"elapsed_seconds\": 0.195950984954834,\n \"n_files\": 27,\n \"n_lines\": 2435,\n \"files_per_second\": 137.78956000769,\n \"lines_per_second\": 12426.5769858787\n },\n \"C++\": {\n \"nFiles\": 7,\n \"blank\": 121,\n \"comment\": 314,\n \"code\": 371\n },\n \"C/C++ Header\": {\n \"nFiles\": 8,\n \"blank\": 107,\n \"comment\": 604,\n \"code\": 191\n },\n \"CMake\": {\n \"nFiles\": 11,\n \"blank\": 49,\n \"comment\": 465,\n \"code\": 165\n },\n \"Markdown\": {\n \"nFiles\": 1,\n \"blank\": 18,\n \"comment\": 0,\n \"code\": 30\n },\n \"SUM\": {\n \"blank\": 295,\n \"comment\": 1383,\n \"code\": 757,\n \"nFiles\": 27\n }\n }"}
{"_id": "q_9014", "text": "Read a 'pretty' formatted GraphQL query file into a one-line string.\n\n Removes line breaks and comments. Condenses white space.\n\n Args:\n filePath (str): A relative or absolute path to a file containing\n a GraphQL query.\n File may use comments and multi-line formatting.\n .. _GitHub GraphQL Explorer:\n https://developer.github.com/v4/explorer/\n verbose (Optional[bool]): If False, prints will be suppressed.\n Defaults to False.\n\n Returns:\n str: A single line GraphQL query."}
{"_id": "q_9015", "text": "Send a curl request to GitHub.\n\n Args:\n gitquery (str): The query or endpoint itself.\n Examples:\n query: 'query { viewer { login } }'\n endpoint: '/user'\n gitvars (Optional[Dict]): All query variables.\n Defaults to empty.\n verbose (Optional[bool]): If False, stderr prints will be\n suppressed. Defaults to False.\n rest (Optional[bool]): If True, uses the REST API instead\n of GraphQL. Defaults to False.\n\n Returns:\n {\n 'statusNum' (int): The HTTP status code.\n 'headDict' (Dict[str]): The response headers.\n 'linkDict' (Dict[int]): Link based pagination data.\n 'result' (str): The body of the response.\n }"}
{"_id": "q_9016", "text": "Wait until the given UTC timestamp.\n\n Args:\n utcTimeStamp (int): A UTC format timestamp.\n verbose (Optional[bool]): If False, all extra printouts will be\n suppressed. Defaults to True."}
{"_id": "q_9017", "text": "Makes a pretty countdown.\n\n Args:\n gitquery (str): The query or endpoint itself.\n Examples:\n query: 'query { viewer { login } }'\n endpoint: '/user'\n printString (Optional[str]): A counter message to display.\n Defaults to 'Waiting %*d seconds...'\n verbose (Optional[bool]): If False, all extra printouts will be\n suppressed. Defaults to True."}
{"_id": "q_9018", "text": "Creates the TFS Connection Context"}
{"_id": "q_9019", "text": "Create a core_client.py client for a Team Foundation Server Enterprise connection instance\n\n If token is not provided, will attempt to use the TFS_API_TOKEN\n environment variable if present."}
{"_id": "q_9020", "text": "Creates a TFS Git Client to pull Git repo info"}
{"_id": "q_9021", "text": "Uses the weekly commits and traverses back through the last\n year, each week subtracting the weekly commits and storing them. It\n needs an initial starting commits number, which should be taken from\n the most up to date number from github_stats.py output."}
{"_id": "q_9022", "text": "Generates a self-signed certificate for use in an internal development\n environment for testing SSL pages.\n\n http://almostalldigital.wordpress.com/2013/03/07/self-signed-ssl-certificate-for-ec2-load-balancer/"}
{"_id": "q_9023", "text": "Creates a certificate signing request to be submitted to a formal\n certificate authority to generate a certificate.\n\n Note, the provider may say the CSR must be created on the target server,\n but this is not necessary."}
{"_id": "q_9024", "text": "Reads the expiration date of a local crt file."}
{"_id": "q_9025", "text": "Scans through all local .crt files and displays the expiration dates."}
{"_id": "q_9026", "text": "Confirms the key, CSR, and certificate files all match."}
{"_id": "q_9027", "text": "Recursively merges two dictionaries.\n\n Uses fabric's AttributeDict so you can reference values via dot-notation.\n e.g. env.value1.value2.value3...\n\n http://stackoverflow.com/questions/3232943/update-value-of-a-nested-dictionary-of-varying-depth"}
{"_id": "q_9028", "text": "Compares the local version against the latest official version on PyPI and displays a warning message if a newer release is available.\n\n This check can be disabled by setting the environment variable BURLAP_CHECK_VERSION=0."}
{"_id": "q_9029", "text": "Decorator for registering a satchel method as a Fabric task.\n\n Can be used like:\n\n @task\n def my_method(self):\n ...\n\n @task(precursors=['other_satchel'])\n def my_method(self):\n ..."}
{"_id": "q_9030", "text": "Check if a path exists, and is a file."}
{"_id": "q_9031", "text": "Check if a path exists, and is a directory."}
{"_id": "q_9032", "text": "Check if a path exists, and is a symbolic link."}
{"_id": "q_9033", "text": "Upload a template file.\n\n This is a wrapper around :func:`fabric.contrib.files.upload_template`\n that adds some extra parameters.\n\n If ``mkdir`` is True, then the remote directory will be created, as\n the current user or as ``user`` if specified.\n\n If ``chown`` is True, then it will ensure that the current user (or\n ``user`` if specified) is the owner of the remote file."}
{"_id": "q_9034", "text": "Compute the MD5 sum of a file."}
{"_id": "q_9035", "text": "Get the lines of a remote file, ignoring empty or commented ones"}
{"_id": "q_9036", "text": "Return the time of last modification of path.\n The return value is a number giving the number of seconds since the epoch\n\n Same as :py:func:`os.path.getmtime()`"}
{"_id": "q_9037", "text": "Copy a file or directory"}
{"_id": "q_9038", "text": "Move a file or directory"}
{"_id": "q_9039", "text": "Remove a file or directory"}
{"_id": "q_9040", "text": "Require a file to exist and have specific contents and properties.\n\n You can provide either:\n\n - *contents*: the required contents of the file::\n\n from fabtools import require\n\n require.file('/tmp/hello.txt', contents='Hello, world')\n\n - *source*: the local path of a file to upload::\n\n from fabtools import require\n\n require.file('/tmp/hello.txt', source='files/hello.txt')\n\n - *url*: the URL of a file to download (*path* is then optional)::\n\n from fabric.api import cd\n from fabtools import require\n\n with cd('tmp'):\n require.file(url='http://example.com/files/hello.txt')\n\n If *verify_remote* is ``True`` (the default), then an MD5 comparison\n will be used to check whether the remote file is the same as the\n source. If this is ``False``, the file will be assumed to be the\n same if it is present. This is useful for very large files, where\n generating an MD5 sum may take a while.\n\n When providing either the *contents* or the *source* parameter, Fabric's\n ``put`` function will be used to upload the file to the remote host.\n When ``use_sudo`` is ``True``, the file will first be uploaded to a temporary\n directory, then moved to its final location. The default temporary\n directory is ``/tmp``, but can be overridden with the *temp_dir* parameter.\n If *temp_dir* is an empty string, then the user's home directory will\n be used.\n\n If `use_sudo` is `True`, then the remote file will be owned by root,\n and its mode will reflect root's default *umask*. The optional *owner*,\n *group* and *mode* parameters can be used to override these properties.\n\n .. note:: This function can be accessed directly from the\n ``fabtools.require`` module for convenience."}
{"_id": "q_9041", "text": "Determines if a new release has been made."}
{"_id": "q_9042", "text": "Upgrade all packages, skip obsoletes if ``obsoletes=0`` in ``yum.conf``.\n\n Exclude *kernel* upgrades by default."}
{"_id": "q_9043", "text": "Check if an RPM package is installed."}
{"_id": "q_9044", "text": "Install one or more RPM packages.\n\n Extra *repos* may be passed to ``yum`` to enable extra repositories at install time.\n\n Extra *yes* may be passed to ``yum`` to validate license if necessary.\n\n Extra *options* may be passed to ``yum`` if necessary\n (e.g. ``['--nogpgcheck', '--exclude=package']``).\n\n ::\n\n import burlap\n\n # Install a single package, in an alternative install root\n burlap.rpm.install('emacs', options='--installroot=/my/new/location')\n\n # Install multiple packages silently\n burlap.rpm.install([\n 'unzip',\n 'nano'\n ], '--quiet')"}
{"_id": "q_9045", "text": "Install a group of packages.\n\n You can use ``yum grouplist`` to get the list of groups.\n\n Extra *options* may be passed to ``yum`` if necessary like\n (e.g. ``['--nogpgcheck', '--exclude=package']``).\n\n ::\n\n import burlap\n\n # Install development packages\n burlap.rpm.groupinstall('Development tools')"}
{"_id": "q_9046", "text": "Remove an existing software group.\n\n Extra *options* may be passed to ``yum`` if necessary."}
{"_id": "q_9047", "text": "Get the list of ``yum`` repositories.\n\n Returns enabled repositories by default. Extra *status* may be passed\n to list disabled repositories if necessary.\n\n Media and debug repositories are kept disabled, except if you pass *media*.\n\n ::\n\n import burlap\n\n # Install a package that may be included in disabled repositories\n burlap.rpm.install('vim', burlap.rpm.repolist('disabled'))"}
{"_id": "q_9048", "text": "Uploads media to an Amazon S3 bucket using s3sync.\n\n Requires s3cmd. Install with:\n\n pip install s3cmd"}
{"_id": "q_9049", "text": "Issues invalidation requests to a Cloudfront distribution\n for the current static media bucket, triggering it to reload the specified\n paths from the origin.\n\n Note, only 1000 paths can be issued in a request at any one time."}
{"_id": "q_9050", "text": "Gets an S3 bucket of the given name, creating one if it doesn't already exist.\n\n Should be called with a role, if AWS credentials are stored in role settings. e.g.\n\n fab local s3.get_or_create_bucket:mybucket"}
{"_id": "q_9051", "text": "Configures the server to use a static IP."}
{"_id": "q_9052", "text": "Upgrade all packages."}
{"_id": "q_9053", "text": "Check if a package is installed."}
{"_id": "q_9054", "text": "Install one or more packages.\n\n If *update* is ``True``, the package definitions will be updated\n first, using :py:func:`~burlap.deb.update_index`.\n\n Extra *options* may be passed to ``apt-get`` if necessary.\n\n Example::\n\n import burlap\n\n # Update index, then install a single package\n burlap.deb.install('build-essential', update=True)\n\n # Install multiple packages\n burlap.deb.install([\n 'python-dev',\n 'libxml2-dev',\n ])\n\n # Install a specific version\n burlap.deb.install('emacs', version='23.3+1-1ubuntu9')"}
{"_id": "q_9055", "text": "Enable unattended package installation by preseeding ``debconf``\n parameters.\n\n Example::\n\n import burlap\n\n # Unattended install of Postfix mail server\n burlap.deb.preseed_package('postfix', {\n 'postfix/main_mailer_type': ('select', 'Internet Site'),\n 'postfix/mailname': ('string', 'example.com'),\n 'postfix/destinations': ('string', 'example.com, localhost.localdomain, localhost'),\n })\n burlap.deb.install('postfix')"}
{"_id": "q_9056", "text": "Check if the given key id exists in apt keyring."}
{"_id": "q_9057", "text": "Check if a group exists."}
{"_id": "q_9058", "text": "Responds to a forced password change via `passwd` prompts due to password expiration."}
{"_id": "q_9059", "text": "Adds the user to the given list of groups."}
{"_id": "q_9060", "text": "Creates a user with the given username."}
{"_id": "q_9061", "text": "Forces the user to change their password the next time they login."}
{"_id": "q_9062", "text": "Run a remote command as the root user.\n\n When connecting as root to the remote system, this will use Fabric's\n ``run`` function. In other cases, it will use ``sudo``."}
{"_id": "q_9063", "text": "Iteratively builds a file hash without loading the entire file into memory.\n Designed to process an arbitrary binary file."}
{"_id": "q_9064", "text": "Install the latest version of `setuptools`_.\n\n ::\n\n import burlap\n\n burlap.python_setuptools.install_setuptools()"}
{"_id": "q_9065", "text": "Install Python packages with ``easy_install``.\n\n Examples::\n\n import burlap\n\n # Install a single package\n burlap.python_setuptools.install('package', use_sudo=True)\n\n # Install a list of packages\n burlap.python_setuptools.install(['pkg1', 'pkg2'], use_sudo=True)\n\n .. note:: most of the time, you'll want to use\n :py:func:`burlap.python.install()` instead,\n which uses ``pip`` to install packages."}
{"_id": "q_9066", "text": "Installs all the packages necessary for managing virtual\n environments with pip."}
{"_id": "q_9067", "text": "Returns true if the virtualenv tool is installed."}
{"_id": "q_9068", "text": "Returns true if the virtual environment has been created."}
{"_id": "q_9069", "text": "Lists the packages that require the given package."}
{"_id": "q_9070", "text": "Returns all requirements files combined into one string."}
{"_id": "q_9071", "text": "Creates and saves an EC2 key pair to a local PEM file."}
{"_id": "q_9072", "text": "Deletes and recreates one or more VM instances."}
{"_id": "q_9073", "text": "Utility to take the methods of the instance of a class, instance,\n and add them as functions to a module, module_name, so that Fabric\n can find and call them. Call this at the bottom of a module after\n the class definition."}
{"_id": "q_9074", "text": "Given the function name, looks up the method for dynamically retrieving host data."}
{"_id": "q_9075", "text": "Returns a subset of the env dictionary containing\n only those keys with the name prefix."}
{"_id": "q_9076", "text": "Returns a template to a local file.\n If no filename given, a temporary filename will be generated and returned."}
{"_id": "q_9077", "text": "Returns a template to a remote file.\n If no filename given, a temporary filename will be generated and returned."}
{"_id": "q_9078", "text": "Iterates over sites, safely setting environment variables for each site."}
{"_id": "q_9079", "text": "perform topo sort on elements.\n\n :arg source: list of ``(name, [list of dependencies])`` pairs\n :returns: list of names, with dependencies listed first"}
{"_id": "q_9080", "text": "Returns a list of hosts that have been configured to support the given site."}
{"_id": "q_9081", "text": "Returns a copy of the global environment with all the local variables copied back into it."}
{"_id": "q_9082", "text": "Context manager that hides the command prefix and activates dryrun to capture all following task commands to their equivalent Bash outputs."}
{"_id": "q_9083", "text": "Adds this satchel to the global registries for fast lookup from other satchels."}
{"_id": "q_9084", "text": "Removes this satchel from global registries."}
{"_id": "q_9085", "text": "Returns a version of env filtered to only include the variables in our namespace."}
{"_id": "q_9086", "text": "Loads settings for the target site."}
{"_id": "q_9087", "text": "Returns a list of all required packages."}
{"_id": "q_9088", "text": "Returns true if at least one tracker detects a change."}
{"_id": "q_9089", "text": "Delete a PostgreSQL database.\n\n Example::\n\n import burlap\n\n # Remove DB if it exists\n if burlap.postgres.database_exists('myapp'):\n burlap.postgres.drop_database('myapp')"}
{"_id": "q_9090", "text": "Directly transfers a table between two databases."}
{"_id": "q_9091", "text": "Get the IPv4 address assigned to an interface.\n\n Example::\n\n import burlap\n\n # Print all configured IP addresses\n for interface in burlap.network.interfaces():\n print(burlap.network.address(interface))"}
{"_id": "q_9092", "text": "Installs system packages listed in yum-requirements.txt."}
{"_id": "q_9093", "text": "Displays all packages required by the current role\n based on the documented services provided."}
{"_id": "q_9094", "text": "Writes entire crontab to the host."}
{"_id": "q_9095", "text": "Forcibly kills Rabbit and purges all its queues.\n\n For emergency use when the server becomes unresponsive, even to service stop calls.\n\n If this also fails to correct the performance issues, the server may have to be completely\n reinstalled."}
{"_id": "q_9096", "text": "Returns a generator yielding all the keys that have values that differ between each dictionary."}
{"_id": "q_9097", "text": "Given a list of components, re-orders them according to inter-component dependencies so the most depended upon are first."}
{"_id": "q_9098", "text": "Returns a generator yielding the named functions needed for a deployment."}
{"_id": "q_9099", "text": "Returns the path to the manifest file."}
{"_id": "q_9100", "text": "Returns a dictionary representing the current configuration state.\n\n Thumbprint is of the form:\n\n {\n component_name1: {key: value},\n component_name2: {key: value},\n ...\n }"}
{"_id": "q_9101", "text": "Returns a dictionary representing the previous configuration state.\n\n Thumbprint is of the form:\n\n {\n component_name1: {key: value},\n component_name2: {key: value},\n ...\n }"}
{"_id": "q_9102", "text": "Marks the remote server as currently being deployed to."}
{"_id": "q_9103", "text": "Update the thumbprint on the remote server but execute no satchel configurators.\n\n components = A comma-delimited list of satchel names to limit the fake deployment to.\n set_satchels = A semi-colon delimited list of key-value pairs to set in satchels before recording a fake deployment."}
{"_id": "q_9104", "text": "Inspects differences between the last deployment and the current code state."}
{"_id": "q_9105", "text": "Retrieves the Django settings dictionary."}
{"_id": "q_9106", "text": "Runs the Django createsuperuser management command."}
{"_id": "q_9107", "text": "Runs the Django loaddata management command.\n\n By default, runs on only the current site.\n\n Pass site=all to run on all sites."}
{"_id": "q_9108", "text": "A generic wrapper around Django's manage command."}
{"_id": "q_9109", "text": "Runs the standard Django syncdb command for one or more sites."}
{"_id": "q_9110", "text": "Starts a Django management command in a screen.\n\n Parameters:\n\n command :- all arguments passed to `./manage` as a single string\n\n site :- the site to run the command for (default is all)\n\n Designed to be run like:\n\n fab <role> dj.manage_async:\"some_management_command --force\""}
{"_id": "q_9111", "text": "Looks up the root login for the given database on the given host and sets\n it to environment variables.\n\n Populates these standard variables:\n\n db_root_password\n db_root_username"}
{"_id": "q_9112", "text": "Renders local settings for a specific database."}
{"_id": "q_9113", "text": "Return free space in bytes."}
{"_id": "q_9114", "text": "Loads database parameters from a specific named set."}
{"_id": "q_9115", "text": "Determines if there's enough space to load the target database."}
{"_id": "q_9116", "text": "Sets connection parameters to localhost, if not set already."}
{"_id": "q_9117", "text": "Configures HDMI to support hot-plugging, so it'll work even if it wasn't\n plugged in when the Pi was originally powered up.\n\n Note, this does cause slightly higher power consumption, so if you don't need HDMI,\n don't bother with this.\n\n http://raspberrypi.stackexchange.com/a/2171/29103"}
{"_id": "q_9118", "text": "Enables access to the camera.\n\n http://raspberrypi.stackexchange.com/questions/14229/how-can-i-enable-the-camera-without-using-raspi-config\n https://mike632t.wordpress.com/2014/06/26/raspberry-pi-camera-setup/\n\n Afterwards, test with:\n\n /opt/vc/bin/raspistill --nopreview --output image.jpg\n\n Check for compatibility with:\n\n vcgencmd get_camera\n\n which should show:\n\n supported=1 detected=1"}
{"_id": "q_9119", "text": "Some images purporting to support both the Pi2 and Pi3 use the wrong kernel modules."}
{"_id": "q_9120", "text": "Runs methods services have requested be run before each deployment."}
{"_id": "q_9121", "text": "Applies routine, typically application-level changes to the service."}
{"_id": "q_9122", "text": "Runs methods services have requested be run after each deployment."}
{"_id": "q_9123", "text": "Applies one-time settings changes to the host, usually to initialize the service."}
{"_id": "q_9124", "text": "Enables all modules in the current module list.\n Does not disable any currently enabled modules not in the list."}
{"_id": "q_9125", "text": "Based on the number of sites per server and the number of resources on the server,\n calculates the optimal number of processes that should be allocated for each WSGI site."}
{"_id": "q_9126", "text": "Instantiates a new local renderer.\n Override this to do any additional initialization."}
{"_id": "q_9127", "text": "Uploads select media to an Apache accessible directory."}
{"_id": "q_9128", "text": "Installs the mod-evasive Apache module for combating DDOS attacks.\n\n https://www.linode.com/docs/websites/apache-tips-and-tricks/modevasive-on-apache"}
{"_id": "q_9129", "text": "Installs the mod-rpaf Apache module.\n\n https://github.com/gnif/mod_rpaf"}
{"_id": "q_9130", "text": "Forwards all traffic to a page saying the server is down for maintenance."}
{"_id": "q_9131", "text": "Supervisor can take a very long time to start and stop,\n so wait for it."}
{"_id": "q_9132", "text": "Collects the configurations for all registered services and writes\n the appropriate supervisord.conf file."}
{"_id": "q_9133", "text": "Clone a remote Git repository into a new directory.\n\n :param remote_url: URL of the remote repository to clone.\n :type remote_url: str\n\n :param path: Path of the working copy directory. Must not exist yet.\n :type path: str\n\n :param use_sudo: If ``True`` execute ``git`` with\n :func:`fabric.operations.sudo`, else with\n :func:`fabric.operations.run`.\n :type use_sudo: bool\n\n :param user: If ``use_sudo is True``, run :func:`fabric.operations.sudo`\n with the given user. If ``use_sudo is False`` this parameter\n has no effect.\n :type user: str"}
{"_id": "q_9134", "text": "Add a remote Git repository into a directory.\n\n :param path: Path of the working copy directory. This directory must exist\n and be a Git working copy with a default remote to fetch from.\n :type path: str\n\n :param use_sudo: If ``True`` execute ``git`` with\n :func:`fabric.operations.sudo`, else with\n :func:`fabric.operations.run`.\n :type use_sudo: bool\n\n :param user: If ``use_sudo is True``, run :func:`fabric.operations.sudo`\n with the given user. If ``use_sudo is False`` this parameter\n has no effect.\n :type user: str\n\n :param name: name for the remote repository\n :type name: str\n\n :param remote_url: URL of the remote repository\n :type remote_url: str\n\n :param fetch: If ``True`` execute ``git remote add -f``\n :type fetch: bool"}
{"_id": "q_9135", "text": "Fetch changes from the default remote repository and merge them.\n\n :param path: Path of the working copy directory. This directory must exist\n and be a Git working copy with a default remote to pull from.\n :type path: str\n\n :param use_sudo: If ``True`` execute ``git`` with\n :func:`fabric.operations.sudo`, else with\n :func:`fabric.operations.run`.\n :type use_sudo: bool\n\n :param user: If ``use_sudo is True``, run :func:`fabric.operations.sudo`\n with the given user. If ``use_sudo is False`` this parameter\n has no effect.\n :type user: str\n :param force: If ``True``, append the ``--force`` option to the command.\n :type force: bool"}
{"_id": "q_9136", "text": "Retrieves all commit messages for all commits between the given commit numbers\n on the current branch."}
{"_id": "q_9137", "text": "Retrieves the git commit number of the current head branch."}
{"_id": "q_9138", "text": "Get the Vagrant version."}
{"_id": "q_9139", "text": "Run the following tasks on a vagrant box.\n\n First, you need to import this task in your ``fabfile.py``::\n\n from fabric.api import *\n from burlap.vagrant import vagrant\n\n @task\n def some_task():\n run('echo hello')\n\n Then you can easily run tasks on your current Vagrant box::\n\n $ fab vagrant some_task"}
{"_id": "q_9140", "text": "Context manager that sets a vagrant VM\n as the remote host.\n\n Use this context manager inside a task to run commands\n on your current Vagrant box::\n\n from burlap.vagrant import vagrant_settings\n\n with vagrant_settings():\n run('hostname')"}
{"_id": "q_9141", "text": "Get the list of vagrant base boxes"}
{"_id": "q_9142", "text": "Get the distribution family.\n\n Returns one of ``debian``, ``redhat``, ``arch``, ``gentoo``,\n ``sun``, ``other``."}
{"_id": "q_9143", "text": "Gets the list of supported locales.\n\n Each locale is returned as a ``(locale, charset)`` tuple."}
{"_id": "q_9144", "text": "Sets ownership and permissions for Celery-related files."}
{"_id": "q_9145", "text": "This is called for each site to render a Celery config file."}
{"_id": "q_9146", "text": "Ensures all tests have passed for this branch.\n\n This should be called before deployment, to prevent accidental deployment of code\n that hasn't passed automated testing."}
{"_id": "q_9147", "text": "Returns true if the given host exists on the network.\n Returns false otherwise."}
{"_id": "q_9148", "text": "Deletes all SSH keys on the localhost associated with the current remote host."}
{"_id": "q_9149", "text": "Returns true if the host does not exist at the expected location and may need\n to have its initial configuration set.\n Returns false if the host exists at the expected location."}
{"_id": "q_9150", "text": "Called to set default password login for systems that do not yet have passwordless\n login setup."}
{"_id": "q_9151", "text": "Assigns a name to the server accessible from user space.\n\n Note, we add the name to /etc/hosts since not all programs use\n /etc/hostname to reliably identify the server hostname."}
{"_id": "q_9152", "text": "Get a partition list for all disk or for selected device only\n\n Example::\n\n from burlap.disk import partitions\n\n spart = {'Linux': 0x83, 'Swap': 0x82}\n parts = partitions()\n # parts = {'/dev/sda1': 131, '/dev/sda2': 130, '/dev/sda3': 131}\n r = parts['/dev/sda1'] == spart['Linux']\n r = r and parts['/dev/sda2'] == spart['Swap']\n if r:\n print(\"You can format these partitions\")"}
{"_id": "q_9153", "text": "Get a HDD device by uuid\n\n Example::\n\n from burlap.disk import getdevice_by_uuid\n\n device = getdevice_by_uuid(\"356fafdc-21d5-408e-a3e9-2b3f32cb2a8c\")\n if device:\n mount(device,'/mountpoint')"}
{"_id": "q_9154", "text": "Run a MySQL query."}
{"_id": "q_9155", "text": "Create a MySQL user.\n\n Example::\n\n import burlap\n\n # Create DB user if it does not exist\n if not burlap.mysql.user_exists('dbuser'):\n burlap.mysql.create_user('dbuser', password='somerandomstring')"}
{"_id": "q_9156", "text": "Check if a MySQL database exists."}
{"_id": "q_9157", "text": "Retrieves the path to the MySQL configuration file."}
{"_id": "q_9158", "text": "This does a cross-match against the TIC catalog on MAST.\n\n Speed tests: about 10 crossmatches per second. (-> 3 hours for 10^5 objects\n to crossmatch).\n\n Parameters\n ----------\n\n ra,dec : np.array\n The coordinates to cross match against, all in decimal degrees.\n\n radius : float\n The cross-match radius to use, in decimal degrees.\n\n Returns\n -------\n\n dict\n Returns the match results JSON from MAST loaded into a dict."}
{"_id": "q_9159", "text": "This converts the normalized fluxes in the TESS lcdicts to TESS mags.\n\n Uses the object's TESS mag stored in lcdict['objectinfo']['tessmag']::\n\n mag - object_tess_mag = -2.5 log (flux/median_flux)\n\n Parameters\n ----------\n\n lcdict : lcdict\n An `lcdict` produced by `read_tess_fitslc` or\n `consolidate_tess_fitslc`. This must have normalized fluxes in its\n measurement columns (use the `normalize` kwarg for these functions).\n\n columns : sequence of str\n The column keys of the normalized flux and background measurements in\n the `lcdict` to operate on and convert to magnitudes in TESS band (T).\n\n Returns\n -------\n\n lcdict\n The returned `lcdict` will contain extra columns corresponding to\n magnitudes for each input normalized flux/background column."}
{"_id": "q_9160", "text": "This returns the periodogram plot PNG as base64, plus info as a dict.\n\n Parameters\n ----------\n\n lspinfo : dict\n This is an lspinfo dict containing results from a period-finding\n function. If it's from an astrobase period-finding function in\n periodbase, this will already be in the correct format. To use external\n period-finder results with this function, the `lspinfo` dict must be of\n the following form, with at least the keys listed below::\n\n {'periods': np.array of all periods searched by the period-finder,\n 'lspvals': np.array of periodogram power value for each period,\n 'bestperiod': a float value that is the period with the highest\n peak in the periodogram, i.e. the most-likely actual\n period,\n 'method': a three-letter code naming the period-finder used; must\n be one of the keys in the\n `astrobase.periodbase.METHODLABELS` dict,\n 'nbestperiods': a list of the periods corresponding to periodogram\n peaks (`nbestlspvals` below) to annotate on the\n periodogram plot so they can be called out\n visually,\n 'nbestlspvals': a list of the power values associated with\n periodogram peaks to annotate on the periodogram\n plot so they can be called out visually; should be\n the same length as `nbestperiods` above}\n\n `nbestperiods` and `nbestlspvals` must have at least 5 elements each,\n e.g. describing the five 'best' (highest power) peaks in the\n periodogram.\n\n plotdpi : int\n The resolution in DPI of the output periodogram plot to make.\n\n override_pfmethod : str or None\n This is used to set a custom label for this periodogram\n method. Normally, this is taken from the 'method' key in the input\n `lspinfo` dict, but if you want to override the output method name,\n provide this as a string here. This can be useful if you have multiple\n results you want to incorporate into a checkplotdict from a single\n period-finder (e.g. if you ran BLS over several period ranges\n separately).\n\n Returns\n -------\n\n dict\n Returns a dict that contains the following items::\n\n {methodname: {'periods':the period array from lspinfo,\n 'lspval': the periodogram power array from lspinfo,\n 'bestperiod': the best period from lspinfo,\n 'nbestperiods': the 'nbestperiods' list from lspinfo,\n 'nbestlspvals': the 'nbestlspvals' list from lspinfo,\n 'periodogram': base64 encoded string representation of\n the periodogram plot}}\n\n The dict is returned in this format so it can be directly incorporated\n under the period-finder's label `methodname` in a checkplotdict, using\n Python's dict `update()` method."}
{"_id": "q_9161", "text": "This normalizes the magnitude time-series to a specified value.\n\n This is used to normalize time series measurements that may have large time\n gaps and vertical offsets in mag/flux measurement between these\n 'timegroups', either due to instrument changes or different filters.\n\n NOTE: this works in-place! The mags array will be replaced with normalized\n mags when this function finishes.\n\n Parameters\n ----------\n\n times,mags : array-like\n The times (assumed to be some form of JD) and mags (or flux)\n measurements to be normalized.\n\n mingap : float\n This defines how much the difference between consecutive measurements is\n allowed to be to consider them as parts of different timegroups. By\n default it is set to 4.0 days.\n\n normto : {'globalmedian', 'zero'} or a float\n Specifies the normalization type::\n\n 'globalmedian' -> norms each mag to the global median of the LC column\n 'zero' -> norms each mag to zero\n a float -> norms each mag to this specified float value.\n\n magsarefluxes : bool\n Indicates if the input `mags` array is actually an array of flux\n measurements instead of magnitude measurements. If this is set to True,\n then:\n\n - if `normto` is 'zero', then the median flux is divided from each\n observation's flux value to yield normalized fluxes with 1.0 as the\n global median.\n\n - if `normto` is 'globalmedian', then the global median flux value\n across the entire time series is multiplied with each measurement.\n\n - if `normto` is set to a `float`, then this number is multiplied with the\n flux value for each measurement.\n\n debugmode : bool\n If this is True, will print out verbose info on each timegroup found.\n\n Returns\n -------\n\n times,normalized_mags : np.arrays\n Normalized magnitude values after normalization. If normalization fails\n for some reason, `times` and `normalized_mags` will both be None."}
{"_id": "q_9162", "text": "Calculate the total SNR of a transit assuming gaussian uncertainties.\n\n `modelmags` gets interpolated onto the cadence of `mags`. The noise is\n calculated as the 1-sigma std deviation of the residual (see below).\n\n Following Carter et al. 2009::\n\n Q = sqrt( \u0393 T ) * \u03b4 / \u03c3\n\n for Q the total SNR of the transit in the r->0 limit, where::\n\n r = Rp/Rstar,\n T = transit duration,\n \u03b4 = transit depth,\n \u03c3 = RMS of the lightcurve in transit.\n \u0393 = sampling rate\n\n Thus \u0393 * T is roughly the number of points obtained during transit.\n (This doesn't correctly account for the SNR during ingress/egress, but this\n is a second-order correction).\n\n Note this is the same total SNR as described by e.g., Kovacs et al. 2002,\n their Equation 11.\n\n NOTE: this only works with fluxes at the moment.\n\n Parameters\n ----------\n\n times,mags : np.array\n The input flux time-series to process.\n\n modeltimes,modelmags : np.array\n A transiting planet model, either from BLS, a trapezoid model, or a\n Mandel-Agol model.\n\n atol_normalization : float\n The absolute tolerance to which the median of the passed model fluxes\n must be equal to 1.\n\n indsforrms : np.array\n An array of bools of `len(mags)` used to select points for the RMS\n measurement. If not passed, the RMS of the entire passed timeseries is\n used as an approximation. Generally, it's best to use out of transit\n points, so the RMS measurement is not model-dependent.\n\n magsarefluxes : bool\n Currently forced to be True because this function only works with\n fluxes.\n\n verbose : bool\n If True, indicates progress and warns about problems.\n\n transitdepth : float or None\n If the transit depth is known, pass it in here. Otherwise, it is\n calculated assuming OOT flux is 1.\n\n npoints_in_transits : int or None\n If the number of points in transit is known, pass it in here. Otherwise,\n the function will guess at this value.\n\n Returns\n -------\n\n (snr, transit_depth, noise) : tuple\n The returned tuple contains the calculated SNR, transit depth, and noise\n of the residual lightcurve calculated using the relation described\n above."}
{"_id": "q_9163", "text": "Using Carter et al. 2009's estimate, calculate the theoretical optimal\n precision on mid-transit time measurement possible given a transit of a\n particular SNR.\n\n The relation used is::\n\n sigma_tc = Q^{-1} * T * sqrt(\u03b8/2)\n\n Q = SNR of the transit.\n T = transit duration, which is 2.14 hours from discovery paper.\n \u03b8 = \u03c4/T = ratio of ingress to total duration\n ~= (few minutes [guess]) / 2.14 hours\n\n Parameters\n ----------\n\n snr : float\n The measured signal-to-noise of the transit, e.g. from\n :py:func:`astrobase.periodbase.kbls.bls_stats_singleperiod` or from\n running the `.compute_stats()` method on an Astropy BoxLeastSquares\n object.\n\n t_ingress_min : float\n The ingress duration in minutes. This is t_I to t_II in Winn (2010)\n nomenclature.\n\n t_duration_hr : float\n The transit duration in hours. This is t_I to t_IV in Winn (2010)\n nomenclature.\n\n Returns\n -------\n\n float\n Returns the precision achievable for transit-center time as calculated\n from the relation above. This is in days."}
{"_id": "q_9164", "text": "This gets the out-of-transit light curve points.\n\n Relevant during iterative masking of transits for multiple planet system\n search.\n\n Parameters\n ----------\n\n time,flux,err_flux : np.array\n The input flux time-series measurements and their associated measurement\n errors\n\n blsfit_savpath : str or None\n If provided as a str, indicates the path of the fit plot to make for a\n simple BLS model fit to the transit using the obtained period and epoch.\n\n trapfit_savpath : str or None\n If provided as a str, indicates the path of the fit plot to make for a\n trapezoidal transit model fit to the transit using the obtained period\n and epoch.\n\n in_out_transit_savpath : str or None\n If provided as a str, indicates the path of the plot file that will be\n made for a plot showing the in-transit points and out-of-transit points\n tagged separately.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n magsarefluxes : bool\n This is by default True for this function, since it works on fluxes only\n at the moment.\n\n nworkers : int\n The number of parallel BLS period-finder workers to use.\n\n extra_maskfrac : float\n This is the separation (N) from in-transit points you desire, in units\n of the transit duration. `extra_maskfrac = 0` if you just want points\n inside transit, otherwise::\n\n t_starts = t_Is - N*tdur, t_ends = t_IVs + N*tdur\n\n Thus setting N=0.03 masks slightly more than the guessed transit\n duration.\n\n Returns\n -------\n\n (times_oot, fluxes_oot, errs_oot) : tuple of np.array\n The `times`, `flux`, `err_flux` values from the input at the time values\n out-of-transit are returned."}
{"_id": "q_9165", "text": "This just compresses the sqlitecurve. Should be independent of OS."}
{"_id": "q_9166", "text": "This just compresses the sqlitecurve in gzip format.\n\n FIXME: this doesn't work with gzip < 1.6 or non-GNU gzip (probably)."}
{"_id": "q_9167", "text": "This just uncompresses the sqlitecurve in gzip format.\n\n FIXME: this doesn't work with gzip < 1.6 or non-GNU gzip (probably)."}
{"_id": "q_9168", "text": "This just tries to apply the caster function to castee.\n\n Returns None on failure."}
{"_id": "q_9169", "text": "This parses the CSV header from the CSV HAT sqlitecurve.\n\n Returns a dict that can be used to update an existing lcdict with the\n relevant metadata info needed to form a full LC."}
{"_id": "q_9170", "text": "This parses the header of the LCC CSV V1 LC format."}
{"_id": "q_9171", "text": "This describes the LCC CSV format light curve file.\n\n Parameters\n ----------\n\n lcdict : dict\n The input lcdict to parse for column and metadata info.\n\n returndesc : bool\n If True, returns the description string as an str instead of just\n printing it to stdout.\n\n Returns\n -------\n\n str or None\n If returndesc is True, returns the description lines as a str, otherwise\n returns nothing."}
{"_id": "q_9172", "text": "This reads a HAT data server or LCC-Server produced CSV light curve\n into an lcdict.\n\n This will automatically figure out the format of the file\n provided. Currently, it can read:\n\n - legacy HAT data server CSV LCs (e.g. from\n https://hatsouth.org/planets/lightcurves.html) with an extension of the\n form: `.hatlc.csv.gz`.\n - all LCC-Server produced LCC-CSV-V1 LCs (e.g. from\n https://data.hatsurveys.org) with an extension of the form: `-csvlc.gz`.\n\n\n Parameters\n ----------\n\n lcfile : str\n The light curve file to read.\n\n Returns\n -------\n\n dict\n Returns an lcdict that can be read and used by many astrobase processing\n functions."}
{"_id": "q_9173", "text": "This finds the time gaps in the light curve, so we can figure out which\n times are for consecutive observations and which represent gaps\n between seasons.\n\n Parameters\n ----------\n\n lctimes : np.array\n This is the input array of times, assumed to be in some form of JD.\n\n mingap : float\n This defines how much the difference between consecutive measurements is\n allowed to be to consider them as parts of different timegroups. By\n default it is set to 4.0 days.\n\n Returns\n -------\n\n tuple\n A tuple of the form below is returned, containing the number of time\n groups found and Python slice objects for each group::\n\n (ngroups, [slice(start_ind_1, end_ind_1), ...])"}
{"_id": "q_9174", "text": "This is called when we're executed from the commandline.\n\n The current usage from the command-line is described below::\n\n usage: hatlc [-h] [--describe] hatlcfile\n\n read a HAT LC of any format and output to stdout\n\n positional arguments:\n hatlcfile path to the light curve you want to read and pipe to stdout\n\n optional arguments:\n -h, --help show this help message and exit\n --describe don't dump the columns, show only object info and LC metadata"}
{"_id": "q_9175", "text": "This calculates the M-dwarf subtype given SDSS `r-i` and `i-z` colors.\n\n Parameters\n ----------\n\n ri_color : float\n The SDSS `r-i` color of the object.\n\n iz_color : float\n The SDSS `i-z` color of the object.\n\n Returns\n -------\n\n (subtype, index1, index2) : tuple\n `subtype`: if the star appears to be an M dwarf, will return an int\n between 0 and 9 indicating its subtype, e.g. will return 4 for an M4\n dwarf. If the object isn't an M dwarf, will return None\n\n `index1`, `index2`: the M-dwarf color locus value and spread of this\n object calculated from the `r-i` and `i-z` colors."}
{"_id": "q_9176", "text": "This applies EPD in parallel to all LCs in the input list.\n\n Parameters\n ----------\n\n lclist : list of str\n This is the list of light curve files to run EPD on.\n\n externalparams : dict or None\n This is a dict that indicates which keys in the lcdict obtained from the\n lcfile correspond to the required external parameters. As with timecol,\n magcol, and errcol, these can be simple keys (e.g. 'rjd') or compound\n keys ('magaperture1.mags'). The dict should look something like::\n\n {'fsv':'<lcdict key>' array: S values for each observation,\n 'fdv':'<lcdict key>' array: D values for each observation,\n 'fkv':'<lcdict key>' array: K values for each observation,\n 'xcc':'<lcdict key>' array: x coords for each observation,\n 'ycc':'<lcdict key>' array: y coords for each observation,\n 'bgv':'<lcdict key>' array: sky background for each observation,\n 'bge':'<lcdict key>' array: sky background err for each observation,\n 'iha':'<lcdict key>' array: hour angle for each observation,\n 'izd':'<lcdict key>' array: zenith distance for each observation}\n\n Alternatively, if these exact keys are already present in the lcdict,\n indicate this by setting externalparams to None.\n\n timecols,magcols,errcols : lists of str\n The keys in the lcdict produced by your light curve reader function that\n correspond to the times, mags/fluxes, and associated measurement errors\n that will be used as inputs to the EPD process. If these are None, the\n default values for `timecols`, `magcols`, and `errcols` for your light\n curve format will be used here.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curve files.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory when you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n epdsmooth_sigclip : float or int or sequence of two floats/ints or None\n This specifies how to sigma-clip the input LC before fitting the EPD\n function to it.\n\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n epdsmooth_windowsize : int\n This is the number of LC points to smooth over to generate a smoothed\n light curve that will be used to fit the EPD function.\n\n epdsmooth_func : Python function\n This sets the smoothing filter function to use. A Savitzky-Golay filter\n is used to smooth the light curve by default. The functions that can be\n used with this kwarg are listed in `varbase.trends`. If you want to use\n your own function, it MUST have the following signature::\n\n def smoothfunc(mags_array, window_size, **extraparams)\n\n and return a numpy array of the same size as `mags_array` with the\n smoothed time-series. Any extra params can be provided using the\n `extraparams` dict.\n\n epdsmooth_extraparams : dict\n This is a dict of any extra filter params to supply to the smoothing\n function.\n\n nworkers : int\n The number of parallel workers to launch when processing the LCs.\n\n maxworkertasks : int\n The maximum number of tasks a parallel worker will complete before it is\n replaced with a new one (sometimes helps with memory-leaks).\n\n Returns\n -------\n\n dict\n Returns a dict organized by all the keys in the input `magcols` list,\n containing lists of EPD pickle light curves for that `magcol`.\n\n Notes\n -----\n\n - S -> measure of PSF sharpness (~1/sigma^2 so smaller S = wider PSF)\n - D -> measure of PSF ellipticity in xy direction\n - K -> measure of PSF ellipticity in cross direction\n\n S, D, K are related to the PSF's variance and covariance, see eqn 30-33 in\n A. Pal's thesis: https://arxiv.org/abs/0906.3486"}
{"_id": "q_9177", "text": "This applies EPD in parallel to all LCs in a directory.\n\n Parameters\n ----------\n\n lcdir : str\n The light curve directory to process.\n\n externalparams : dict or None\n This is a dict that indicates which keys in the lcdict obtained from the\n lcfile correspond to the required external parameters. As with timecol,\n magcol, and errcol, these can be simple keys (e.g. 'rjd') or compound\n keys ('magaperture1.mags'). The dict should look something like::\n\n {'fsv':'<lcdict key>' array: S values for each observation,\n 'fdv':'<lcdict key>' array: D values for each observation,\n 'fkv':'<lcdict key>' array: K values for each observation,\n 'xcc':'<lcdict key>' array: x coords for each observation,\n 'ycc':'<lcdict key>' array: y coords for each observation,\n 'bgv':'<lcdict key>' array: sky background for each observation,\n 'bge':'<lcdict key>' array: sky background err for each observation,\n 'iha':'<lcdict key>' array: hour angle for each observation,\n 'izd':'<lcdict key>' array: zenith distance for each observation}\n\n lcfileglob : str or None\n A UNIX fileglob to use to select light curve files in `lcdir`. If this\n is not None, the value provided will override the default fileglob for\n your light curve format.\n\n timecols,magcols,errcols : lists of str\n The keys in the lcdict produced by your light curve reader function that\n correspond to the times, mags/fluxes, and associated measurement errors\n that will be used as inputs to the EPD process. If these are None, the\n default values for `timecols`, `magcols`, and `errcols` for your light\n curve format will be used here.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory when you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n epdsmooth_sigclip : float or int or sequence of two floats/ints or None\n This specifies how to sigma-clip the input LC before fitting the EPD\n function to it.\n\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n epdsmooth_windowsize : int\n This is the number of LC points to smooth over to generate a smoothed\n light curve that will be used to fit the EPD function.\n\n epdsmooth_func : Python function\n This sets the smoothing filter function to use. A Savitzky-Golay filter\n is used to smooth the light curve by default. The functions that can be\n used with this kwarg are listed in `varbase.trends`. If you want to use\n your own function, it MUST have the following signature::\n\n def smoothfunc(mags_array, window_size, **extraparams)\n\n and return a numpy array of the same size as `mags_array` with the\n smoothed time-series. Any extra params can be provided using the\n `extraparams` dict.\n\n epdsmooth_extraparams : dict\n This is a dict of any extra filter params to supply to the smoothing\n function.\n\n nworkers : int\n The number of parallel workers to launch when processing the LCs.\n\n maxworkertasks : int\n The maximum number of tasks a parallel worker will complete before it is\n replaced with a new one (sometimes helps with memory-leaks).\n\n Returns\n -------\n\n dict\n Returns a dict organized by all the keys in the input `magcols` list,\n containing lists of EPD pickle light curves for that `magcol`.\n\n Notes\n -----\n\n - S -> measure of PSF sharpness (~1/sigma^2 so smaller S = wider PSF)\n - D -> measure of PSF ellipticity in xy direction\n - K -> measure of PSF ellipticity in cross direction\n\n S, D, K are related to the PSF's variance and covariance, see eqn 30-33 in\n A. Pal's thesis: https://arxiv.org/abs/0906.3486"}
{"_id": "q_9178", "text": "This wraps Astropy's BoxLeastSquares for use with bls_parallel_pfind below.\n\n `task` is a tuple::\n\n task[0] = times\n task[1] = mags\n task[2] = errs\n task[3] = magsarefluxes\n\n task[4] = minfreq\n task[5] = nfreq\n task[6] = stepsize\n\n task[7] = ndurations\n task[8] = mintransitduration\n task[9] = maxtransitduration\n\n task[10] = blsobjective\n task[11] = blsmethod\n task[12] = blsoversample"}
{"_id": "q_9179", "text": "This wraps starfeatures."}
{"_id": "q_9180", "text": "This drives the `get_starfeatures` function for a collection of LCs.\n\n Parameters\n ----------\n\n lclist : list of str\n The list of light curve file names to process.\n\n outdir : str\n The output directory where the results will be placed.\n\n lc_catalog_pickle : str\n The path to a catalog containing a dict with at least:\n\n - an object ID array accessible with `dict['objects']['objectid']`\n\n - an LC filename array accessible with `dict['objects']['lcfname']`\n\n - a `scipy.spatial.KDTree` or `cKDTree` object to use for finding\n neighbors for each object accessible with `dict['kdtree']`\n\n A catalog pickle of the form needed can be produced using\n :py:func:`astrobase.lcproc.catalogs.make_lclist` or\n :py:func:`astrobase.lcproc.catalogs.filter_lclist`.\n\n neighbor_radius_arcsec : float\n This indicates the radius in arcsec to search for neighbors for this\n object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,\n and in GAIA.\n\n maxobjects : int\n The number of objects to process from `lclist`.\n\n deredden : bool\n This controls if the colors and any color classifications will be\n dereddened using 2MASS DUST.\n\n custom_bandpasses : dict or None\n This is a dict used to define any custom bandpasses in the\n `in_objectinfo` dict you want to make this function aware of and\n generate colors for. Use the format below for this dict::\n\n {\n '<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',\n 'label':'<band_label_1>'\n 'colors':[['<bandkey1>-<bandkey2>',\n '<BAND1> - <BAND2>'],\n ['<bandkey3>-<bandkey4>',\n '<BAND3> - <BAND4>']]},\n .\n ...\n .\n '<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',\n 'label':'<band_label_N>'\n 'colors':[['<bandkey1>-<bandkey2>',\n '<BAND1> - <BAND2>'],\n ['<bandkey3>-<bandkey4>',\n '<BAND3> - <BAND4>']]},\n }\n\n Where:\n\n `bandpass_key` is a key to use to refer to this bandpass in the\n `objectinfo` dict, e.g. 'sdssg' for SDSS g band\n\n `twomass_dust_key` is the key to use in the 2MASS DUST result table for\n reddening per band-pass. For example, given the following DUST result\n table (using http://irsa.ipac.caltech.edu/applications/DUST/)::\n\n |Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|\n |char |float |float |float |float |float|\n | |microns| |mags | |mags |\n CTIO U 0.3734 4.107 0.209 4.968 0.253\n CTIO B 0.4309 3.641 0.186 4.325 0.221\n CTIO V 0.5517 2.682 0.137 3.240 0.165\n .\n .\n ...\n\n The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to\n skip DUST lookup and want to pass in a specific reddening magnitude\n for your bandpass, use a float for the value of\n `twomass_dust_key`. If you want to skip DUST lookup entirely for\n this bandpass, use None for the value of `twomass_dust_key`.\n\n `band_label` is the label to use for this bandpass, e.g. 'W1' for\n WISE-1 band, 'u' for SDSS u, etc.\n\n The 'colors' list contains color definitions for all colors you want\n to generate using this bandpass. This list contains elements of the\n form::\n\n ['<bandkey1>-<bandkey2>','<BAND1> - <BAND2>']\n\n where the first item is the bandpass keys making up this color,\n and the second item is the label for this color to be used by the\n frontends. An example::\n\n ['sdssu-sdssg','u - g']\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory when you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n Returns\n -------\n\n list of str\n A list of all star features pickles produced."}
{"_id": "q_9181", "text": "This runs `get_starfeatures` in parallel for all light curves in `lclist`.\n\n Parameters\n ----------\n\n lclist : list of str\n The list of light curve file names to process.\n\n outdir : str\n The output directory where the results will be placed.\n\n lc_catalog_pickle : str\n The path to a catalog containing a dict with at least:\n\n - an object ID array accessible with `dict['objects']['objectid']`\n\n - an LC filename array accessible with `dict['objects']['lcfname']`\n\n - a `scipy.spatial.KDTree` or `cKDTree` object to use for finding\n neighbors for each object accessible with `dict['kdtree']`\n\n A catalog pickle of the form needed can be produced using\n :py:func:`astrobase.lcproc.catalogs.make_lclist` or\n :py:func:`astrobase.lcproc.catalogs.filter_lclist`.\n\n neighbor_radius_arcsec : float\n This indicates the radius in arcsec to search for neighbors for this\n object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,\n and in GAIA.\n\n maxobjects : int\n The number of objects to process from `lclist`.\n\n deredden : bool\n This controls if the colors and any color classifications will be\n dereddened using 2MASS DUST.\n\n custom_bandpasses : dict or None\n This is a dict used to define any custom bandpasses in the\n `in_objectinfo` dict you want to make this function aware of and\n generate colors for. Use the format below for this dict::\n\n {\n '<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',\n 'label':'<band_label_1>'\n 'colors':[['<bandkey1>-<bandkey2>',\n '<BAND1> - <BAND2>'],\n ['<bandkey3>-<bandkey4>',\n '<BAND3> - <BAND4>']]},\n .\n ...\n .\n '<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',\n 'label':'<band_label_N>'\n 'colors':[['<bandkey1>-<bandkey2>',\n '<BAND1> - <BAND2>'],\n ['<bandkey3>-<bandkey4>',\n '<BAND3> - <BAND4>']]},\n }\n\n Where:\n\n `bandpass_key` is a key to use to refer to this bandpass in the\n `objectinfo` dict, e.g. 'sdssg' for SDSS g band\n\n `twomass_dust_key` is the key to use in the 2MASS DUST result table for\n reddening per band-pass. For example, given the following DUST result\n table (using http://irsa.ipac.caltech.edu/applications/DUST/)::\n\n |Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|\n |char |float |float |float |float |float|\n | |microns| |mags | |mags |\n CTIO U 0.3734 4.107 0.209 4.968 0.253\n CTIO B 0.4309 3.641 0.186 4.325 0.221\n CTIO V 0.5517 2.682 0.137 3.240 0.165\n .\n .\n ...\n\n The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to\n skip DUST lookup and want to pass in a specific reddening magnitude\n for your bandpass, use a float for the value of\n `twomass_dust_key`. If you want to skip DUST lookup entirely for\n this bandpass, use None for the value of `twomass_dust_key`.\n\n `band_label` is the label to use for this bandpass, e.g. 'W1' for\n WISE-1 band, 'u' for SDSS u, etc.\n\n The 'colors' list contains color definitions for all colors you want\n to generate using this bandpass. This list contains elements of the\n form::\n\n ['<bandkey1>-<bandkey2>','<BAND1> - <BAND2>']\n\n where the first item is the bandpass keys making up this color,\n and the second item is the label for this color to be used by the\n frontends. An example::\n\n ['sdssu-sdssg','u - g']\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory when you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n nworkers : int\n The number of parallel workers to launch.\n\n Returns\n -------\n\n dict\n A dict with key:val pairs of the input light curve filename and the\n output star features pickle for each LC processed."}
{"_id": "q_9182", "text": "This runs parallel star feature extraction for a directory of LCs.\n\n Parameters\n ----------\n\n lcdir : str\n The directory to search for light curves.\n\n outdir : str\n The output directory where the results will be placed.\n\n lc_catalog_pickle : str\n The path to a catalog containing at least a dict with:\n\n - an object ID array accessible with `dict['objects']['objectid']`\n\n - an LC filename array accessible with `dict['objects']['lcfname']`\n\n - a `scipy.spatial.KDTree` or `cKDTree` object to use for finding\n neighbors for each object accessible with `dict['kdtree']`\n\n A catalog pickle of the form needed can be produced using\n :py:func:`astrobase.lcproc.catalogs.make_lclist` or\n :py:func:`astrobase.lcproc.catalogs.filter_lclist`.\n\n neighbor_radius_arcsec : float\n This indicates the radius in arcsec to search for neighbors for this\n object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,\n and in GAIA.\n\n fileglob : str\n The UNIX file glob to use to search for the light curves in `lcdir`. If\n None, the default value for the light curve format specified will be\n used.\n\n maxobjects : int\n The number of objects to process from `lclist`.\n\n deredden : bool\n This controls if the colors and any color classifications will be\n dereddened using 2MASS DUST.\n\n custom_bandpasses : dict or None\n This is a dict used to define any custom bandpasses in the\n `in_objectinfo` dict you want to make this function aware of and\n generate colors for. 
Use the format below for this dict::\n\n {\n '<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',\n 'label':'<band_label_1>'\n 'colors':[['<bandkey1>-<bandkey2>',\n '<BAND1> - <BAND2>'],\n ['<bandkey3>-<bandkey4>',\n '<BAND3> - <BAND4>']]},\n .\n ...\n .\n '<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',\n 'label':'<band_label_N>'\n 'colors':[['<bandkey1>-<bandkey2>',\n '<BAND1> - <BAND2>'],\n ['<bandkey3>-<bandkey4>',\n '<BAND3> - <BAND4>']]},\n }\n\n Where:\n\n `bandpass_key` is a key to use to refer to this bandpass in the\n `objectinfo` dict, e.g. 'sdssg' for SDSS g band\n\n `twomass_dust_key` is the key to use in the 2MASS DUST result table for\n reddening per band-pass. For example, given the following DUST result\n table (using http://irsa.ipac.caltech.edu/applications/DUST/)::\n\n |Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|\n |char |float |float |float |float |float|\n | |microns| |mags | |mags |\n CTIO U 0.3734 4.107 0.209 4.968 0.253\n CTIO B 0.4309 3.641 0.186 4.325 0.221\n CTIO V 0.5517 2.682 0.137 3.240 0.165\n .\n .\n ...\n\n The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to\n skip DUST lookup and want to pass in a specific reddening magnitude\n for your bandpass, use a float for the value of\n `twomass_dust_key`. If you want to skip DUST lookup entirely for\n this bandpass, use None for the value of `twomass_dust_key`.\n\n `band_label` is the label to use for this bandpass, e.g. 'W1' for\n WISE-1 band, 'u' for SDSS u, etc.\n\n The 'colors' list contains color definitions for all colors you want\n to generate using this bandpass. This list contains elements of the\n form::\n\n ['<bandkey1>-<bandkey2>','<BAND1> - <BAND2>']\n\n where the first item is the bandpass keys making up this color,\n and the second item is the label for this color to be used by the\n frontends. 
An example::\n\n ['sdssu-sdssg','u - g']\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n nworkers : int\n The number of parallel workers to launch.\n\n Returns\n -------\n\n dict\n A dict with key:val pairs of the input light curve filename and the\n output star features pickle for each LC processed."}
{"_id": "q_9183", "text": "This is the parallel worker for the function below.\n\n task[0] = frequency for this worker\n task[1] = times array\n task[2] = mags array\n task[3] = fold_time\n task[4] = j_range\n task[5] = keep_threshold_1\n task[6] = keep_threshold_2\n task[7] = phasebinsize\n\n we don't need errs for the worker."}
{"_id": "q_9184", "text": "This drives the periodicfeatures collection for a list of periodfinding\n pickles.\n\n Parameters\n ----------\n\n pfpkl_list : list of str\n The list of period-finding pickles to use.\n\n lcbasedir : str\n The base directory where the associated light curves are located.\n\n outdir : str\n The directory where the results will be written.\n\n starfeaturesdir : str or None\n The directory containing the `starfeatures-<objectid>.pkl` files for\n each object to use to calculate neighbor proximity light curve features.\n\n fourierorder : int\n The Fourier order to use to generate a sinusoidal function and fit that to\n the phased light curve.\n\n transitparams : list of floats\n The transit depth, duration, and ingress duration to use to generate a\n trapezoid planet transit model fit to the phased light curve. The period\n used is the one provided in `period`, while the epoch is automatically\n obtained from a spline fit to the phased light curve.\n\n ebparams : list of floats\n The primary eclipse depth, eclipse duration, the primary-secondary depth\n ratio, and the phase of the secondary eclipse to use to generate an\n eclipsing binary model fit to the phased light curve. 
The period used is\n the one provided in `period`, while the epoch is automatically obtained\n from a spline fit to the phased light curve.\n\n pdiff_threshold : float\n This is the max difference between periods to consider them the same.\n\n sidereal_threshold : float\n This is the max difference between any of the 'best' periods and the\n sidereal day periods to consider them the same.\n\n sampling_peak_multiplier : float\n This is the minimum multiplicative factor of a 'best' period's\n normalized periodogram peak over the sampling periodogram peak at the\n same period required to accept the 'best' period as possibly real.\n\n sampling_startp, sampling_endp : float\n If the `pgramlist` doesn't have a time-sampling Lomb-Scargle\n periodogram, it will be obtained automatically. Use these kwargs to\n control the minimum and maximum period interval to be searched when\n generating this periodogram.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. 
Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n verbose : bool\n If True, will indicate progress while working.\n\n maxobjects : int\n The total number of objects to process from `pfpkl_list`.\n\n Returns\n -------\n\n Nothing."}
{"_id": "q_9185", "text": "This runs periodic feature generation in parallel for all periodfinding\n pickles in the input list.\n\n Parameters\n ----------\n\n pfpkl_list : list of str\n The list of period-finding pickles to use.\n\n lcbasedir : str\n The base directory where the associated light curves are located.\n\n outdir : str\n The directory where the results will be written.\n\n starfeaturesdir : str or None\n The directory containing the `starfeatures-<objectid>.pkl` files for\n each object to use to calculate neighbor proximity light curve features.\n\n fourierorder : int\n The Fourier order to use to generate a sinusoidal function and fit that to\n the phased light curve.\n\n transitparams : list of floats\n The transit depth, duration, and ingress duration to use to generate a\n trapezoid planet transit model fit to the phased light curve. The period\n used is the one provided in `period`, while the epoch is automatically\n obtained from a spline fit to the phased light curve.\n\n ebparams : list of floats\n The primary eclipse depth, eclipse duration, the primary-secondary depth\n ratio, and the phase of the secondary eclipse to use to generate an\n eclipsing binary model fit to the phased light curve. 
The period used is\n the one provided in `period`, while the epoch is automatically obtained\n from a spline fit to the phased light curve.\n\n pdiff_threshold : float\n This is the max difference between periods to consider them the same.\n\n sidereal_threshold : float\n This is the max difference between any of the 'best' periods and the\n sidereal day periods to consider them the same.\n\n sampling_peak_multiplier : float\n This is the minimum multiplicative factor of a 'best' period's\n normalized periodogram peak over the sampling periodogram peak at the\n same period required to accept the 'best' period as possibly real.\n\n sampling_startp, sampling_endp : float\n If the `pgramlist` doesn't have a time-sampling Lomb-Scargle\n periodogram, it will be obtained automatically. Use these kwargs to\n control the minimum and maximum period interval to be searched when\n generating this periodogram.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. 
Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n verbose : bool\n If True, will indicate progress while working.\n\n maxobjects : int\n The total number of objects to process from `pfpkl_list`.\n\n nworkers : int\n The number of parallel workers to launch to process the input.\n\n Returns\n -------\n\n dict\n A dict containing key: val pairs of the input period-finder result and\n the output periodic feature result pickles for each input pickle is\n returned."}
{"_id": "q_9186", "text": "This runs parallel periodicfeature extraction for a directory of\n periodfinding result pickles.\n\n Parameters\n ----------\n\n pfpkl_dir : str\n The directory containing the pickles to process.\n\n lcbasedir : str\n The directory where all of the associated light curve files are located.\n\n outdir : str\n The directory where all the output will be written.\n\n pfpkl_glob : str\n The UNIX file glob to use to search for period-finder result pickles in\n `pfpkl_dir`.\n\n starfeaturesdir : str or None\n The directory containing the `starfeatures-<objectid>.pkl` files for\n each object to use to calculate neighbor proximity light curve features.\n\n fourierorder : int\n The Fourier order to use to generate a sinusoidal function and fit that to\n the phased light curve.\n\n transitparams : list of floats\n The transit depth, duration, and ingress duration to use to generate a\n trapezoid planet transit model fit to the phased light curve. The period\n used is the one provided in `period`, while the epoch is automatically\n obtained from a spline fit to the phased light curve.\n\n ebparams : list of floats\n The primary eclipse depth, eclipse duration, the primary-secondary depth\n ratio, and the phase of the secondary eclipse to use to generate an\n eclipsing binary model fit to the phased light curve. 
The period used is\n the one provided in `period`, while the epoch is automatically obtained\n from a spline fit to the phased light curve.\n\n pdiff_threshold : float\n This is the max difference between periods to consider them the same.\n\n sidereal_threshold : float\n This is the max difference between any of the 'best' periods and the\n sidereal day periods to consider them the same.\n\n sampling_peak_multiplier : float\n This is the minimum multiplicative factor of a 'best' period's\n normalized periodogram peak over the sampling periodogram peak at the\n same period required to accept the 'best' period as possibly real.\n\n sampling_startp, sampling_endp : float\n If the `pgramlist` doesn't have a time-sampling Lomb-Scargle\n periodogram, it will be obtained automatically. Use these kwargs to\n control the minimum and maximum period interval to be searched when\n generating this periodogram.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. 
Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n verbose : bool\n If True, will indicate progress while working.\n\n maxobjects : int\n The total number of objects to process from `pfpkl_list`.\n\n nworkers : int\n The number of parallel workers to launch to process the input.\n\n Returns\n -------\n\n dict\n A dict containing key: val pairs of the input period-finder result and\n the output periodic feature result pickles for each input pickle is\n returned."}
{"_id": "q_9187", "text": "This parses the header for a catalog file and returns it as a file object.\n\n Parameters\n ----------\n\n xc : str\n The file name of an xmatch catalog prepared previously.\n\n xk : list of str\n This is a list of column names to extract from the xmatch catalog.\n\n Returns\n -------\n\n tuple\n The tuple returned is of the form::\n\n (infd: the file object associated with the opened xmatch catalog,\n catdefdict: a dict describing the catalog column definitions,\n catcolinds: column number indices of the catalog,\n catcoldtypes: the numpy dtypes of the catalog columns,\n catcolnames: the names of each catalog column,\n catcolunits: the units associated with each catalog column)"}
{"_id": "q_9188", "text": "This loads the external xmatch catalogs into a dict for use in an xmatch.\n\n Parameters\n ----------\n\n xmatchto : list of str\n This is a list of paths to all the catalog text files that will be\n loaded.\n\n The text files must be 'CSVs' that use the '|' character as the\n separator between columns. These files should all begin with a header in\n JSON format on lines starting with the '#' character. This header will\n define the catalog and contains the name of the catalog and the column\n definitions. Column definitions must have the column name and the numpy\n dtype of the columns (in the same format as that expected for the\n numpy.genfromtxt function). Any line that does not begin with '#' is\n assumed to be part of the columns in the catalog. An example is shown\n below::\n\n # {\"name\":\"NSVS catalog of variable stars\",\n # \"columns\":[\n # {\"key\":\"objectid\", \"dtype\":\"U20\", \"name\":\"Object ID\", \"unit\": null},\n # {\"key\":\"ra\", \"dtype\":\"f8\", \"name\":\"RA\", \"unit\":\"deg\"},\n # {\"key\":\"decl\",\"dtype\":\"f8\", \"name\": \"Declination\", \"unit\":\"deg\"},\n # {\"key\":\"sdssr\",\"dtype\":\"f8\",\"name\":\"SDSS r\", \"unit\":\"mag\"},\n # {\"key\":\"vartype\",\"dtype\":\"U20\",\"name\":\"Variable type\", \"unit\":null}\n # ],\n # \"colra\":\"ra\",\n # \"coldec\":\"decl\",\n # \"description\":\"Contains variable stars from the NSVS catalog\"}\n objectid1 | 45.0 | -20.0 | 12.0 | detached EB\n objectid2 | 145.0 | 23.0 | 10.0 | RRab\n objectid3 | 12.0 | 11.0 | 14.0 | Cepheid\n .\n .\n .\n\n xmatchkeys : list of lists\n This is the list of lists of column names (as str) to get out of each\n `xmatchto` catalog. This should be the same length as `xmatchto` and\n each element here will apply to the respective file in `xmatchto`.\n\n outfile : str or None\n If this is not None, set this to the name of the pickle to write the\n collected xmatch catalogs to. 
This pickle can then be loaded\n transparently by the :py:func:`astrobase.checkplot.pkl.checkplot_dict`,\n :py:func:`astrobase.checkplot.pkl.checkplot_pickle` functions to provide\n xmatch info to the\n :py:func:`astrobase.checkplot.pkl_xmatch.xmatch_external_catalogs`\n function below.\n\n If this is None, will return the loaded xmatch catalogs directly. This\n will be a huge dict, so make sure you have enough RAM.\n\n Returns\n -------\n\n str or dict\n Based on the `outfile` kwarg, will either return the path to a collected\n xmatch pickle file or the collected xmatch dict."}
{"_id": "q_9189", "text": "Wraps the input angle to 360.0 degrees.\n\n Parameters\n ----------\n\n angle : float\n The angle to wrap around 360.0 deg.\n\n radians : bool\n If True, will assume that the input is in radians. The output will then\n also be in radians.\n\n Returns\n -------\n\n float\n Wrapped angle. If radians is True: input is assumed to be in radians,\n output is also in radians."}
{"_id": "q_9190", "text": "Calculates the great circle angular distance between two coords.\n\n This calculates the great circle angular distance in arcseconds between two\n coordinates (ra1,dec1) and (ra2,dec2). This is basically a clone of GCIRC\n from the IDL Astrolib.\n\n Parameters\n ----------\n\n ra1,dec1 : float or array-like\n The first coordinate's right ascension and declination value(s) in\n decimal degrees.\n\n ra2,dec2 : float or array-like\n The second coordinate's right ascension and declination value(s) in\n decimal degrees.\n\n Returns\n -------\n\n float or array-like\n Great circle distance between the two coordinates in arcseconds.\n\n Notes\n -----\n\n If (`ra1`, `dec1`) is scalar and (`ra2`, `dec2`) is scalar: the result is a\n float distance in arcseconds.\n\n If (`ra1`, `dec1`) is scalar and (`ra2`, `dec2`) is array-like: the result\n is an np.array with distance in arcseconds between (`ra1`, `dec1`) and each\n element of (`ra2`, `dec2`).\n\n If (`ra1`, `dec1`) is array-like and (`ra2`, `dec2`) is scalar: the result\n is an np.array with distance in arcseconds between (`ra2`, `dec2`) and each\n element of (`ra1`, `dec1`).\n\n If (`ra1`, `dec1`) and (`ra2`, `dec2`) are both array-like: the result is an\n np.array with the pair-wise distance in arcseconds between each element of\n the two coordinate lists. In this case, if the input array-likes are not the\n same length, then excess elements of the longer one will be ignored."}
{"_id": "q_9191", "text": "This calculates the total proper motion of an object.\n\n Parameters\n ----------\n\n pmra : float or array-like\n The proper motion(s) in right ascension, measured in mas/yr.\n\n pmdecl : float or array-like\n The proper motion(s) in declination, measured in mas/yr.\n\n decl : float or array-like\n The declination of the object(s) in decimal degrees.\n\n Returns\n -------\n\n float or array-like\n The total proper motion(s) of the object(s) in mas/yr."}
{"_id": "q_9192", "text": "This converts from galactic coords to equatorial coordinates.\n\n Parameters\n ----------\n\n gl : float or array-like\n Galactic longitude values(s) in decimal degrees.\n\n gb : float or array-like\n Galactic latitude value(s) in decimal degrees.\n\n Returns\n -------\n\n tuple of (float, float) or tuple of (np.array, np.array)\n The equatorial coordinates (RA, DEC) for each element of the input\n (`gl`, `gb`) in decimal degrees. These are reported in the ICRS frame."}
{"_id": "q_9193", "text": "This returns the image-plane projected xi-eta coords for inra, indecl.\n\n Parameters\n ----------\n\n inra,indecl : array-like\n The equatorial coordinates to get the xi, eta coordinates for in decimal\n degrees or radians.\n\n incenterra,incenterdecl : float\n The center coordinate values to use to calculate the plane-projected\n coordinates around.\n\n deg : bool\n If this is True, the input angles are assumed to be in degrees and the\n output is in degrees as well.\n\n Returns\n -------\n\n tuple of np.arrays\n This is the (`xi`, `eta`) coordinate pairs corresponding to the\n image-plane projected coordinates for each pair of input equatorial\n coordinates in (`inra`, `indecl`)."}
{"_id": "q_9194", "text": "This generates fake planet transit light curves.\n\n Parameters\n ----------\n\n times : np.array\n This is an array of time values that will be used as the time base.\n\n mags,errs : np.array\n These arrays will have the model added to them. If either is\n None, `np.full_like(times, 0.0)` will be used as a substitute and the model\n light curve will be centered around 0.0.\n\n paramdists : dict\n This is a dict containing parameter distributions to use for the\n model params, containing the following keys ::\n\n {'transitperiod', 'transitdepth', 'transitduration'}\n\n The values of these keys should all be 'frozen' scipy.stats distribution\n objects, e.g.:\n\n https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions\n The variability epoch will be automatically chosen from a uniform\n distribution between `times.min()` and `times.max()`.\n\n The ingress duration will be automatically chosen from a uniform\n distribution ranging from 0.05 to 0.5 of the transitduration.\n\n The transitdepth will be flipped automatically as appropriate if\n `magsarefluxes=True`.\n\n magsarefluxes : bool\n If the generated time series is meant to be a flux time-series, set this\n to True to get the correct sign of variability amplitude.\n\n Returns\n -------\n\n dict\n A dict of the form below is returned::\n\n {'vartype': 'planet',\n 'params': {'transitperiod': generated value of period,\n 'transitepoch': generated value of epoch,\n 'transitdepth': generated value of transit depth,\n 'transitduration': generated value of transit duration,\n 'ingressduration': generated value of transit ingress\n duration},\n 'times': the model times,\n 'mags': the model mags,\n 'errs': the model errs,\n 'varperiod': the generated period of variability == 'transitperiod'\n 'varamplitude': the generated amplitude of\n variability == 'transitdepth'}"}
{"_id": "q_9195", "text": "This generates fake flare light curves.\n\n Parameters\n ----------\n\n times : np.array\n This is an array of time values that will be used as the time base.\n\n mags,errs : np.array\n These arrays will have the model added to them. If either is\n None, `np.full_like(times, 0.0)` will be used as a substitute and the model\n light curve will be centered around 0.0.\n\n paramdists : dict\n This is a dict containing parameter distributions to use for the\n model params, containing the following keys ::\n\n {'amplitude', 'nflares', 'risestdev', 'decayconst'}\n\n The values of these keys should all be 'frozen' scipy.stats distribution\n objects, e.g.:\n\n https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions\n The `flare_peak_time` for each flare will be generated automatically\n between `times.min()` and `times.max()` using a uniform distribution.\n\n The `amplitude` will be flipped automatically as appropriate if\n `magsarefluxes=True`.\n\n magsarefluxes : bool\n If the generated time series is meant to be a flux time-series, set this\n to True to get the correct sign of variability amplitude.\n\n Returns\n -------\n\n dict\n A dict of the form below is returned::\n\n {'vartype': 'flare',\n 'params': {'amplitude': generated value of flare amplitudes,\n 'nflares': generated value of number of flares,\n 'risestdev': generated value of stdev of rise time,\n 'decayconst': generated value of decay constant,\n 'peaktime': generated value of flare peak time},\n 'times': the model times,\n 'mags': the model mags,\n 'errs': the model errs,\n 'varamplitude': the generated amplitude of\n variability == 'amplitude'}"}
{"_id": "q_9196", "text": "This wraps `process_fakelc` for `make_fakelc_collection` below.\n\n Parameters\n ----------\n\n task : tuple\n This is of the form::\n\n task[0] = lcfile\n task[1] = outdir\n task[2] = magrms\n task[3] = dict with keys: {'lcformat', 'timecols', 'magcols',\n 'errcols', 'randomizeinfo'}\n\n Returns\n -------\n\n tuple\n This returns a tuple of the form::\n\n (fakelc_fpath,\n fakelc_lcdict['columns'],\n fakelc_lcdict['objectinfo'],\n fakelc_lcdict['moments'])"}
{"_id": "q_9197", "text": "This adds variability and noise to all fake LCs in `simbasedir`.\n\n If an object is marked as variable in the `fakelcs-info.pkl` file in\n `simbasedir`, a variable signal will be added to its light curve based on\n its selected type, default period and amplitude distribution, the\n appropriate params, etc. The epochs for each variable object will be chosen\n uniformly from its time-range (and may not necessarily fall on an actual\n observed time). Nonvariable objects will only have noise added as determined\n by their params, but no variable signal will be added.\n\n Parameters\n ----------\n\n simbasedir : str\n The directory containing the fake LCs to process.\n\n override_paramdists : dict\n This can be used to override the stored variable parameters in each fake\n LC. It should be a dict of the following form::\n\n {'<vartype1>': {'<param1>: a scipy.stats distribution function or\n the np.random.randint function,\n .\n .\n .\n '<paramN>: a scipy.stats distribution function\n or the np.random.randint function}\n\n for any vartype in VARTYPE_LCGEN_MAP. These are used to override the\n default parameter distributions for each variable type.\n\n overwrite_existingvar : bool\n If this is True, then will overwrite any existing variability in the\n input fake LCs in `simbasedir`.\n\n Returns\n -------\n\n dict\n This returns a dict containing the fake LC filenames as keys and\n variability info for each as values."}
{"_id": "q_9198", "text": "This finds flares in time series using the method in Walkowicz+ 2011.\n\n FIXME: finish this.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input time-series to find flares in.\n\n smoothbinsize : int\n The number of consecutive light curve points to smooth over in the time\n series using a Savitzky-Golay filter. The smoothed light curve is then\n subtracted from the actual light curve to remove trends that potentially\n last `smoothbinsize` light curve points. The default value is chosen as\n ~6.5 hours (97 x 4 minute cadence for HATNet/HATSouth).\n\n flare_minsigma : float\n The minimum sigma above the median LC level to designate points as\n belonging to possible flares.\n\n flare_maxcadencediff : int\n The maximum number of light curve points apart each possible flare event\n measurement is allowed to be. If this is 1, then we'll look for\n consecutive measurements.\n\n flare_mincadencepoints : int\n The minimum number of light curve points (each `flare_maxcadencediff`\n points apart) required that are at least `flare_minsigma` above the\n median light curve level to call an event a flare.\n\n magsarefluxes: bool\n If True, indicates that mags is actually an array of fluxes.\n\n savgol_polyorder: int\n The polynomial order of the function used by the Savitzky-Golay filter.\n\n savgol_kwargs : extra kwargs\n Any remaining keyword arguments are passed directly to the\n `savgol_filter` function from `scipy.signal`.\n\n Returns\n -------\n\n (nflares, flare_indices) : tuple\n Returns the total number of flares found and their time-indices (start,\n end) as tuples."}
{"_id": "q_9199", "text": "This calculates the relative peak heights for first npeaks in ACF.\n\n Usually, the first peak or the second peak (if its peak height > first peak)\n corresponds to the correct lag. When we know the correct lag, the period is\n then::\n\n bestperiod = time[lags == bestlag] - time[0]\n\n Parameters\n ----------\n\n lags : np.array\n An array of lags that the ACF is calculated at.\n\n acf : np.array\n The array containing the ACF values.\n\n npeaks : int\n The maximum number of peaks to consider when finding peak heights.\n\n searchinterval : int\n From `scipy.signal.argrelmax`: \"How many points on each side to use for\n the comparison to consider comparator(n, n+x) to be True.\" This\n effectively sets how many points on each side of the current peak will be\n used to check if the current peak is the local maximum.\n\n Returns\n -------\n\n dict\n This returns a dict of the following form::\n\n {'maxinds':the indices of the lag array where maxes are,\n 'maxacfs':the ACF values at each max,\n 'maxlags':the lag values at each max,\n 'mininds':the indices of the lag array where mins are,\n 'minacfs':the ACF values at each min,\n 'minlags':the lag values at each min,\n 'relpeakheights':the relative peak heights of each rel. ACF peak,\n 'relpeaklags':the lags at each rel. ACF peak found,\n 'peakindices':the indices of arrays where each rel. ACF peak is,\n 'bestlag':the lag value with the largest rel. ACF peak height,\n 'bestpeakheight':the largest rel. ACF peak height,\n 'bestpeakindex':the largest rel. ACF peak's number in all peaks}"}
{"_id": "q_9200", "text": "This is yet another alternative to calculate the autocorrelation.\n\n Taken from: `Bayesian Methods for Hackers by Cameron Pilon <http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter3_MCMC/Chapter3.ipynb#Autocorrelation>`_\n\n (This should be the fastest method to calculate ACFs.)\n\n Parameters\n ----------\n\n mags : np.array\n This is the magnitudes array. MUST NOT have any nans.\n\n lag : float\n The specific lag value to calculate the auto-correlation for. This MUST\n be less than the total number of observations in `mags`.\n\n maglen : int\n The number of elements in the `mags` array.\n\n magmed : float\n The median of the `mags` array.\n\n magstd : float\n The standard deviation of the `mags` array.\n\n Returns\n -------\n\n float\n The auto-correlation at this specific `lag` value."}
{"_id": "q_9201", "text": "This calculates the ACF of a light curve.\n\n This will pre-process the light curve to fill in all the gaps and normalize\n everything to zero. If `fillgaps = 'noiselevel'`, fills the gaps with the\n noise level obtained via the procedure above. If `fillgaps = 'nan'`, fills\n the gaps with `np.nan`.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The measurement time-series and associated errors.\n\n maxlags : int\n The maximum number of lags to calculate.\n\n func : Python function\n This is a function to calculate the lags.\n\n fillgaps : 'noiselevel' or float\n This sets what to use to fill in gaps in the time series. If this is\n 'noiselevel', will smooth the light curve using a point window size of\n `filterwindow` (this should be an odd integer), subtract the smoothed LC\n from the actual LC and estimate the RMS. This RMS will be used to fill\n in the gaps. Other useful values here are 0.0 and np.nan.\n\n filterwindow : int\n The light curve's smoothing filter window size to use if\n `fillgaps='noiselevel'`.\n\n forcetimebin : None or float\n This is used to force a particular cadence in the light curve other than\n the automatically determined cadence. This effectively rebins the light\n curve to this cadence. This should be in the same time units as `times`.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n magsarefluxes : bool\n If your input measurements in `mags` are actually fluxes instead of\n mags, set this to True.\n\n verbose : bool\n If True, will indicate progress and report errors.\n\n Returns\n -------\n\n dict\n A dict of the following form is returned::\n\n {'itimes': the interpolated time values after gap-filling,\n 'imags': the interpolated mag/flux values after gap-filling,\n 'ierrs': the interpolated err values after gap-filling,\n 'cadence': the cadence of the output mag/flux time-series,\n 'minitime': the minimum value of the interpolated times array,\n 'lags': the lags used to calculate the auto-correlation function,\n 'acf': the value of the ACF at each lag used}"}
{"_id": "q_9202", "text": "This calculates the harmonic AoV theta statistic for a frequency.\n\n This is a mostly faithful translation of the inner loop in `aovper.f90`. See\n the following for details:\n\n - http://users.camk.edu.pl/alex/\n - Schwarzenberg-Czerny (`1996\n <http://iopscience.iop.org/article/10.1086/309985/meta>`_)\n\n Schwarzenberg-Czerny (1996) equation 11::\n\n theta_prefactor = (K - 2N - 1)/(2N)\n theta_top = sum(c_n*c_n) (from n=0 to n=2N)\n theta_bot = variance(timeseries) - sum(c_n*c_n) (from n=0 to n=2N)\n\n theta = theta_prefactor * (theta_top/theta_bot)\n\n N = number of harmonics (nharmonics)\n K = length of time series (times.size)\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input time-series to calculate the test statistic for. These should\n all be free of nans/infs and be normalized to zero.\n\n frequency : float\n The test frequency to calculate the statistic for.\n\n nharmonics : int\n The number of harmonics to calculate up to. The recommended range is 4\n to 8.\n\n magvariance : float\n This is the (weighted by errors) variance of the magnitude time\n series. We provide it as a pre-calculated value here so we don't have to\n re-calculate it for every worker.\n\n Returns\n -------\n\n aov_harmonic_theta : float\n The value of the harmonic AoV theta for the specified test `frequency`."}
{"_id": "q_9203", "text": "This opens a new database connection.\n\n Parameters\n ----------\n\n database : str\n Name of the database to connect to.\n\n user : str\n User name of the database server user.\n\n password : str\n Password for the database server user.\n\n host : str\n Database hostname or IP address to connect to."}
{"_id": "q_9204", "text": "This xmatches external catalogs to a collection of checkplots.\n\n Parameters\n ----------\n\n cplist : list of str\n This is the list of checkplot pickle files to process.\n\n xmatchpkl : str\n The filename of a pickle prepared beforehand with the\n `checkplot.pkl_xmatch.load_xmatch_external_catalogs` function,\n containing collected external catalogs to cross-match the objects in the\n input `cplist` against.\n\n xmatchradiusarcsec : float\n The match radius to use for the cross-match in arcseconds.\n\n updateexisting : bool\n If this is True, will only update the `xmatch` dict in each checkplot\n pickle with any new cross-matches to the external catalogs. If False,\n will overwrite the `xmatch` dict with results from the current run.\n\n resultstodir : str or None\n If this is provided, then it must be a directory to write the resulting\n checkplots to after xmatch is done. This can be used to keep the\n original checkplots in pristine condition for some reason.\n\n Returns\n -------\n\n dict\n Returns a dict with keys = input checkplot pickle filenames and vals =\n xmatch status dict for each checkplot pickle."}
{"_id": "q_9205", "text": "This xmatches external catalogs to all checkplots in a directory.\n\n Parameters\n ----------\n\n cpdir : str\n This is the directory to search in for checkplots.\n\n xmatchpkl : str\n The filename of a pickle prepared beforehand with the\n `checkplot.pkl_xmatch.load_xmatch_external_catalogs` function,\n containing collected external catalogs to cross-match the objects in the\n input `cplist` against.\n\n cpfileglob : str\n This is the UNIX fileglob to use in searching for checkplots.\n\n xmatchradiusarcsec : float\n The match radius to use for the cross-match in arcseconds.\n\n updateexisting : bool\n If this is True, will only update the `xmatch` dict in each checkplot\n pickle with any new cross-matches to the external catalogs. If False,\n will overwrite the `xmatch` dict with results from the current run.\n\n resultstodir : str or None\n If this is provided, then it must be a directory to write the resulting\n checkplots to after xmatch is done. This can be used to keep the\n original checkplots in pristine condition for some reason.\n\n Returns\n -------\n\n dict\n Returns a dict with keys = input checkplot pickle filenames and vals =\n xmatch status dict for each checkplot pickle."}
{"_id": "q_9206", "text": "This makes color-mag diagrams for all checkplot pickles in the provided\n list.\n\n Can make an arbitrary number of CMDs given lists of x-axis colors and y-axis\n mags to use.\n\n Parameters\n ----------\n\n cplist : list of str\n This is the list of checkplot pickles to process.\n\n outpkl : str\n The filename of the output pickle that will contain the color-mag\n information for all objects in the checkplots specified in `cplist`.\n\n color_mag1 : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that\n will be used as color_1 in the equation::\n\n x-axis color = color_mag1 - color_mag2\n\n color_mag2 : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that\n will be used as color_2 in the equation::\n\n x-axis color = color_mag1 - color_mag2\n\n yaxis_mag : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that\n will be used as the (absolute) magnitude y-axis of the color-mag\n diagrams.\n\n Returns\n -------\n\n str\n The path to the generated CMD pickle file for the collection of objects\n in the input checkplot list.\n\n Notes\n -----\n\n This can make many CMDs in one go. For example, the default kwargs for\n `color_mag1`, `color_mag2`, and `yaxis_mag` result in two CMDs generated and\n written to the output pickle file:\n\n - CMD1 -> gaiamag - kmag on the x-axis vs gaia_absmag on the y-axis\n - CMD2 -> sdssg - kmag on the x-axis vs rpmj (J reduced PM) on the y-axis"}
{"_id": "q_9207", "text": "This makes CMDs for all checkplot pickles in the provided directory.\n\n Can make an arbitrary number of CMDs given lists of x-axis colors and y-axis\n mags to use.\n\n Parameters\n ----------\n\n cpdir : list of str\n This is the directory to get the list of input checkplot pickles from.\n\n outpkl : str\n The filename of the output pickle that will contain the color-mag\n information for all objects in the checkplots specified in `cplist`.\n\n cpfileglob : str\n The UNIX fileglob to use to search for checkplot pickle files.\n\n color_mag1 : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that\n will be used as color_1 in the equation::\n\n x-axis color = color_mag1 - color_mag2\n\n color_mag2 : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that\n will be used as color_2 in the equation::\n\n x-axis color = color_mag1 - color_mag2\n\n yaxis_mag : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that\n will be used as the (absolute) magnitude y-axis of the color-mag\n diagrams.\n\n Returns\n -------\n\n str\n The path to the generated CMD pickle file for the collection of objects\n in the input checkplot directory.\n\n Notes\n -----\n\n This can make many CMDs in one go. For example, the default kwargs for\n `color_mag1`, `color_mag2`, and `yaxis_mag` result in two CMDs generated and\n written to the output pickle file:\n\n - CMD1 -> gaiamag - kmag on the x-axis vs gaia_absmag on the y-axis\n - CMD2 -> sdssg - kmag on the x-axis vs rpmj (J reduced PM) on the y-axis"}
{"_id": "q_9208", "text": "This adds CMDs for each object in cplist.\n\n Parameters\n ----------\n\n cplist : list of str\n This is the input list of checkplot pickles to add the CMDs to.\n\n cmdpkl : str\n This is the filename of the CMD pickle created previously.\n\n require_cmd_magcolor : bool\n If this is True, a CMD plot will not be made if the color and mag keys\n required by the CMD are not present or are nan in each checkplot's\n objectinfo dict.\n\n save_cmd_pngs : bool\n If this is True, then will save the CMD plots that were generated and\n added back to the checkplotdict as PNGs to the same directory as\n `cpx`.\n\n Returns\n -------\n\n Nothing."}
{"_id": "q_9209", "text": "This adds CMDs for each object in cpdir.\n\n Parameters\n ----------\n\n cpdir : list of str\n This is the directory to search for checkplot pickles.\n\n cmdpkl : str\n This is the filename of the CMD pickle created previously.\n\n cpfileglob : str\n The UNIX fileglob to use when searching for checkplot pickles to operate\n on.\n\n require_cmd_magcolor : bool\n If this is True, a CMD plot will not be made if the color and mag keys\n required by the CMD are not present or are nan in each checkplot's\n objectinfo dict.\n\n save_cmd_pngs : bool\n If this is True, then will save the CMD plots that were generated and\n added back to the checkplotdict as PNGs to the same directory as\n `cpx`.\n\n Returns\n -------\n\n Nothing."}
{"_id": "q_9210", "text": "This updates objectinfo for a list of checkplots.\n\n Useful in cases where a previous round of GAIA/finderchart/external catalog\n acquisition failed. This will preserve the following keys in the checkplots\n if they exist:\n\n comments\n varinfo\n objectinfo.objecttags\n\n Parameters\n ----------\n\n cplist : list of str\n A list of checkplot pickle file names to update.\n\n liststartindex : int\n The index of the input list to start working at.\n\n maxobjects : int\n The maximum number of objects to process in this run. Use this with\n `liststartindex` to effectively distribute working on a large list of\n input checkplot pickles over several sessions or machines.\n\n nworkers : int\n The number of parallel workers that will work on the checkplot\n update process.\n\n fast_mode : bool or float\n This runs the external catalog operations in a \"fast\" mode, with short\n timeouts and not trying to hit external catalogs that take a long time\n to respond. See the docstring for\n `checkplot.pkl_utils._pkl_finder_objectinfo` for details on how this\n works. If this is True, will run in \"fast\" mode with default timeouts (5\n seconds in most cases). If this is a float, will run in \"fast\" mode with\n the provided timeout value in seconds.\n\n findercmap : str or matplotlib.cm.Colormap object\n The Colormap object to use for the finder chart image.\n\n finderconvolve : astropy.convolution.Kernel object or None\n If not None, the Kernel object to use for convolving the finder image.\n\n deredden_objects : bool\n If this is True, will use the 2MASS DUST service to get extinction\n coefficients in various bands, and then try to deredden the magnitudes\n and colors of the object already present in the checkplot's objectinfo\n dict.\n\n custom_bandpasses : dict\n This is a dict used to provide custom bandpass definitions for any\n magnitude measurements in the objectinfo dict that are not automatically\n recognized by the `varclass.starfeatures.color_features` function. See\n its docstring for details on the required format.\n\n gaia_submit_timeout : float\n Sets the timeout in seconds to use when submitting a request to look up\n the object's information to the GAIA service. Note that if `fast_mode`\n is set, this is ignored.\n\n gaia_submit_tries : int\n Sets the maximum number of times the GAIA services will be contacted to\n obtain this object's information. If `fast_mode` is set, this is\n ignored, and the services will be contacted only once (meaning that a\n failure to respond will be silently ignored and no GAIA data will be\n added to the checkplot's objectinfo dict).\n\n gaia_max_timeout : float\n Sets the timeout in seconds to use when waiting for the GAIA service to\n respond to our request for the object's information. Note that if\n `fast_mode` is set, this is ignored.\n\n gaia_mirror : str\n This sets the GAIA mirror to use. This is a key in the\n `services.gaia.GAIA_URLS` dict which defines the URLs to hit for each\n mirror.\n\n complete_query_later : bool\n If this is True, saves the state of GAIA queries that are not yet\n complete when `gaia_max_timeout` is reached while waiting for the GAIA\n service to respond to our request. A later call for GAIA info on the\n same object will attempt to pick up the results from the existing query\n if it's completed. If `fast_mode` is True, this is ignored.\n\n lclistpkl : dict or str\n If this is provided, must be a dict resulting from reading a catalog\n produced by the `lcproc.catalogs.make_lclist` function or a str path\n pointing to the pickle file produced by that function. This catalog is\n used to find neighbors of the current object in the current light curve\n collection. Looking at neighbors of the object within the radius\n specified by `nbrradiusarcsec` is useful for light curves produced by\n instruments that have a large pixel scale, so are susceptible to\n blending of variability and potential confusion of neighbor variability\n with that of the actual object being looked at. If this is None, no\n neighbor lookups will be performed.\n\n nbrradiusarcsec : float\n The radius in arcseconds to use for a search conducted around the\n coordinates of this object to look for any potential confusion and\n blending of variability amplitude caused by their proximity.\n\n maxnumneighbors : int\n The maximum number of neighbors that will have their light curves and\n magnitudes noted in this checkplot as potential blends with the target\n object.\n\n plotdpi : int\n The resolution in DPI of the plots to generate in this function\n (e.g. the finder chart, etc.)\n\n findercachedir : str\n The path to the astrobase cache directory for finder chart downloads\n from the NASA SkyView service.\n\n verbose : bool\n If True, will indicate progress and warn about potential problems.\n\n Returns\n -------\n\n list of str\n Paths to the updated checkplot pickle files."}
{"_id": "q_9211", "text": "This updates the objectinfo for a directory of checkplot pickles.\n\n Useful in cases where a previous round of GAIA/finderchart/external catalog\n acquisition failed. This will preserve the following keys in the checkplots\n if they exist:\n\n comments\n varinfo\n objectinfo.objecttags\n\n Parameters\n ----------\n\n cpdir : str\n The directory to look for checkplot pickles in.\n\n cpglob : str\n The UNIX fileglob to use when searching for checkplot pickle files.\n\n liststartindex : int\n The index of the input list to start working at.\n\n maxobjects : int\n The maximum number of objects to process in this run. Use this with\n `liststartindex` to effectively distribute working on a large list of\n input checkplot pickles over several sessions or machines.\n\n nworkers : int\n The number of parallel workers that will work on the checkplot\n update process.\n\n fast_mode : bool or float\n This runs the external catalog operations in a \"fast\" mode, with short\n timeouts and not trying to hit external catalogs that take a long time\n to respond. See the docstring for\n `checkplot.pkl_utils._pkl_finder_objectinfo` for details on how this\n works. If this is True, will run in \"fast\" mode with default timeouts (5\n seconds in most cases). If this is a float, will run in \"fast\" mode with\n the provided timeout value in seconds.\n\n findercmap : str or matplotlib.cm.Colormap object\n The Colormap object to use for the finder chart image.\n\n finderconvolve : astropy.convolution.Kernel object or None\n If not None, the Kernel object to use for convolving the finder image.\n\n deredden_objects : bool\n If this is True, will use the 2MASS DUST service to get extinction\n coefficients in various bands, and then try to deredden the magnitudes\n and colors of the object already present in the checkplot's objectinfo\n dict.\n\n custom_bandpasses : dict\n This is a dict used to provide custom bandpass definitions for any\n magnitude measurements in the objectinfo dict that are not automatically\n recognized by the `varclass.starfeatures.color_features` function. See\n its docstring for details on the required format.\n\n gaia_submit_timeout : float\n Sets the timeout in seconds to use when submitting a request to look up\n the object's information to the GAIA service. Note that if `fast_mode`\n is set, this is ignored.\n\n gaia_submit_tries : int\n Sets the maximum number of times the GAIA services will be contacted to\n obtain this object's information. If `fast_mode` is set, this is\n ignored, and the services will be contacted only once (meaning that a\n failure to respond will be silently ignored and no GAIA data will be\n added to the checkplot's objectinfo dict).\n\n gaia_max_timeout : float\n Sets the timeout in seconds to use when waiting for the GAIA service to\n respond to our request for the object's information. Note that if\n `fast_mode` is set, this is ignored.\n\n gaia_mirror : str\n This sets the GAIA mirror to use. This is a key in the\n `services.gaia.GAIA_URLS` dict which defines the URLs to hit for each\n mirror.\n\n complete_query_later : bool\n If this is True, saves the state of GAIA queries that are not yet\n complete when `gaia_max_timeout` is reached while waiting for the GAIA\n service to respond to our request. A later call for GAIA info on the\n same object will attempt to pick up the results from the existing query\n if it's completed. If `fast_mode` is True, this is ignored.\n\n lclistpkl : dict or str\n If this is provided, must be a dict resulting from reading a catalog\n produced by the `lcproc.catalogs.make_lclist` function or a str path\n pointing to the pickle file produced by that function. This catalog is\n used to find neighbors of the current object in the current light curve\n collection. Looking at neighbors of the object within the radius\n specified by `nbrradiusarcsec` is useful for light curves produced by\n instruments that have a large pixel scale, so are susceptible to\n blending of variability and potential confusion of neighbor variability\n with that of the actual object being looked at. If this is None, no\n neighbor lookups will be performed.\n\n nbrradiusarcsec : float\n The radius in arcseconds to use for a search conducted around the\n coordinates of this object to look for any potential confusion and\n blending of variability amplitude caused by their proximity.\n\n maxnumneighbors : int\n The maximum number of neighbors that will have their light curves and\n magnitudes noted in this checkplot as potential blends with the target\n object.\n\n plotdpi : int\n The resolution in DPI of the plots to generate in this function\n (e.g. the finder chart, etc.)\n\n findercachedir : str\n The path to the astrobase cache directory for finder chart downloads\n from the NASA SkyView service.\n\n verbose : bool\n If True, will indicate progress and warn about potential problems.\n\n Returns\n -------\n\n list of str\n Paths to the updated checkplot pickle files."}
{"_id": "q_9212", "text": "This gets the required keys from the requested file.\n\n Parameters\n ----------\n\n task : tuple\n Task is a two element tuple::\n\n - task[0] is the dict to work on\n\n - task[1] is a list of lists of str indicating all the key addresses to\n extract items from the dict for\n\n Returns\n -------\n\n list\n This is a list of all of the items at the requested key addresses."}
{"_id": "q_9213", "text": "This is a double inverted gaussian.\n\n Parameters\n ----------\n\n x : np.array\n The items at which the Gaussian is evaluated.\n\n amp1,amp2 : float\n The amplitude of Gaussian 1 and Gaussian 2.\n\n loc1,loc2 : float\n The central value of Gaussian 1 and Gaussian 2.\n\n std1,std2 : float\n The standard deviation of Gaussian 1 and Gaussian 2.\n\n Returns\n -------\n\n np.array\n Returns a double inverted Gaussian function evaluated at the items in\n `x`, using the provided parameters of `amp`, `loc`, and `std` for two\n component Gaussians 1 and 2."}
{"_id": "q_9214", "text": "This returns a double eclipse shaped function.\n\n Suitable for first order modeling of eclipsing binaries.\n\n Parameters\n ----------\n\n ebparams : list of float\n This contains the parameters for the eclipsing binary::\n\n ebparams = [period (time),\n epoch (time),\n pdepth: primary eclipse depth (mags),\n pduration: primary eclipse duration (phase),\n psdepthratio: primary-secondary eclipse depth ratio,\n secondaryphase: center phase of the secondary eclipse]\n\n `period` is the period in days.\n\n `epoch` is the time of minimum in JD.\n\n `pdepth` is the depth of the primary eclipse.\n\n - for magnitudes -> pdepth should be < 0\n - for fluxes -> pdepth should be > 0\n\n `pduration` is the length of the primary eclipse in phase.\n\n `psdepthratio` is the ratio in the eclipse depths:\n `depth_secondary/depth_primary`. This is generally the same as the ratio\n of the `T_effs` of the two stars.\n\n `secondaryphase` is the phase at which the minimum of the secondary\n eclipse is located. This effectively parameterizes eccentricity.\n\n All of these will then have fitted values after the fit is done.\n\n times,mags,errs : np.array\n The input time-series of measurements and associated errors for which\n the eclipse model will be generated. The times will be used to generate\n model mags, and the input `times`, `mags`, and `errs` will be resorted\n by model phase and returned.\n\n Returns\n -------\n\n (modelmags, phase, ptimes, pmags, perrs) : tuple\n Returns the model mags and phase values. Also returns the input `times`,\n `mags`, and `errs` sorted by the model's phase."}
{"_id": "q_9215", "text": "Converts given J, H, Ks mags to a B magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted B band magnitude."}
{"_id": "q_9216", "text": "Converts given J, H, Ks mags to a V magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted V band magnitude."}
{"_id": "q_9217", "text": "Converts given J, H, Ks mags to an R magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted R band magnitude."}
{"_id": "q_9218", "text": "Converts given J, H, Ks mags to an I magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted I band magnitude."}
{"_id": "q_9219", "text": "Converts given J, H, Ks mags to an SDSS u magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted SDSS u band magnitude."}
{"_id": "q_9220", "text": "Converts given J, H, Ks mags to an SDSS g magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted SDSS g band magnitude."}
{"_id": "q_9221", "text": "Converts given J, H, Ks mags to an SDSS i magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted SDSS i band magnitude."}
{"_id": "q_9222", "text": "Converts given J, H, Ks mags to an SDSS z magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted SDSS z band magnitude."}
{"_id": "q_9223", "text": "Calculates the Schwarzenberg-Czerny AoV statistic at a test frequency.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input time-series and associated errors.\n\n frequency : float\n The test frequency to calculate the theta statistic at.\n\n binsize : float\n The phase bin size to use.\n\n minbin : int\n The minimum number of items in a phase bin to consider in the\n calculation of the statistic.\n\n Returns\n -------\n\n theta_aov : float\n The value of the AoV statistic at the specified `frequency`."}
{"_id": "q_9224", "text": "This just puts all of the period-finders on a single periodogram.\n\n This will renormalize all of the periodograms so their values lie between 0\n and 1, with values lying closer to 1 being more significant. Periodograms\n that give the same best periods will have their peaks line up together.\n\n Parameters\n ----------\n\n pflist : list of dict\n This is a list of result dicts from any of the period-finders in\n periodbase. To use your own period-finders' results here, make sure the\n result dict is of the form and has at least the keys below::\n\n {'periods': np.array of all periods searched by the period-finder,\n 'lspvals': np.array of periodogram power value for each period,\n 'bestperiod': a float value that is the period with the highest\n peak in the periodogram, i.e. the most-likely actual\n period,\n 'method': a three-letter code naming the period-finder used; must\n be one of the keys in the\n `astrobase.periodbase.METHODLABELS` dict,\n 'nbestperiods': a list of the periods corresponding to periodogram\n peaks (`nbestlspvals` below) to annotate on the\n periodogram plot so they can be called out\n visually,\n 'nbestlspvals': a list of the power values associated with\n periodogram peaks to annotate on the periodogram\n plot so they can be called out visually; should be\n the same length as `nbestperiods` above,\n 'kwargs': dict of kwargs passed to your own period-finder function}\n\n outfile : str\n This is the output file to write the output to. NOTE: EPS/PS won't work\n because we use alpha transparency to better distinguish between the\n various periodograms.\n\n addmethods : bool\n If this is True, will add all of the normalized periodograms together,\n then renormalize them to between 0 and 1. In this way, if all of the\n period-finders agree on something, it'll stand out easily. FIXME:\n implement this kwarg.\n\n Returns\n -------\n\n str\n The name of the generated plot file."}
{"_id": "q_9225", "text": "This returns a BATMAN planetary transit model.\n\n Parameters\n ----------\n\n times : np.array\n The times at which the model will be evaluated.\n\n t0 : float\n The time of periastron for the transit.\n\n per : float\n The orbital period of the planet.\n\n rp : float\n The radius of the planet (in units of the stellar radius).\n\n a : float\n The semi-major axis of the planet's orbit (in units of the stellar\n radius).\n\n inc : float\n The orbital inclination (in degrees).\n\n ecc : float\n The eccentricity of the orbit.\n\n w : float\n The longitude of periastron (in degrees).\n\n u : list of floats\n The limb darkening coefficients specific to the limb darkening model\n used.\n\n limb_dark : {\"uniform\", \"linear\", \"quadratic\", \"square-root\", \"logarithmic\", \"exponential\", \"power2\", \"custom\"}\n The type of limb darkening model to use. See the full list here:\n\n https://www.cfa.harvard.edu/~lkreidberg/batman/tutorial.html#limb-darkening-options\n\n exp_time_minutes : float\n The amount of time to 'smear' the transit LC points over to simulate a\n long exposure time.\n\n supersample_factor : int\n The number of supersampled time data points to average the lightcurve\n model over.\n\n Returns\n -------\n\n (params, batman_model) : tuple\n The returned tuple contains the params list and the generated\n `batman.TransitModel` object."}
{"_id": "q_9226", "text": "Assume priors on all parameters have uniform probability."}
{"_id": "q_9227", "text": "This runs the TRILEGAL query for decimal equatorial coordinates.\n\n Parameters\n ----------\n\n ra,decl : float\n These are the center equatorial coordinates in decimal degrees.\n\n filtersystem : str\n This is a key in the TRILEGAL_FILTER_SYSTEMS dict. Use the function\n :py:func:`astrobase.services.trilegal.list_trilegal_filtersystems` to\n see a nicely formatted table with the key and description for each of\n these.\n\n field_deg2 : float\n The area of the simulated field in square degrees. This is in the\n Galactic coordinate system.\n\n usebinaries : bool\n If this is True, binaries will be present in the model results.\n\n extinction_sigma : float\n This is the applied std dev around the `Av_extinction` value for the\n galactic coordinates requested.\n\n magnitude_limit : float\n This is the limiting magnitude of the simulation in the\n `maglim_filtercol` band index of the filter system chosen.\n\n maglim_filtercol : int\n The index in the filter system list of the magnitude limiting band.\n\n trilegal_version : float\n This is the version of the TRILEGAL form to use. This can usually be\n left as-is.\n\n extraparams : dict or None\n This is a dict that can be used to override parameters of the model\n other than the basic ones used for input to this function. All\n parameters are listed in `TRILEGAL_DEFAULT_PARAMS` above. See:\n\n http://stev.oapd.inaf.it/cgi-bin/trilegal\n\n for explanations of these parameters.\n\n forcefetch : bool\n If this is True, the query will be retried even if cached results for\n it exist.\n\n cachedir : str\n This points to the directory where results will be downloaded.\n\n verbose : bool\n If True, will indicate progress and warn of any issues.\n\n timeout : float\n This sets the amount of time in seconds to wait for the service to\n respond to our initial request.\n\n refresh : float\n This sets the amount of time in seconds to wait before checking if the\n result file is available. If the results file isn't available after\n `refresh` seconds have elapsed, the function will wait for `refresh`\n seconds continuously, until `maxtimeout` is reached or the results file\n becomes available.\n\n maxtimeout : float\n The maximum amount of time in seconds to wait for a result to become\n available after submitting our query request.\n\n Returns\n -------\n\n dict\n This returns a dict of the form::\n\n {'params':the input param dict used,\n 'extraparams':any extra params used,\n 'provenance':'cached' or 'new download',\n 'tablefile':the path on disk to the downloaded model text file}"}
{"_id": "q_9228", "text": "This reads a downloaded TRILEGAL model file.\n\n Parameters\n ----------\n\n modelfile : str\n Path to the downloaded model file to read.\n\n Returns\n -------\n\n np.recarray\n Returns the model table as a Numpy record array."}
{"_id": "q_9229", "text": "This compares two values in constant time.\n\n Taken from tornado:\n\n https://github.com/tornadoweb/tornado/blob/\n d4eb8eb4eb5cc9a6677e9116ef84ded8efba8859/tornado/web.py#L3060"}
{"_id": "q_9230", "text": "Overrides the default serializer for `JSONEncoder`.\n\n This can serialize the following objects in addition to what\n `JSONEncoder` can already do.\n\n - `np.array`\n - `bytes`\n - `complex`\n - `np.float64` and other `np.dtype` objects\n\n Parameters\n ----------\n\n obj : object\n A Python object to serialize to JSON.\n\n Returns\n -------\n\n str\n A JSON encoded representation of the input object."}
{"_id": "q_9231", "text": "This handles GET requests to the index page.\n\n TODO: provide the correct baseurl from the checkplotserver options dict,\n so the frontend JS can just read that off immediately."}
{"_id": "q_9232", "text": "This handles GET requests for the current checkplot-list.json file.\n\n Used with AJAX from frontend."}
{"_id": "q_9233", "text": "This smooths the magseries with a Savitzky-Golay filter.\n\n Parameters\n ----------\n\n mags : np.array\n The input mags/flux time-series to smooth.\n\n windowsize : int\n This is an odd integer containing the smoothing window size.\n\n polyorder : int\n This is an integer containing the polynomial degree order to use when\n generating the Savitzky-Golay filter.\n\n Returns\n -------\n\n np.array\n The smoothed mag/flux time-series array."}
{"_id": "q_9234", "text": "Detrends a magnitude series given in mag using accompanying values of S in\n fsv, D in fdv, K in fkv, x coords in xcc, y coords in ycc, background in\n bgv, and background error in bge. smooth is used to set a smoothing\n parameter for the fit function. Does EPD voodoo."}
{"_id": "q_9235", "text": "This is the EPD function to fit using a smoothed mag-series."}
{"_id": "q_9236", "text": "Detrends a magnitude series using External Parameter Decorrelation.\n\n Requires a set of external parameters similar to those present in HAT light\n curves. At the moment, the HAT light-curve-specific external parameters are:\n\n - S: the 'fsv' column in light curves,\n - D: the 'fdv' column in light curves,\n - K: the 'fkv' column in light curves,\n - x coords: the 'xcc' column in light curves,\n - y coords: the 'ycc' column in light curves,\n - background value: the 'bgv' column in light curves,\n - background error: the 'bge' column in light curves,\n - hour angle: the 'iha' column in light curves,\n - zenith distance: the 'izd' column in light curves\n\n S, D, and K are defined as follows:\n\n - S -> measure of PSF sharpness (~1/sigma^2 so smaller S = wider PSF)\n - D -> measure of PSF ellipticity in xy direction\n - K -> measure of PSF ellipticity in cross direction\n\n S, D, K are related to the PSF's variance and covariance, see eqn 30-33 in\n A. Pal's thesis: https://arxiv.org/abs/0906.3486\n\n NOTE: The errs are completely ignored and returned unchanged (except for\n sigclip and finite filtering).\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input mag/flux time-series to detrend.\n\n fsv : np.array\n Array containing the external parameter `S` of the same length as times.\n\n fdv : np.array\n Array containing the external parameter `D` of the same length as times.\n\n fkv : np.array\n Array containing the external parameter `K` of the same length as times.\n\n xcc : np.array\n Array containing the external parameter `x-coords` of the same length as\n times.\n\n ycc : np.array\n Array containing the external parameter `y-coords` of the same length as\n times.\n\n bgv : np.array\n Array containing the external parameter `background value` of the same\n length as times.\n\n bge : np.array\n Array containing the external parameter `background error` of the same\n length as times.\n\n iha : np.array\n Array 
containing the external parameter `hour angle` of the same length\n as times.\n\n izd : np.array\n Array containing the external parameter `zenith distance` of the same\n length as times.\n\n magsarefluxes : bool\n Set this to True if `mags` actually contains fluxes.\n\n epdsmooth_sigclip : float or int or sequence of two floats/ints or None\n This specifies how to sigma-clip the input LC before fitting the EPD\n function to it.\n\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n epdsmooth_windowsize : int\n This is the number of LC points to smooth over to generate a smoothed\n light curve that will be used to fit the EPD function.\n\n epdsmooth_func : Python function\n This sets the smoothing filter function to use. A Savitzky-Golay filter\n is used to smooth the light curve by default. The functions that can be\n used with this kwarg are listed in `varbase.trends`. If you want to use\n your own function, it MUST have the following signature::\n\n def smoothfunc(mags_array, window_size, **extraparams)\n\n and return a numpy array of the same size as `mags_array` with the\n smoothed time-series. 
Any extra params can be provided using the\n `extraparams` dict.\n\n epdsmooth_extraparams : dict\n This is a dict of any extra filter params to supply to the smoothing\n function.\n\n Returns\n -------\n\n dict\n Returns a dict of the following form::\n\n {'times':the input times after non-finite elems removed,\n 'mags':the EPD detrended mag values (the EPD mags),\n 'errs':the errs after non-finite elems removed,\n 'fitcoeffs':EPD fit coefficient values,\n 'fitinfo':the full tuple returned by scipy.leastsq,\n 'fitmags':the EPD fit function evaluated at times,\n 'mags_median': this is median of the EPD mags,\n 'mags_mad': this is the MAD of EPD mags}"}
{"_id": "q_9237", "text": "This uses a `RandomForestRegressor` to de-correlate the given magseries.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input mag/flux time-series to run EPD on.\n\n externalparam_arrs : list of np.arrays\n This is a list of ndarrays of external parameters to decorrelate\n against. These should all be the same size as `times`, `mags`, `errs`.\n\n epdsmooth : bool\n If True, sets the training LC for the RandomForestRegress to be a\n smoothed version of the sigma-clipped light curve provided in `times`,\n `mags`, `errs`.\n\n epdsmooth_sigclip : float or int or sequence of two floats/ints or None\n This specifies how to sigma-clip the input LC before smoothing it and\n fitting the EPD function to it. The actual LC will not be sigma-clipped.\n\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n epdsmooth_windowsize : int\n This is the number of LC points to smooth over to generate a smoothed\n light curve that will be used to fit the EPD function.\n\n epdsmooth_func : Python function\n This sets the smoothing filter function to use. 
A Savitzky-Golay filter\n is used to smooth the light curve by default. The functions that can be\n used with this kwarg are listed in `varbase.trends`. If you want to use\n your own function, it MUST have the following signature::\n\n def smoothfunc(mags_array, window_size, **extraparams)\n\n and return a numpy array of the same size as `mags_array` with the\n smoothed time-series. Any extra params can be provided using the\n `extraparams` dict.\n\n epdsmooth_extraparams : dict\n This is a dict of any extra filter params to supply to the smoothing\n function.\n\n rf_subsample : float\n Defines the fraction of the size of the `mags` array to use for\n training the random forest regressor.\n\n rf_ntrees : int\n This is the number of trees to use for the `RandomForestRegressor`.\n\n rf_extraprams : dict\n This is a dict of any extra kwargs to provide to the\n `RandomForestRegressor` instance used.\n\n Returns\n -------\n\n dict\n Returns a dict with decorrelated mags and the usual info from the\n `RandomForestRegressor`: variable importances, etc."}
{"_id": "q_9238", "text": "This calculates the Stellingwerf PDM theta value at a test frequency.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input time-series and associated errors.\n\n frequency : float\n The test frequency to calculate the theta statistic at.\n\n binsize : float\n The phase bin size to use.\n\n minbin : int\n The minimum number of items in a phase bin to consider in the\n calculation of the statistic.\n\n Returns\n -------\n\n theta_pdm : float\n The value of the theta statistic at the specified `frequency`."}
{"_id": "q_9239", "text": "Converts magnitude measurements in Kepler band to SDSS r band.\n\n Parameters\n ----------\n\n keplermag : float or array-like\n The Kepler magnitude value(s) to convert to SDSS r band magnitude(s).\n\n kic_sdssg,kic_sdssr : float or array-like\n The SDSS g and r magnitudes of the object(s) from the Kepler Input\n Catalog. The .llc.fits MAST light curve file for a Kepler object\n contains these values in the FITS extension 0 header.\n\n Returns\n -------\n\n float or array-like\n SDSS r band magnitude(s) converted from the Kepler band magnitude."}
{"_id": "q_9240", "text": "This filters the Kepler `lcdict`, removing nans and bad\n observations.\n\n By default, this function removes points in the Kepler LC that have ANY\n quality flags set.\n\n Parameters\n ----------\n\n lcdict : lcdict\n An `lcdict` produced by `consolidate_kepler_fitslc` or\n `read_kepler_fitslc`.\n\n filterflags : bool\n If True, will remove any measurements that have non-zero quality flags\n present. This usually indicates an issue with the instrument or\n spacecraft.\n\n nanfilter : {'sap','pdc','sap,pdc'}\n Indicates the flux measurement type(s) to apply the filtering to.\n\n timestoignore : list of tuples or None\n This is of the form::\n\n [(time1_start, time1_end), (time2_start, time2_end), ...]\n\n and indicates the start and end times to mask out of the final\n lcdict. Use this to remove anything that wasn't caught by the quality\n flags.\n\n Returns\n -------\n\n lcdict\n Returns an `lcdict` (this is useable by most astrobase functions for LC\n processing). The `lcdict` is filtered IN PLACE!"}
{"_id": "q_9241", "text": "After running `detrend_centroid`, this gets positions of centroids during\n transits, and outside of transits.\n\n These positions can then be used in a false positive analysis.\n\n This routine requires knowing the ingress and egress times for every\n transit of interest within the quarter this routine is being called for.\n There is currently no astrobase routine that automates this for periodic\n transits (it must be done in a calling routine).\n\n To get out of transit centroids, this routine takes points outside of the\n \"buffer\" set by `oot_buffer_time`, sampling 3x as many points on either\n side of the transit as are in the transit (or however many are specified by\n `sample_factor`).\n\n Parameters\n ----------\n\n lcd : lcdict\n An `lcdict` generated by the `read_kepler_fitslc` function. We assume\n that the `detrend_centroid` function has been run on this `lcdict`.\n\n t_ing_egr : list of tuples\n This is of the form::\n\n [(ingress time of i^th transit, egress time of i^th transit)]\n\n for i the transit number index in this quarter (starts at zero at the\n beginning of every quarter). Assumes units of BJD.\n\n oot_buffer_time : float\n Number of days away from ingress and egress times to begin sampling \"out\n of transit\" centroid points. The number of out of transit points to take\n per transit is 3x the number of points in transit.\n\n sample_factor : float\n The size of out of transit window from which to sample.\n\n Returns\n -------\n\n dict\n This is a dictionary keyed by transit number (i.e., the same index as\n `t_ing_egr`), where each key contains the following value::\n\n {'ctd_x_in_tra':ctd_x_in_tra,\n 'ctd_y_in_tra':ctd_y_in_tra,\n 'ctd_x_oot':ctd_x_oot,\n 'ctd_y_oot':ctd_y_oot,\n 'npts_in_tra':len(ctd_x_in_tra),\n 'npts_oot':len(ctd_x_oot),\n 'in_tra_times':in_tra_times,\n 'oot_times':oot_times}"}
{"_id": "q_9242", "text": "This is a helper function for centroid detrending."}
{"_id": "q_9243", "text": "This bins the given light curve file in time using the specified bin size.\n\n Parameters\n ----------\n\n lcfile : str\n The file name to process.\n\n binsizesec : float\n The time bin-size in seconds.\n\n outdir : str or None\n If this is a str, the output LC will be written to `outdir`. If this is\n None, the output LC will be written to the same directory as `lcfile`.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curve file.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n timecols,magcols,errcols : lists of str\n The keys in the lcdict produced by your light curve reader function that\n correspond to the times, mags/fluxes, and associated measurement errors\n that will be used as inputs to the binning process. If these are None,\n the default values for `timecols`, `magcols`, and `errcols` for your\n light curve format will be used here.\n\n minbinelems : int\n The minimum number of time-bin elements required to accept a time-bin as\n valid for the output binned light curve.\n\n Returns\n -------\n\n str\n The name of the output pickle file with the binned LC.\n\n Writes the output binned light curve to a pickle that contains the\n lcdict with an added `lcdict['binned'][magcol]` key, which contains the\n binned times, mags/fluxes, and errs as\n `lcdict['binned'][magcol]['times']`, `lcdict['binned'][magcol]['mags']`,\n and `lcdict['binned'][magcol]['errs']` for each `magcol` provided in the\n input or default `magcols` value for this light curve format."}
{"_id": "q_9244", "text": "This time bins all the light curves in the specified directory.\n\n Parameters\n ----------\n\n lcdir : str\n Directory containing the input LCs to process.\n\n binsizesec : float\n The time bin size to use in seconds.\n\n maxobjects : int or None\n If provided, LC processing will stop at `lclist[maxobjects]`.\n\n outdir : str or None\n The directory where output LCs will be written. If None, will write to\n the same directory as the input LCs.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curve file.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n timecols,magcols,errcols : lists of str\n The keys in the lcdict produced by your light curve reader function that\n correspond to the times, mags/fluxes, and associated measurement errors\n that will be used as inputs to the binning process. If these are None,\n the default values for `timecols`, `magcols`, and `errcols` for your\n light curve format will be used here.\n\n minbinelems : int\n The minimum number of time-bin elements required to accept a time-bin as\n valid for the output binned light curve.\n\n nworkers : int\n Number of parallel workers to launch.\n\n maxworkertasks : int\n The maximum number of tasks a parallel worker will complete before being\n replaced to guard against memory leaks.\n\n Returns\n -------\n\n dict\n The returned dict contains keys = input LCs, vals = output LCs."}
{"_id": "q_9245", "text": "This runs variable feature extraction in parallel for all LCs in `lclist`.\n\n Parameters\n ----------\n\n lclist : list of str\n The list of light curve file names to process.\n\n outdir : str\n The directory where the output varfeatures pickle files will be written.\n\n maxobjects : int\n The number of LCs to process from `lclist`.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n mindet : int\n The minimum number of LC points required to generate variability\n features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n nworkers : int\n The number of parallel workers to launch.\n\n Returns\n -------\n\n dict\n A dict with key:val pairs of input LC file name : the generated\n variability features pickles for each of the input LCs, with results for\n each magcol in the input `magcol` or light curve format's default\n `magcol` list."}
{"_id": "q_9246", "text": "This runs parallel variable feature extraction for a directory of LCs.\n\n Parameters\n ----------\n\n lcdir : str\n The directory of light curve files to process.\n\n outdir : str\n The directory where the output varfeatures pickle files will be written.\n\n fileglob : str or None\n The file glob to use when looking for light curve files in `lcdir`. If\n None, the default file glob associated for this LC format will be used.\n\n maxobjects : int\n The number of LCs to process from `lclist`.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n mindet : int\n The minimum number of LC points required to generate variability\n features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n nworkers : int\n The number of parallel workers to launch.\n\n Returns\n -------\n\n dict\n A dict with key:val pairs of input LC file name : the generated\n variability features pickles for each of the input LCs, with results for\n each magcol in the input `magcol` or light curve format's default\n `magcol` list."}
{"_id": "q_9247", "text": "This is just a shortened form of the function above for convenience.\n\n This only handles pickle files as input.\n\n Parameters\n ----------\n\n checkplotin : str\n File name of a checkplot pickle file to convert to a PNG.\n\n extrarows : list of tuples\n This is a list of 4-element tuples containing paths to PNG files that\n will be added to the end of the rows generated from the checkplotin\n pickle/dict. Each tuple represents a row in the final output PNG\n file. If there are fewer than 4 elements per tuple, the missing elements\n will be filled in with white-space. If there are more than 4 elements\n per tuple, only the first four will be used.\n\n The purpose of this kwarg is to incorporate periodograms and phased LC\n plots (in the form of PNGs) generated from an external period-finding\n function or program (like VARTOOLS) to allow for comparison with\n astrobase results.\n\n NOTE: the PNG files specified in `extrarows` here will be added to those\n already present in the input `checkplotdict['externalplots']` if that is\n not None because you passed in a similar list of external plots to the\n :py:func:`astrobase.checkplot.pkl.checkplot_pickle` function earlier. 
In\n this case, `extrarows` can be used to add even more external plots if\n desired.\n\n Each external plot PNG will be resized to 750 x 480 pixels to fit into\n an output image cell.\n\n By convention, each 4-element tuple should contain:\n\n - a periodogram PNG\n - phased LC PNG with 1st best peak period from periodogram\n - phased LC PNG with 2nd best peak period from periodogram\n - phased LC PNG with 3rd best peak period from periodogram\n\n Example of extrarows::\n\n [('/path/to/external/bls-periodogram.png',\n '/path/to/external/bls-phasedlc-plot-bestpeak.png',\n '/path/to/external/bls-phasedlc-plot-peak2.png',\n '/path/to/external/bls-phasedlc-plot-peak3.png'),\n ('/path/to/external/pdm-periodogram.png',\n '/path/to/external/pdm-phasedlc-plot-bestpeak.png',\n '/path/to/external/pdm-phasedlc-plot-peak2.png',\n '/path/to/external/pdm-phasedlc-plot-peak3.png'),\n ...]\n\n Returns\n -------\n\n str\n The absolute path to the generated checkplot PNG."}
{"_id": "q_9248", "text": "This is a flare model function, similar to Kowalski+ 2011.\n\n From the paper by Pitkin+ 2014:\n http://adsabs.harvard.edu/abs/2014MNRAS.445.2268P\n\n Parameters\n ----------\n\n flareparams : list of float\n This defines the flare model::\n\n [amplitude,\n flare_peak_time,\n rise_gaussian_stdev,\n decay_time_constant]\n\n where:\n\n `amplitude`: the maximum flare amplitude in mags or flux. If flux, then\n amplitude should be positive. If mags, amplitude should be negative.\n\n `flare_peak_time`: time at which the flare maximum happens.\n\n `rise_gaussian_stdev`: the stdev of the gaussian describing the rise of\n the flare.\n\n `decay_time_constant`: the time constant of the exponential fall of the\n flare.\n\n times,mags,errs : np.array\n The input time-series of measurements and associated errors for which\n the model will be generated. The times will be used to generate\n model mags.\n\n Returns\n -------\n\n (modelmags, times, mags, errs) : tuple\n Returns the model mags evaluated at the input time values. Also returns\n the input `times`, `mags`, and `errs`."}
{"_id": "q_9249", "text": "This checks the AWS instance data URL to see if there's a pending\n shutdown for the instance.\n\n This is useful for AWS spot instances. If there is a pending shutdown posted\n to the instance data URL, we'll use the result of this function to break out of\n the processing loop and shut everything down ASAP before the instance dies.\n\n Returns\n -------\n\n bool\n - True if the instance is going to die soon.\n - False if the instance is still safe."}
{"_id": "q_9250", "text": "This wraps the function above to allow for loading previous state from a\n file.\n\n Parameters\n ----------\n\n use_saved_state : str or None\n This is the path to the saved state pickle file produced by a previous\n run of `runcp_producer_loop`. Will get all of the arguments to run\n another instance of the loop from that pickle file. If this is None, you\n MUST provide all of the appropriate arguments to that function.\n\n lightcurve_list : str or list of str or None\n This is either a string pointing to a file containing a list of light\n curves filenames to process or the list itself. The names must\n correspond to the full filenames of files stored on S3, including all\n prefixes, but not include the 's3://<bucket name>/' bit (these will be\n added automatically).\n\n input_queue : str or None\n This is the name of the SQS queue which will receive processing tasks\n generated by this function. The queue URL will automatically be obtained\n from AWS.\n\n input_bucket : str or None\n The name of the S3 bucket containing the light curve files to process.\n\n result_queue : str or None\n This is the name of the SQS queue that this function will listen to for\n messages from the workers as they complete processing on their input\n elements. This function will attempt to match input sent to the\n `input_queue` with results coming into the `result_queue` so it knows\n how many objects have been successfully processed. If this function\n receives task results that aren't in its own input queue, it will\n acknowledge them so they complete successfully, but not download them\n automatically. This handles leftover tasks completing from a previous\n run of this function.\n\n result_bucket : str or None\n The name of the S3 bucket which will receive the results from the\n workers.\n\n pfresult_list : list of str or None\n This is a list of periodfinder result pickle S3 URLs associated with\n each light curve. 
If provided, this will be used to add in phased light\n curve plots to each checkplot pickle. If this is None, the worker loop\n will produce checkplot pickles that only contain object information,\n neighbor information, and unphased light curves.\n\n runcp_kwargs : dict or None\n This is a dict used to pass any extra keyword arguments to the\n `lcproc.checkplotgen.runcp` function that will be run by the worker\n loop.\n\n process_list_slice : list or None\n This is used to index into the input light curve list so a subset of the\n full list can be processed in this specific run of this function.\n\n Use None for a slice index elem to emulate single slice spec behavior:\n\n process_list_slice = [10, None] -> lightcurve_list[10:]\n process_list_slice = [None, 500] -> lightcurve_list[:500]\n\n purge_queues_when_done : bool or None\n If this is True, and this function exits (either when all done, or when\n it is interrupted with a Ctrl+C), all outstanding elements in the\n input/output queues that have not yet been acknowledged by workers or by\n this function will be purged. This effectively cancels all outstanding\n work.\n\n delete_queues_when_done : bool or None\n If this is True, and this function exits (either when all done, or when\n it is interrupted with a Ctrl+C), all outstanding work items will be\n purged from the input/output queues and the queues themselves will be\n deleted.\n\n download_when_done : bool or None\n If this is True, the generated checkplot pickle for each input work item\n will be downloaded immediately to the current working directory when the\n worker functions report they're done with it.\n\n save_state_when_done : bool or None\n If this is True, will save the current state of the work item queue and\n the work items acknowledged as completed to a pickle in the current\n working directory. 
Call the `runcp_producer_loop_savedstate` function\n below to resume processing from this saved state later.\n\n s3_client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its S3 download operations. Alternatively, pass in an existing\n `boto3.Client` instance to re-use it here.\n\n sqs_client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its SQS operations. Alternatively, pass in an existing\n `boto3.Client` instance to re-use it here.\n\n Returns\n -------\n\n dict or str\n Returns the current work state as a dict or str path to the generated\n work state pickle depending on if `save_state_when_done` is True."}
{"_id": "q_9251", "text": "This is the worker for running checkplots.\n\n Parameters\n ----------\n\n task : tuple\n This is of the form: (pfpickle, outdir, lcbasedir, kwargs).\n\n Returns\n -------\n\n list of str\n The list of checkplot pickles returned by the `runcp` function."}
{"_id": "q_9252", "text": "This drives the parallel execution of `runcp` for a list of periodfinding\n result pickles.\n\n Parameters\n ----------\n\n pfpicklelist : list of str or list of Nones\n This is the list of the filenames of the period-finding result pickles\n to process. To make checkplots using the light curves directly, set this\n to a list of Nones with the same length as the list of light curve files\n that you provide in `lcfnamelist`.\n\n outdir : str\n The directory the checkplot pickles will be written to.\n\n lcbasedir : str\n The base directory that this function will look in to find the light\n curves pointed to by the period-finding result files. If you're using\n `lcfnamelist` to provide a list of light curve filenames directly, this\n arg is ignored.\n\n lcfnamelist : list of str or None\n If this is provided, it must be a list of the input light curve\n filenames to process. These can either be associated with each input\n period-finder result pickle, or can be provided standalone to make\n checkplots without phased LC plots in them. In the second case, you must\n set `pfpicklelist` to a list of Nones that matches the length of\n `lcfnamelist`.\n\n cprenorm : bool\n Set this to True if the light curves should be renormalized by\n `checkplot.checkplot_pickle`. This is set to False by default because we\n do our own normalization in this function using the light curve's\n registered normalization function and pass the normalized times, mags,\n errs to the `checkplot.checkplot_pickle` function.\n\n lclistpkl : str or dict\n This is either the filename of a pickle or the actual dict produced by\n lcproc.make_lclist. This is used to gather neighbor information.\n\n nbrradiusarcsec : float\n The radius in arcseconds to use for a search conducted around the\n coordinates of this object to look for any potential confusion and\n blending of variability amplitude caused by their proximity.\n\n maxnumneighbors : int\n The maximum number of neighbors that will have their light curves and\n magnitudes noted in this checkplot as potential blends with the target\n object.\n\n makeneighborlcs : bool\n If True, will make light curve and phased light curve plots for all\n neighbors found in the object collection for each input object.\n\n fast_mode : bool or float\n This runs the external catalog operations in a \"fast\" mode, with short\n timeouts and not trying to hit external catalogs that take a long time\n to respond.\n\n If this is set to True, the default settings for the external requests\n will then become::\n\n skyview_lookup = False\n skyview_timeout = 10.0\n skyview_retry_failed = False\n dust_timeout = 10.0\n gaia_submit_timeout = 7.0\n gaia_max_timeout = 10.0\n gaia_submit_tries = 2\n complete_query_later = False\n search_simbad = False\n\n If this is a float, will run in \"fast\" mode with the provided timeout\n value in seconds and the following settings::\n\n skyview_lookup = True\n skyview_timeout = fast_mode\n skyview_retry_failed = False\n dust_timeout = fast_mode\n gaia_submit_timeout = 0.66*fast_mode\n gaia_max_timeout = fast_mode\n gaia_submit_tries = 2\n complete_query_later = False\n search_simbad = False\n\n gaia_max_timeout : float\n Sets the timeout in seconds to use when waiting for the GAIA service to\n respond to our request for the object's information. Note that if\n `fast_mode` is set, this is ignored.\n\n gaia_mirror : str or None\n This sets the GAIA mirror to use. This is a key in the\n `services.gaia.GAIA_URLS` dict which defines the URLs to hit for each\n mirror.\n\n xmatchinfo : str or dict\n This is either the xmatch dict produced by the function\n `load_xmatch_external_catalogs` above, or the path to the xmatch info\n pickle file produced by that function.\n\n xmatchradiusarcsec : float\n This is the cross-matching radius to use in arcseconds.\n\n minobservations : int\n The minimum number of observations the input object's mag/flux\n time-series must have for this function to plot its light curve and\n phased light curve. If the object has fewer than this number, no light\n curves will be plotted, but the checkplotdict will still contain all of\n the other information.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in generating this checkplot.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in generating this checkplot.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in generating this checkplot.\n\n skipdone : bool\n This indicates if this function will skip creating checkplots that\n already exist corresponding to the current `objectid` and `magcol`. If\n `skipdone` is set to True, existing checkplots will be skipped.\n\n done_callback : Python function or None\n This is used to provide a function to execute after the checkplot\n pickles are generated. This is useful if you want to stream the results\n of checkplot making to some other process, e.g. directly running an\n ingestion into an LCC-Server collection. The function will always get\n the list of the generated checkplot pickles as its first arg, and all of\n the kwargs for runcp in the kwargs dict. Additional args and kwargs can\n be provided by giving a list in the `done_callbacks_args` kwarg and a\n dict in the `done_callbacks_kwargs` kwarg.\n\n NOTE: the function you pass in here should be pickleable by normal\n Python if you want to use it with the parallel_cp and parallel_cp_lcdir\n functions below.\n\n done_callback_args : tuple or None\n If not None, contains any args to pass into the `done_callback`\n function.\n\n done_callback_kwargs : dict or None\n If not None, contains any kwargs to pass into the `done_callback`\n function.\n\n liststartindex : int\n The index of the `pfpicklelist` (and `lcfnamelist` if provided) to start\n working at.\n\n maxobjects : int\n The maximum number of objects to process in this run. Use this with\n `liststartindex` to effectively distribute working on a large list of\n input period-finding result pickles (and light curves if `lcfnamelist`\n is also provided) over several sessions or machines.\n\n nworkers : int\n The number of parallel workers that will work on the checkplot\n generation process.\n\n Returns\n -------\n\n dict\n This returns a dict with keys = input period-finding pickles and vals =\n list of the corresponding checkplot pickles produced."}
{"_id": "q_9253", "text": "This drives the parallel execution of `runcp` for a directory of\n periodfinding pickles.\n\n Parameters\n ----------\n\n pfpickledir : str\n This is the directory containing all of the period-finding pickles to\n process.\n\n outdir : str\n The directory the checkplot pickles will be written to.\n\n lcbasedir : str\n The base directory that this function will look in to find the light\n curves pointed to by the period-finding result files. If you're using\n `lcfnamelist` to provide a list of light curve filenames directly, this\n arg is ignored.\n\n pkpickleglob : str\n This is a UNIX file glob to select period-finding result pickles in the\n specified `pfpickledir`.\n\n lclistpkl : str or dict\n This is either the filename of a pickle or the actual dict produced by\n lcproc.make_lclist. This is used to gather neighbor information.\n\n cprenorm : bool\n Set this to True if the light curves should be renormalized by\n `checkplot.checkplot_pickle`. This is set to False by default because we\n do our own normalization in this function using the light curve's\n registered normalization function and pass the normalized times, mags,\n errs to the `checkplot.checkplot_pickle` function.\n\n nbrradiusarcsec : float\n The radius in arcseconds to use for a search conducted around the\n coordinates of this object to look for any potential confusion and\n blending of variability amplitude caused by their proximity.\n\n maxnumneighbors : int\n The maximum number of neighbors that will have their light curves and\n magnitudes noted in this checkplot as potential blends with the target\n object.\n\n makeneighborlcs : bool\n If True, will make light curve and phased light curve plots for all\n neighbors found in the object collection for each input object.\n\n fast_mode : bool or float\n This runs the external catalog operations in a \"fast\" mode, with short\n timeouts and not trying to hit external catalogs that take a long time\n to respond.\n\n If this is set to True, the default settings for the external requests\n will then become::\n\n skyview_lookup = False\n skyview_timeout = 10.0\n skyview_retry_failed = False\n dust_timeout = 10.0\n gaia_submit_timeout = 7.0\n gaia_max_timeout = 10.0\n gaia_submit_tries = 2\n complete_query_later = False\n search_simbad = False\n\n If this is a float, will run in \"fast\" mode with the provided timeout\n value in seconds and the following settings::\n\n skyview_lookup = True\n skyview_timeout = fast_mode\n skyview_retry_failed = False\n dust_timeout = fast_mode\n gaia_submit_timeout = 0.66*fast_mode\n gaia_max_timeout = fast_mode\n gaia_submit_tries = 2\n complete_query_later = False\n search_simbad = False\n\n gaia_max_timeout : float\n Sets the timeout in seconds to use when waiting for the GAIA service to\n respond to our request for the object's information. Note that if\n `fast_mode` is set, this is ignored.\n\n gaia_mirror : str or None\n This sets the GAIA mirror to use. This is a key in the\n `services.gaia.GAIA_URLS` dict which defines the URLs to hit for each\n mirror.\n\n xmatchinfo : str or dict\n This is either the xmatch dict produced by the function\n `load_xmatch_external_catalogs` above, or the path to the xmatch info\n pickle file produced by that function.\n\n xmatchradiusarcsec : float\n This is the cross-matching radius to use in arcseconds.\n\n minobservations : int\n The minimum number of observations the input object's mag/flux\n time-series must have for this function to plot its light curve and\n phased light curve. If the object has fewer than this number, no light\n curves will be plotted, but the checkplotdict will still contain all of\n the other information.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in generating this checkplot.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in generating this checkplot.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in generating this checkplot.\n\n skipdone : bool\n This indicates if this function will skip creating checkplots that\n already exist corresponding to the current `objectid` and `magcol`. If\n `skipdone` is set to True, existing checkplots will be skipped.\n\n done_callback : Python function or None\n This is used to provide a function to execute after the checkplot\n pickles are generated. This is useful if you want to stream the results\n of checkplot making to some other process, e.g. directly running an\n ingestion into an LCC-Server collection. The function will always get\n the list of the generated checkplot pickles as its first arg, and all of\n the kwargs for runcp in the kwargs dict. Additional args and kwargs can\n be provided by giving a list in the `done_callbacks_args` kwarg and a\n dict in the `done_callbacks_kwargs` kwarg.\n\n NOTE: the function you pass in here should be pickleable by normal\n Python if you want to use it with the parallel_cp and parallel_cp_lcdir\n functions below.\n\n done_callback_args : tuple or None\n If not None, contains any args to pass into the `done_callback`\n function.\n\n done_callback_kwargs : dict or None\n If not None, contains any kwargs to pass into the `done_callback`\n function.\n\n maxobjects : int\n The maximum number of objects to process in this run.\n\n nworkers : int\n The number of parallel workers that will work on the checkplot\n generation process.\n\n Returns\n -------\n\n dict\n This returns a dict with keys = input period-finding pickles and vals =\n list of the corresponding checkplot pickles produced."}
{"_id": "q_9254", "text": "This runs the runpf function."}
{"_id": "q_9255", "text": "This collects variability features into arrays for use with the classifier.\n\n Parameters\n ----------\n\n featuresdir : str\n This is the directory where all the varfeatures pickles are. Use\n `pklglob` to specify the glob to search for. The `varfeatures` pickles\n contain objectids, a light curve magcol, and features as dict\n key-vals. The :py:mod:`astrobase.lcproc.lcvfeatures` module can be used\n to produce these.\n\n magcol : str\n This is the key in each varfeatures pickle corresponding to the magcol\n of the light curve the variability features were extracted from.\n\n outfile : str\n This is the filename of the output pickle that will be written\n containing a dict of all the features extracted into np.arrays.\n\n pklglob : str\n This is the UNIX file glob to use to search for varfeatures pickle files\n in `featuresdir`.\n\n featurestouse : list of str\n Each varfeatures pickle can contain any combination of non-periodic,\n stellar, and periodic features; these must have the same names as\n elements in the list of strings provided in `featurestouse`. This tries\n to get all the features listed in NONPERIODIC_FEATURES_TO_COLLECT by\n default. If `featurestouse` is provided as a list, gets only the\n features listed in this kwarg instead.\n\n maxobjects : int or None\n This controls how many pickles from the featuresdir to process. If None,\n will process all varfeatures pickles.\n\n labeldict : dict or None\n If this is provided, it must be a dict with the following key:val list::\n\n '<objectid>':<label value>\n\n for each objectid collected from the varfeatures pickles. This will turn\n the collected information into a training set for classifiers.\n\n Example: to carry out non-periodic variable feature collection of fake\n LCS prepared by :py:mod:`astrobase.fakelcs.generation`, use the value\n of the 'isvariable' dict elem from the `fakelcs-info.pkl` here, like\n so::\n\n labeldict={x:y for x,y in zip(fakelcinfo['objectid'],\n fakelcinfo['isvariable'])}\n\n labeltype : {'binary', 'classes'}\n This is either 'binary' or 'classes' for binary/multi-class\n classification respectively.\n\n Returns\n -------\n\n dict\n This returns a dict with all of the features collected into np.arrays,\n ready to use as input to a scikit-learn classifier."}
{"_id": "q_9256", "text": "This gets the best RF classifier after running cross-validation.\n\n - splits the training set into test/train samples\n - does `KFold` stratified cross-validation using `RandomizedSearchCV`\n - gets the `RandomForestClassifier` with the best performance after CV\n - gets the confusion matrix for the test set\n\n Runs on the output dict from functions that produce dicts similar to that\n produced by `collect_nonperiodic_features` above.\n\n Parameters\n ----------\n\n collected_features : dict or str\n This is either the dict produced by a `collect_*_features` function or\n the pickle produced by the same.\n\n test_fraction : float\n This sets the fraction of the input set that will be used as the\n test set after training.\n\n n_crossval_iterations : int\n This sets the number of iterations to use when running the\n cross-validation.\n\n n_kfolds : int\n This sets the number of K-folds to use on the data when doing a\n test-train split.\n\n crossval_scoring_metric : str\n This is a string that describes how the cross-validation score is\n calculated for each iteration. See the URL below for how to specify this\n parameter:\n\n http://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter\n\n By default, this is tuned for binary classification and uses the F1\n scoring metric. Change the `crossval_scoring_metric` to another metric\n (probably 'accuracy') for multi-class classification, e.g. for periodic\n variable classification.\n\n classifier_to_pickle : str\n If this is a string indicating the name of a pickle file to write, will\n write the trained classifier to the pickle that can be later loaded and\n used to classify data.\n\n nworkers : int\n This is the number of parallel workers to use in the\n RandomForestClassifier. Set to -1 to use all CPUs on your machine.\n\n Returns\n -------\n\n dict\n A dict containing the trained classifier, cross-validation results, the\n input data set, and all input kwargs used is returned, along with\n cross-validation score metrics."}
{"_id": "q_9257", "text": "This applies an RF classifier trained using `train_rf_classifier`\n to varfeatures pickles in `varfeaturesdir`.\n\n Parameters\n ----------\n\n classifier : dict or str\n This is the output dict or pickle created by `get_rf_classifier`. This\n will contain a `features_name` key that will be used to collect the same\n features used to train the classifier from the varfeatures pickles in\n varfeaturesdir.\n\n varfeaturesdir : str\n The directory containing the varfeatures pickles for objects that will\n be classified by the trained `classifier`.\n\n outpickle : str\n This is a filename for the pickle that will be written containing the\n result dict from this function.\n\n maxobjects : int\n This sets the number of objects to process in `varfeaturesdir`.\n\n Returns\n -------\n\n dict\n The classification results after running the trained `classifier` are\n returned as a dict. This contains predicted labels and their prediction\n probabilities."}
{"_id": "q_9258", "text": "This returns a summed Fourier cosine series.\n\n Parameters\n ----------\n\n fourierparams : list\n This MUST be a list of the following form like so::\n\n [period,\n epoch,\n [amplitude_1, amplitude_2, amplitude_3, ..., amplitude_X],\n [phase_1, phase_2, phase_3, ..., phase_X]]\n\n where X is the Fourier order.\n\n phase,mags : np.array\n The input phase and magnitude arrays to use as the basis for the cosine\n series. The phases are used directly to generate the values of the\n function, while the mags array is used to generate the zeroth order\n amplitude coefficient.\n\n Returns\n -------\n\n np.array\n The Fourier cosine series function evaluated over `phase`."}
{"_id": "q_9259", "text": "This is the chisq objective function to be minimized by `scipy.minimize`.\n\n The parameters are the same as `_fourier_func` above. `errs` is used to\n calculate the chisq value."}
{"_id": "q_9260", "text": "Makes a plot of periodograms obtained from `periodbase` functions.\n\n This takes the output dict produced by any `astrobase.periodbase`\n period-finder function or a pickle filename containing such a dict and makes\n a periodogram plot.\n\n Parameters\n ----------\n\n lspinfo : dict or str\n If lspinfo is a dict, it must be a dict produced by an\n `astrobase.periodbase` period-finder function or a dict from your own\n period-finder function or routine that is of the form below with at\n least these keys::\n\n {'periods': np.array of all periods searched by the period-finder,\n 'lspvals': np.array of periodogram power value for each period,\n 'bestperiod': a float value that is the period with the highest\n peak in the periodogram, i.e. the most-likely actual\n period,\n 'method': a three-letter code naming the period-finder used; must\n be one of the keys in the `METHODLABELS` dict above,\n 'nbestperiods': a list of the periods corresponding to periodogram\n peaks (`nbestlspvals` below) to annotate on the\n periodogram plot so they can be called out\n visually,\n 'nbestlspvals': a list of the power values associated with\n periodogram peaks to annotate on the periodogram\n plot so they can be called out visually; should be\n the same length as `nbestperiods` above}\n\n If lspinfo is a str, then it must be a path to a pickle file that\n contains a dict of the form described above.\n\n outfile : str or None\n If this is a str, will write the periodogram plot to the file specified\n by this string. If this is None, will write to a file called\n 'lsp-plot.png' in the current working directory.\n\n plotdpi : int\n Sets the resolution in DPI of the output periodogram plot PNG file.\n\n Returns\n -------\n\n str\n Absolute path to the periodogram plot file created."}
{"_id": "q_9261", "text": "This just writes the lcdict to a pickle.\n\n If outfile is None, then will try to get the name from the\n lcdict['objectid'] and write to <objectid>-hptxtlc.pkl. If that fails, will\n write to a file named 'hptxtlc.pkl'."}
{"_id": "q_9262", "text": "This just reads a pickle LC. Returns an lcdict."}
{"_id": "q_9263", "text": "This concatenates a list of light curves.\n\n Does not care about overlaps or duplicates. The light curves must all be\n from the same aperture.\n\n The intended use is to concatenate light curves across CCDs or instrument\n changes for a single object. These can then be normalized later using\n standard astrobase tools to search for variability and/or periodicity.\n\n sortby is a column to sort the final concatenated light curve by in\n ascending order.\n\n If normalize is True, then each light curve's magnitude columns are\n normalized to zero.\n\n The returned lcdict has an extra column: 'lcn' that tracks which measurement\n belongs to which input light curve. This can be used with\n lcdict['concatenated'] which relates input light curve index to input light\n curve filepath. Finally, there is an 'nconcatenated' key in the lcdict that\n contains the total number of concatenated light curves."}
{"_id": "q_9264", "text": "This reads the binned LC and writes it out to a pickle."}
{"_id": "q_9265", "text": "This generates the binnedlc pkls for a directory of such files.\n\n FIXME: finish this"}
{"_id": "q_9266", "text": "Adds catalog info to objectinfo key of all pklcs in lcdir.\n\n If fovcatalog, fovcatalog_columns, fovcatalog_colnames are provided, uses\n them to find all the additional information listed in the fovcatalog_colname\n keys, and writes this info to the objectinfo key of each lcdict. This makes\n it easier for astrobase tools to work on these light curves.\n\n The default setup for fovcatalog is to use a text file generated by the\n HATPI pipeline before auto-calibrating a field. The format is specified as\n above in _columns, _colnames, and _colformats."}
{"_id": "q_9267", "text": "This converts the base64 encoded string to a file.\n\n Parameters\n ----------\n\n b64str : str\n A base64 encoded string that is the output of `base64.b64encode`.\n\n outfpath : str\n The path to where the file will be written. This should include an\n appropriate extension for the file (e.g. a base64 encoded string that\n represents a PNG should have its `outfpath` end in a '.png') so the OS\n can open these files correctly.\n\n writetostrio : bool\n If this is True, will return a StringIO object with the binary stream\n decoded from the base64-encoded input string `b64str`. This can be\n useful to embed these into other files without having to write them to\n disk.\n\n Returns\n -------\n\n str or StringIO object\n If `writetostrio` is False, will return the output file's path as a\n str. If it is True, will return a StringIO object directly. If writing\n the file fails in either case, will return None."}
{"_id": "q_9268", "text": "This reads a checkplot gzipped pickle file back into a dict.\n\n NOTE: the try-except is for Python 2 pickles that have numpy arrays in\n them. Apparently, these aren't compatible with Python 3. See here:\n\n http://stackoverflow.com/q/11305790\n\n The workaround is noted in this answer:\n\n http://stackoverflow.com/a/41366785\n\n Parameters\n ----------\n\n checkplotpickle : str\n The path to a checkplot pickle file. This can be a gzipped file (in\n which case the file extension should end in '.gz')\n\n Returns\n -------\n\n dict\n This returns a checkplotdict."}
{"_id": "q_9269", "text": "This makes a plot of the LC model fit.\n\n Parameters\n ----------\n\n phase,pmags,perrs : np.array\n The actual mag/flux time-series.\n\n fitmags : np.array\n The model fit time-series.\n\n period : float\n The period at which the phased LC was generated.\n\n mintime : float\n The minimum time value.\n\n magseriesepoch : float\n The value of time around which the phased LC was folded.\n\n plotfit : str\n The name of a file to write the plot to.\n\n magsarefluxes : bool\n Set this to True if the values in `pmags` and `fitmags` are actually\n fluxes.\n\n wrap : bool\n If True, will wrap the phased LC around 0.0 to make some phased LCs\n easier to look at.\n\n model_over_lc : bool\n Usually, this function will plot the actual LC over the model LC. Set\n this to True to plot the model over the actual LC; this is most useful\n when you have a very dense light curve and want to be able to see how it\n follows the model.\n\n Returns\n -------\n\n Nothing."}
{"_id": "q_9270", "text": "This queries the GAIA TAP service for a list of objects in an equatorial\n coordinate box.\n\n Parameters\n ----------\n\n radeclbox : sequence of four floats\n This defines the box to search in::\n\n [ra_min, ra_max, decl_min, decl_max]\n\n gaia_mirror : {'gaia','heidelberg','vizier'} or None\n This is the key used to select a GAIA catalog mirror from the\n `GAIA_URLS` dict above. If set, the specified mirror will be used. If\n None, a random mirror chosen from that dict will be used.\n\n columns : sequence of str\n This indicates which columns from the GAIA table to request for the\n objects found within the search radius.\n\n extra_filter : str or None\n If this is provided, must be a valid ADQL filter string that is used to\n further filter the cone-search results.\n\n returnformat : {'csv','votable','json'}\n The returned file format to request from the GAIA catalog service.\n\n forcefetch : bool\n If this is True, the query will be retried even if cached results for\n it exist.\n\n cachedir : str\n This points to the directory where results will be downloaded.\n\n verbose : bool\n If True, will indicate progress and warn of any issues.\n\n timeout : float\n This sets the amount of time in seconds to wait for the service to\n respond to our initial request.\n\n refresh : float\n This sets the amount of time in seconds to wait before checking if the\n result file is available. If the results file isn't available after\n `refresh` seconds have elapsed, the function will wait for `refresh`\n seconds continuously, until `maxtimeout` is reached or the results file\n becomes available.\n\n maxtimeout : float\n The maximum amount of time in seconds to wait for a result to become\n available after submitting our query request.\n\n maxtries : int\n The maximum number of tries (across all mirrors tried) to make to either\n submit the request or download the results, before giving up.\n\n completequerylater : bool\n If set to True, a submitted query that does not return a result before\n `maxtimeout` has passed will be cancelled but its input request\n parameters and the result URL provided by the service will be saved. If\n this function is then called later with these same input request\n parameters, it will check if the query finally finished and a result is\n available. If so, will download the results instead of submitting a new\n query. If it's not done yet, will start waiting for results again. To\n force launch a new query with the same request parameters, set the\n `forcefetch` kwarg to True.\n\n Returns\n -------\n\n dict\n This returns a dict of the following form::\n\n {'params':dict of the input params used for the query,\n 'provenance':'cache' or 'new download',\n 'result':path to the file on disk with the downloaded data table}"}
{"_id": "q_9271", "text": "This queries the GAIA TAP service for a single GAIA source ID.\n\n Parameters\n ----------\n\n gaiaid : str\n The source ID of the object whose info will be collected.\n\n gaia_mirror : {'gaia','heidelberg','vizier'} or None\n This is the key used to select a GAIA catalog mirror from the\n `GAIA_URLS` dict above. If set, the specified mirror will be used. If\n None, a random mirror chosen from that dict will be used.\n\n columns : sequence of str\n This indicates which columns from the GAIA table to request for the\n objects found within the search radius.\n\n returnformat : {'csv','votable','json'}\n The returned file format to request from the GAIA catalog service.\n\n forcefetch : bool\n If this is True, the query will be retried even if cached results for\n it exist.\n\n cachedir : str\n This points to the directory where results will be downloaded.\n\n verbose : bool\n If True, will indicate progress and warn of any issues.\n\n timeout : float\n This sets the amount of time in seconds to wait for the service to\n respond to our initial request.\n\n refresh : float\n This sets the amount of time in seconds to wait before checking if the\n result file is available. If the results file isn't available after\n `refresh` seconds have elapsed, the function will wait for `refresh`\n seconds continuously, until `maxtimeout` is reached or the results file\n becomes available.\n\n maxtimeout : float\n The maximum amount of time in seconds to wait for a result to become\n available after submitting our query request.\n\n maxtries : int\n The maximum number of tries (across all mirrors tried) to make to either\n submit the request or download the results, before giving up.\n\n completequerylater : bool\n If set to True, a submitted query that does not return a result before\n `maxtimeout` has passed will be cancelled but its input request\n parameters and the result URL provided by the service will be saved. If\n this function is then called later with these same input request\n parameters, it will check if the query finally finished and a result is\n available. If so, will download the results instead of submitting a new\n query. If it's not done yet, will start waiting for results again. To\n force launch a new query with the same request parameters, set the\n `forcefetch` kwarg to True.\n\n Returns\n -------\n\n dict\n This returns a dict of the following form::\n\n {'params':dict of the input params used for the query,\n 'provenance':'cache' or 'new download',\n 'result':path to the file on disk with the downloaded data table}"}
{"_id": "q_9272", "text": "This calculates the peak associated with the spectral window function\n for times and at the specified omega.\n\n NOTE: this is classical Lomb-Scargle, not the Generalized\n Lomb-Scargle. `mags` and `errs` are silently ignored since we're calculating\n the periodogram of the observing window function. These are kept to present\n a consistent external API so the `pgen_lsp` function below can call this\n transparently.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The time-series to calculate the periodogram value for.\n\n omega : float\n The frequency to calculate the periodogram value at.\n\n Returns\n -------\n\n periodogramvalue : float\n The normalized periodogram at the specified test frequency `omega`."}
{"_id": "q_9273", "text": "This calculates the spectral window function.\n\n Wraps the `pgen_lsp` function above to use the specific worker for\n calculating the window-function.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The mag/flux time-series with associated measurement errors to run the\n period-finding on.\n\n magsarefluxes : bool\n If the input measurement values in `mags` and `errs` are in fluxes, set\n this to True.\n\n startp,endp : float or None\n The minimum and maximum periods to consider for the transit search.\n\n stepsize : float\n The step-size in frequency to use when constructing a frequency grid for\n the period search.\n\n autofreq : bool\n If this is True, the value of `stepsize` will be ignored and the\n :py:func:`astrobase.periodbase.get_frequency_grid` function will be used\n to generate a frequency grid based on `startp`, and `endp`. If these are\n None as well, `startp` will be set to 0.1 and `endp` will be set to\n `times.max() - times.min()`.\n\n nbestpeaks : int\n The number of 'best' peaks to return from the periodogram results,\n starting from the global maximum of the periodogram peak values.\n\n periodepsilon : float\n The fractional difference between successive values of 'best' periods\n when sorting by periodogram power to consider them as separate periods\n (as opposed to part of the same periodogram peak). This is used to avoid\n broad peaks in the periodogram and make sure the 'best' periods returned\n are all actually independent.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n nworkers : int\n The number of parallel workers to use when calculating the periodogram.\n\n glspfunc : Python function\n The worker function to use to calculate the periodogram. This is used\n to make the `pgen_lsp` function calculate the time-series sampling\n window function instead of the time-series measurements' GLS periodogram\n by passing in `_glsp_worker_specwindow` instead of the default\n `_glsp_worker` function.\n\n verbose : bool\n If this is True, will indicate progress and details about the frequency\n grid used for the period search.\n\n Returns\n -------\n\n dict\n This function returns a dict, referred to as an `lspinfo` dict in other\n astrobase functions that operate on periodogram results. This is a\n standardized format across all astrobase period-finders, and is of the\n form below::\n\n {'bestperiod': the best period value in the periodogram,\n 'bestlspval': the periodogram peak associated with the best period,\n 'nbestpeaks': the input value of nbestpeaks,\n 'nbestlspvals': nbestpeaks-size list of best period peak values,\n 'nbestperiods': nbestpeaks-size list of best periods,\n 'lspvals': the full array of periodogram powers,\n 'periods': the full array of periods considered,\n 'method':'win' -> the name of the period-finder method,\n 'kwargs':{ dict of all of the input kwargs for record-keeping}}"}
{"_id": "q_9274", "text": "This gets a new API key from the specified LCC-Server.\n\n NOTE: this only gets an anonymous API key. To get an API key tied to a user\n account (and associated privilege level), see the `import_apikey` function\n below.\n\n Parameters\n ----------\n\n lcc_server : str\n The base URL of the LCC-Server from where the API key will be fetched.\n\n Returns\n -------\n\n (apikey, expiry) : tuple\n This returns a tuple with the API key and its expiry date."}
{"_id": "q_9275", "text": "This imports an API key from text and writes it to the cache dir.\n\n Use this with the JSON text copied from the API key text box on your\n LCC-Server user home page. The API key will thus be tied to the privileges\n of that user account and can then access objects, datasets, and collections\n marked as private for the user only or shared with that user.\n\n Parameters\n ----------\n\n lcc_server : str\n The base URL of the LCC-Server to get the API key for.\n\n apikey_text_json : str\n The JSON string from the API key text box on the user's LCC-Server home\n page at `lcc_server/users/home`.\n\n Returns\n -------\n\n (apikey, expiry) : tuple\n This returns a tuple with the API key and its expiry date."}
{"_id": "q_9276", "text": "This submits a POST query to an LCC-Server search API endpoint.\n\n Handles streaming of the results, and returns the final JSON stream. Also\n handles results that time out.\n\n Parameters\n ----------\n\n url : str\n The URL of the search API endpoint to hit. This is something like\n `https://data.hatsurveys.org/api/conesearch`\n\n data : dict\n A dict of the search query parameters to pass to the search service.\n\n apikey : str\n The API key to use to access the search service. API keys are required\n for all POST request made to an LCC-Server's API endpoints.\n\n Returns\n -------\n\n (status_flag, data_dict, dataset_id) : tuple\n This returns a tuple containing the status of the request: ('complete',\n 'failed', 'background', etc.), a dict parsed from the JSON result of the\n request, and a dataset ID, which can be used to reconstruct the URL on\n the LCC-Server where the results can be browsed."}
{"_id": "q_9277", "text": "This runs a cross-match search query.\n\n Parameters\n ----------\n\n lcc_server : str\n This is the base URL of the LCC-Server to talk to. (e.g. for HAT, use:\n https://data.hatsurveys.org)\n\n file_to_upload : str\n This is the path to a text file containing objectid, RA, declination\n rows for the objects to cross-match against the LCC-Server\n collections. This should follow the format of the following example::\n\n # example object and coordinate list\n # objectid ra dec\n aaa 289.99698 44.99839\n bbb 293.358 -23.206\n ccc 294.197 +23.181\n ddd 19 25 27.9129 +42 47 03.693\n eee 19:25:27 -42:47:03.21\n # .\n # .\n # .\n # etc. lines starting with '#' will be ignored\n # (max 5000 objects)\n\n xmatch_dist_arcsec : float\n This is the maximum distance in arcseconds to consider when\n cross-matching objects in the uploaded file to the LCC-Server's\n collections. The maximum allowed distance is 30 arcseconds. Multiple\n matches to an uploaded object are possible and will be returned in order\n of increasing distance grouped by input `objectid`.\n\n result_visibility : {'private', 'unlisted', 'public'}\n This sets the visibility of the dataset produced from the search\n result::\n\n 'private' -> the dataset and its products are not visible or\n accessible by any user other than the one that\n created the dataset.\n\n 'unlisted' -> the dataset and its products are not visible in the\n list of public datasets, but can be accessed if the\n dataset URL is known\n\n 'public' -> the dataset and its products are visible in the list\n of public datasets and can be accessed by anyone.\n\n email_when_done : bool\n If True, the LCC-Server will email you when the search is complete. This\n will also set `download_data` to False. Using this requires an\n LCC-Server account and an API key tied to that account.\n\n collections : list of str or None\n This is a list of LC collections to search in. If this is None, all\n collections will be searched.\n\n columns : list of str or None\n This is a list of columns to return in the results. Matching objects'\n object IDs, RAs, DECs, and links to light curve files will always be\n returned so there is no need to specify these columns. If None, only\n these columns will be returned: 'objectid', 'ra', 'decl', 'lcfname'\n\n filters : str or None\n This is an SQL-like string to use to filter on database columns in the\n LCC-Server's collections. To see the columns available for a search,\n visit the Collections tab in the LCC-Server's browser UI. The filter\n operators allowed are::\n\n lt -> less than\n gt -> greater than\n ge -> greater than or equal to\n le -> less than or equal to\n eq -> equal to\n ne -> not equal to\n ct -> contains text\n isnull -> column value is null\n notnull -> column value is not null\n\n You may use the `and` and `or` operators between filter specifications\n to chain them together logically.\n\n Example filter strings::\n\n \"(propermotion gt 200.0) and (sdssr lt 11.0)\"\n \"(dered_jmag_kmag gt 2.0) and (aep_000_stetsonj gt 10.0)\"\n \"(gaia_status ct 'ok') and (propermotion gt 300.0)\"\n \"(simbad_best_objtype ct 'RR') and (dered_sdssu_sdssg lt 0.5)\"\n\n sortspec : tuple of two strs or None\n If not None, this should be a tuple of two items::\n\n ('column to sort by', 'asc|desc')\n\n This sets the column to sort the results by. For cone_search, the\n default column and sort order are 'dist_arcsec' and 'asc', meaning the\n distance from the search center in ascending order.\n\n samplespec : int or None\n If this is an int, will indicate how many rows from the initial search\n result will be uniformly random sampled and returned.\n\n limitspec : int or None\n If this is an int, will indicate how many rows from the initial search\n result to return in total.\n\n `sortspec`, `samplespec`, and `limitspec` are applied in this order:\n\n sample -> sort -> limit\n\n download_data : bool\n This sets if the accompanying data from the search results will be\n downloaded automatically. This includes the data table CSV, the dataset\n pickle file, and a light curve ZIP file. Note that if the search service\n indicates that your query is still in progress, this function will block\n until the light curve ZIP file becomes available. The maximum wait time\n in seconds is set by maxtimeout and the refresh interval is set by\n refresh.\n\n To avoid the wait block, set download_data to False and the function\n will write a pickle file to `~/.astrobase/lccs/query-[setid].pkl`\n containing all the information necessary to retrieve these data files\n later when the query is done. To do so, call the\n `retrieve_dataset_files` with the path to this pickle file (it will be\n returned).\n\n outdir : str or None\n If this is provided, sets the output directory of the downloaded dataset\n files. If None, they will be downloaded to the current directory.\n\n maxtimeout : float\n The maximum time in seconds to wait for the LCC-Server to respond with a\n result before timing out. You can use the `retrieve_dataset_files`\n function to get results later as needed.\n\n refresh : float\n The time to wait in seconds before pinging the LCC-Server to see if a\n search query has completed and dataset result files can be downloaded.\n\n Returns\n -------\n\n tuple\n Returns a tuple with the following elements::\n\n (search result status dict,\n search result CSV file path,\n search result LC ZIP path)"}
{"_id": "q_9278", "text": "This downloads a JSON form of a dataset from the specified lcc_server.\n\n If the dataset contains more than 1000 rows, it will be paginated, so you\n must use the `page` kwarg to get the page you want. The dataset JSON will\n contain the keys 'npages', 'currpage', and 'rows_per_page' to help with\n this. The 'rows' key contains the actual data rows as a list of tuples.\n\n The JSON contains metadata about the query that produced the dataset,\n information about the data table's columns, and links to download the\n dataset's products including the light curve ZIP and the dataset CSV.\n\n Parameters\n ----------\n\n lcc_server : str\n This is the base URL of the LCC-Server to talk to.\n\n dataset_id : str\n This is the unique setid of the dataset you want to get. In the results\n from the `*_search` functions above, this is the value of the\n `infodict['result']['setid']` key in the first item (the infodict) in\n the returned tuple.\n\n strformat : bool\n This sets if you want the returned data rows to be formatted in their\n string representations already. This can be useful if you're piping the\n returned JSON straight into some sort of UI and you don't want to deal\n with formatting floats, etc. To do this manually when strformat is set\n to False, look at the `coldesc` item in the returned dict, which gives\n the Python and Numpy string format specifiers for each column in the\n data table.\n\n page : int\n This sets which page of the dataset should be retrieved.\n\n Returns\n -------\n\n dict\n This returns the dataset JSON loaded into a dict."}
{"_id": "q_9279", "text": "This gets information on a single object from the LCC-Server.\n\n Returns a dict with all of the available information on an object, including\n finding charts, comments, object type and variability tags, and\n period-search results (if available).\n\n If you have an LCC-Server API key present in `~/.astrobase/lccs/` that is\n associated with an LCC-Server user account, objects that are visible to this\n user will be returned, even if they are not visible to the public. Use this\n to look up objects that have been marked as 'private' or 'shared'.\n\n NOTE: you can pass the result dict returned by this function directly into\n the `astrobase.checkplot.checkplot_pickle_to_png` function, e.g.::\n\n astrobase.checkplot.checkplot_pickle_to_png(result_dict,\n 'object-%s-info.png' %\n result_dict['objectid'])\n\n to generate a quick PNG overview of the object information.\n\n Parameters\n ----------\n\n lcc_server : str\n This is the base URL of the LCC-Server to talk to.\n\n objectid : str\n This is the unique database ID of the object to retrieve info for. This\n is always returned as the `db_oid` column in LCC-Server search results.\n\n db_collection_id : str\n This is the collection ID which will be searched for the object. This is\n always returned as the `collection` column in LCC-Server search results.\n\n Returns\n -------\n\n dict\n A dict containing the object info is returned. Some important items in\n the result dict:\n\n - `objectinfo`: all object magnitude, color, GAIA cross-match, and\n object type information available for this object\n\n - `objectcomments`: comments on the object's variability if available\n\n - `varinfo`: variability comments, variability features, type tags,\n period and epoch information if available\n\n - `neighbors`: information on the neighboring objects of this object in\n its parent light curve collection\n\n - `xmatch`: information on any cross-matches to external catalogs\n (e.g. KIC, EPIC, TIC, APOGEE, etc.)\n\n - `finderchart`: a base-64 encoded PNG image of the object's DSS2 RED\n finder chart. To convert this to an actual PNG, try the function:\n `astrobase.checkplot.pkl_io._b64_to_file`.\n\n - `magseries`: a base-64 encoded PNG image of the object's light\n curve. To convert this to an actual PNG, try the function:\n `astrobase.checkplot.pkl_io._b64_to_file`.\n\n - `pfmethods`: a list of period-finding methods applied to the object if\n any. If this list is present, use the keys in it to get to the actual\n period-finding results for each method. These will contain base-64\n encoded PNGs of the periodogram and phased light curves using the best\n three peaks in the periodogram, as well as period and epoch\n information."}
{"_id": "q_9280", "text": "This lists recent publicly visible datasets available on the LCC-Server.\n\n If you have an LCC-Server API key present in `~/.astrobase/lccs/` that is\n associated with an LCC-Server user account, datasets that belong to this\n user will be returned as well, even if they are not visible to the public.\n\n Parameters\n ----------\n\n lcc_server : str\n This is the base URL of the LCC-Server to talk to.\n\n nrecent : int\n This indicates how many recent public datasets you want to list. This is\n always capped at 1000.\n\n Returns\n -------\n\n list of dicts\n Returns a list of dicts, with each dict containing info on each dataset."}
{"_id": "q_9281", "text": "This lists all light curve collections made available on the LCC-Server.\n\n If you have an LCC-Server API key present in `~/.astrobase/lccs/` that is\n associated with an LCC-Server user account, light curve collections visible\n to this user will be returned as well, even if they are not visible to the\n public.\n\n Parameters\n ----------\n\n lcc_server : str\n The base URL of the LCC-Server to talk to.\n\n Returns\n -------\n\n dict\n Returns a dict containing lists of info items per collection. This\n includes collection_ids, lists of columns, lists of indexed columns,\n lists of full-text indexed columns, detailed column descriptions, number\n of objects in each collection, collection sky coverage, etc."}
{"_id": "q_9282", "text": "This calculates the Stetson index for the magseries, based on consecutive\n pairs of observations.\n\n Based on Nicole Loncke's work for her Planets and Life certificate at\n Princeton in 2014.\n\n Parameters\n ----------\n\n ftimes,fmags,ferrs : np.array\n The input mag/flux time-series with all non-finite elements removed.\n\n weightbytimediff : bool\n If this is True, the Stetson index for any pair of mags will be\n reweighted by the difference in times between them using the scheme in\n Fruth+ 2012 and Zhang+ 2003 (as seen in Sokolovsky+ 2017)::\n\n w_i = exp(- (t_i+1 - t_i)/ delta_t )\n\n Returns\n -------\n\n float\n The calculated Stetson J variability index."}
{"_id": "q_9283", "text": "This calculates the weighted mean, stdev, median, MAD, percentiles, skew,\n kurtosis, fraction of LC beyond 1-stdev, and IQR.\n\n Parameters\n ----------\n\n ftimes,fmags,ferrs : np.array\n The input mag/flux time-series with all non-finite elements removed.\n\n Returns\n -------\n\n dict\n A dict with all of the light curve moments calculated."}
{"_id": "q_9284", "text": "This calculates percentiles and percentile ratios of the flux.\n\n Parameters\n ----------\n\n ftimes,fmags,ferrs : np.array\n The input mag/flux time-series with all non-finite elements removed.\n\n magsarefluxes : bool\n If the `fmags` array actually contains fluxes, will not convert `mags`\n to fluxes before calculating the percentiles.\n\n Returns\n -------\n\n dict\n A dict with all of the light curve flux percentiles and percentile\n ratios calculated."}
{"_id": "q_9285", "text": "This rolls up the feature functions above and returns a single dict.\n\n NOTE: this doesn't calculate the CDPP to save time since binning and\n smoothing takes a while for dense light curves.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input mag/flux time-series to calculate the features for.\n\n magsarefluxes : bool\n If True, indicates `mags` is actually an array of flux values.\n\n stetson_weightbytimediff : bool\n If this is True, the Stetson index for any pair of mags will be\n reweighted by the difference in times between them using the scheme in\n Fruth+ 2012 and Zhang+ 2003 (as seen in Sokolovsky+ 2017)::\n\n w_i = exp(- (t_i+1 - t_i)/ delta_t )\n\n Returns\n -------\n\n dict\n Returns a dict with all of the variability features."}
{"_id": "q_9286", "text": "This wraps the BLS function for the parallel driver below.\n\n Parameters\n ----------\n\n tasks : tuple\n This is of the form::\n\n task[0] = times\n task[1] = mags\n task[2] = nfreq\n task[3] = freqmin\n task[4] = stepsize\n task[5] = nbins\n task[6] = minduration\n task[7] = maxduration\n\n Returns\n -------\n\n dict\n Returns a dict of the form::\n\n {\n 'power': the periodogram power array,\n 'bestperiod': the best period found,\n 'bestpower': the highest peak of the periodogram power,\n 'transdepth': transit depth found by eebls.f,\n 'transduration': transit duration found by eebls.f,\n 'transingressbin': transit ingress bin found by eebls.f,\n 'transegressbin': transit egress bin found by eebls.f,\n }"}
{"_id": "q_9287", "text": "This calculates the SNR, depth, duration, a refit period, and time of\n center-transit for a single period.\n\n The equation used for SNR is::\n\n SNR = (transit model depth / RMS of LC with transit model subtracted)\n * sqrt(number of points in transit)\n\n NOTE: you should set the kwargs `sigclip`, `nphasebins`,\n `mintransitduration`, `maxtransitduration` to what you used for an initial\n BLS run to detect transits in the input light curve to match those input\n conditions.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n These contain the magnitude/flux time-series and any associated errors.\n\n period : float\n The period to search around and refit the transits. This will be used to\n calculate the start and end periods of a rerun of BLS to calculate the\n stats.\n\n magsarefluxes : bool\n Set to True if the input measurements in `mags` are actually fluxes and\n not magnitudes.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n perioddeltapercent : float\n The fraction of the period provided to use to search around this\n value. This is a percentage. The period range searched will then be::\n\n [period - (perioddeltapercent/100.0)*period,\n period + (perioddeltapercent/100.0)*period]\n\n nphasebins : int\n The number of phase bins to use in the BLS run.\n\n mintransitduration : float\n The minimum transit duration in phase to consider.\n\n maxtransitduration : float\n The maximum transit duration to consider.\n\n ingressdurationfraction : float\n The fraction of the transit duration to use to generate an initial value\n of the transit ingress duration for the BLS model refit. This will be\n fit by this function.\n\n verbose : bool\n If True, will indicate progress and any problems encountered.\n\n Returns\n -------\n\n dict\n A dict of the following form is returned::\n\n {'period': the refit best period,\n 'epoch': the refit epoch (i.e. mid-transit time),\n 'snr':the SNR of the transit,\n 'transitdepth':the depth of the transit,\n 'transitduration':the duration of the transit,\n 'nphasebins':the input value of nphasebins,\n 'transingressbin':the phase bin containing transit ingress,\n 'transegressbin':the phase bin containing transit egress,\n 'blsmodel':the full BLS model used along with its parameters,\n 'subtractedmags':BLS model - phased light curve,\n 'phasedmags':the phased light curve,\n 'phases': the phase values}"}
{"_id": "q_9288", "text": "This is a parallel worker that reforms light curves for TFA.\n\n task[0] = lcfile\n task[1] = lcformat\n task[2] = lcformatdir\n task[3] = timecol\n task[4] = magcol\n task[5] = errcol\n task[6] = timebase\n task[7] = interpolate_type\n task[8] = sigclip"}
{"_id": "q_9289", "text": "This applies TFA in parallel to all LCs in the given list of file names.\n\n Parameters\n ----------\n\n lclist : list of str\n This is a list of light curve files to apply TFA correction to.\n\n templateinfo : dict or str\n This is either the dict produced by `tfa_templates_lclist` or the pickle\n produced by the same function.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in applying TFA corrections.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in applying TFA corrections.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in applying TFA corrections.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n interp : str\n This is passed to scipy.interpolate.interp1d as the kind of\n interpolation to use when reforming the light curves to the timebase of\n the TFA templates.\n\n sigclip : float or sequence of two floats or None\n This is the sigma clip to apply to the light curves before running TFA\n on it.\n\n mintemplatedist_arcmin : float\n This sets the minimum distance required from the target object for\n objects in the TFA template ensemble. Objects closer than this distance\n will be removed from the ensemble.\n\n nworkers : int\n The number of parallel workers to launch.\n\n maxworkertasks : int\n The maximum number of tasks per worker allowed before it's replaced by a\n fresh one.\n\n Returns\n -------\n\n dict\n Contains the input file names and output TFA light curve filenames per\n input file organized by each `magcol` in `magcols`."}
{"_id": "q_9290", "text": "This just reads a light curve pickle file.\n\n Parameters\n ----------\n\n lcfile : str\n The file name of the pickle to open.\n\n Returns\n -------\n\n dict\n This returns an lcdict."}
{"_id": "q_9291", "text": "This imports the module specified.\n\n Used to dynamically import Python modules that are needed to support LC\n formats not natively supported by astrobase.\n\n Parameters\n ----------\n\n module : str\n This is either:\n\n - a Python module import path, e.g. 'astrobase.lcproc.catalogs' or\n - a path to a Python file, e.g. '/astrobase/hatsurveys/hatlc.py'\n\n that contains the Python module that contains functions used to open\n (and optionally normalize) a custom LC format that's not natively\n supported by astrobase.\n\n formatkey : str\n A str used as the unique ID of this LC format for all lcproc functions\n and can be used to look it up later and import the correct functions\n needed to support it for lcproc operations. For example, we use\n 'kep-fits' as the specifier for Kepler FITS light curves, which can be\n read by the `astrobase.astrokep.read_kepler_fitslc` function as\n specified by the `<astrobase install path>/data/lcformats/kep-fits.json`\n LC format specification JSON.\n\n Returns\n -------\n\n Python module\n This returns a Python module if it's able to successfully import it."}
{"_id": "q_9292", "text": "This opens an SSH connection to the EC2 instance at `ip_address`.\n\n Parameters\n ----------\n\n ip_address : str\n IP address of the AWS EC2 instance to connect to.\n\n keypem_file : str\n The path to the keypair PEM file generated by AWS to allow SSH\n connections.\n\n username : str\n The username to use to login to the EC2 instance.\n\n raiseonfail : bool\n If True, will re-raise whatever Exception caused the operation to fail\n and break out immediately.\n\n Returns\n -------\n\n paramiko.SSHClient\n This has all the usual `paramiko` functionality:\n\n - Use `SSHClient.exec_command(command, environment=None)` to exec a\n shell command.\n\n - Use `SSHClient.open_sftp()` to get a `SFTPClient` for the server. Then\n call SFTPClient.get() and .put() to copy files from and to the server."}
{"_id": "q_9293", "text": "This gets a file from an S3 bucket.\n\n Parameters\n ----------\n\n bucket : str\n The AWS S3 bucket name.\n\n filename : str\n The full filename of the file to get from the bucket\n\n local_file : str\n Path to where the downloaded file will be stored.\n\n altexts : None or list of str\n If not None, this is a list of alternate extensions to try for the file\n other than the one provided in `filename`. For example, to get anything\n that's an .sqlite where .sqlite.gz is expected, use altexts=[''] to\n strip the .gz.\n\n client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its operations. Alternatively, pass in an existing `boto3.Client`\n instance to re-use it here.\n\n raiseonfail : bool\n If True, will re-raise whatever Exception caused the operation to fail\n and break out immediately.\n\n Returns\n -------\n\n str\n Path to the downloaded filename or None if the download was\n unsuccessful."}
{"_id": "q_9294", "text": "This deletes a file from S3.\n\n Parameters\n ----------\n\n bucket : str\n The AWS S3 bucket to delete the file from.\n\n filename : str\n The full file name of the file to delete, including any prefixes.\n\n client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its operations. Alternatively, pass in an existing `boto3.Client`\n instance to re-use it here.\n\n raiseonfail : bool\n If True, will re-raise whatever Exception caused the operation to fail\n and break out immediately.\n\n Returns\n -------\n\n str or None\n If the file was successfully deleted, will return the delete-marker\n (https://docs.aws.amazon.com/AmazonS3/latest/dev/DeleteMarker.html). If\n it wasn't, returns None."}
{"_id": "q_9295", "text": "This deletes an SQS queue given its URL.\n\n Parameters\n ----------\n\n queue_url : str\n The SQS URL of the queue to delete.\n\n client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its operations. Alternatively, pass in an existing `boto3.Client`\n instance to re-use it here.\n\n Returns\n -------\n\n bool\n True if the queue was deleted successfully. False otherwise."}
{"_id": "q_9296", "text": "This pushes a dict serialized to JSON to the specified SQS queue.\n\n Parameters\n ----------\n\n queue_url : str\n The SQS URL of the queue to push the object to.\n\n item : dict\n The dict passed in here will be serialized to JSON.\n\n delay_seconds : int\n The amount of time in seconds the pushed item will be held before going\n 'live' and being visible to all queue consumers.\n\n client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its operations. Alternatively, pass in an existing `boto3.Client`\n instance to re-use it here.\n\n raiseonfail : bool\n If True, will re-raise whatever Exception caused the operation to fail\n and break out immediately.\n\n Returns\n -------\n\n boto3.Response or None\n If the item was successfully put on the queue, will return the response\n from the service. If it wasn't, will return None."}
{"_id": "q_9297", "text": "This gets a single item from the SQS queue.\n\n The `queue_url` is composed of some internal SQS junk plus a\n `queue_name`. For our purposes (`lcproc_aws.py`), the queue name will be\n something like::\n\n lcproc_queue_<action>\n\n where action is one of::\n\n runcp\n runpf\n\n The item is always a JSON object::\n\n {'target': S3 bucket address of the file to process,\n 'action': the action to perform on the file ('runpf', 'runcp', etc.),\n 'args': the action's args as a tuple (not including filename, which is\n generated randomly as a temporary local file),\n 'kwargs': the action's kwargs as a dict,\n 'outbucket': S3 bucket to write the result to,\n 'outqueue': SQS queue to write the processed item's info to (optional)}\n\n The action MUST match the <action> in the queue name for this item to be\n processed.\n\n Parameters\n ----------\n\n queue_url : str\n The SQS URL of the queue to get messages from.\n\n max_items : int\n The number of items to pull from the queue in this request.\n\n wait_time_seconds : int\n This specifies how long the function should block until a message is\n received on the queue. If the timeout expires, an empty list will be\n returned. If the timeout doesn't expire, the function will return a list\n of items received (up to `max_items`).\n\n client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its operations. Alternatively, pass in an existing `boto3.Client`\n instance to re-use it here.\n\n raiseonfail : bool\n If True, will re-raise whatever Exception caused the operation to fail\n and break out immediately.\n\n Returns\n -------\n\n list of dicts or None\n For each item pulled from the queue in this request (up to `max_items`),\n a dict will be deserialized from the retrieved JSON, containing the\n message items and various metadata. The most important item of the\n metadata is the `receipt_handle`, which can be used to acknowledge\n receipt of all items in this request (see `sqs_delete_item` below).\n\n If the queue pull fails outright, returns None. If no messages are\n available for this queue pull, returns an empty list."}
{"_id": "q_9298", "text": "This deletes EC2 nodes and terminates the instances.\n\n Parameters\n ----------\n\n instance_id_list : list of str\n A list of EC2 instance IDs to terminate.\n\n client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its operations. Alternatively, pass in an existing `boto3.Client`\n instance to re-use it here.\n\n Returns\n -------\n\n Nothing."}
{"_id": "q_9299", "text": "This deletes a spot-fleet cluster.\n\n Parameters\n ----------\n\n spot_fleet_reqid : str\n The fleet request ID returned by `make_spot_fleet_cluster`.\n\n client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its operations. Alternatively, pass in an existing `boto3.Client`\n instance to re-use it here.\n\n Returns\n -------\n\n Nothing."}
{"_id": "q_9300", "text": "This puts a single file into a Google Cloud Storage bucket.\n\n Parameters\n ----------\n\n local_file : str\n Path to the file to upload to GCS.\n\n bucket : str\n The GCS bucket to upload the file to.\n\n service_account_json : str\n Path to a downloaded GCS credentials JSON file.\n\n client : google.cloud.storage.Client instance\n The instance of the Client to use to perform the download operation. If\n this is None, a new Client will be used. If this is None and\n `service_account_json` points to a downloaded JSON file with GCS\n credentials, a new Client with the provided credentials will be used. If\n this is not None, the existing Client instance will be used.\n\n raiseonfail : bool\n If True, will re-raise whatever Exception caused the operation to fail\n and break out immediately.\n\n Returns\n -------\n\n str or None\n If the file upload is successful, returns the gs:// URL of the uploaded\n file. If it failed, will return None."}
{"_id": "q_9301", "text": "This runs `lcproc.lcvfeatures.parallel_varfeatures` on fake LCs in\n `simbasedir`.\n\n Parameters\n ----------\n\n simbasedir : str\n The directory containing the fake LCs to process.\n\n mindet : int\n The minimum number of detections needed to accept an LC and process it.\n\n nworkers : int or None\n The number of parallel workers to use when extracting variability\n features from the input light curves.\n\n Returns\n -------\n\n str\n The path to the `varfeatures` pickle created after running the\n `lcproc.lcvfeatures.parallel_varfeatures` function."}
{"_id": "q_9302", "text": "This calculates precision.\n\n https://en.wikipedia.org/wiki/Precision_and_recall\n\n Parameters\n ----------\n\n ntp : int\n The number of true positives.\n\n nfp : int\n The number of false positives.\n\n Returns\n -------\n\n float\n The precision calculated using `ntp/(ntp + nfp)`."}
{"_id": "q_9303", "text": "This calculates recall.\n\n https://en.wikipedia.org/wiki/Precision_and_recall\n\n Parameters\n ----------\n\n ntp : int\n The number of true positives.\n\n nfn : int\n The number of false negatives.\n\n Returns\n -------\n\n float\n The recall calculated using `ntp/(ntp + nfn)`."}
{"_id": "q_9304", "text": "This runs a variable index grid search per magbin.\n\n For each magbin, this does a grid search using the stetson and inveta ranges\n provided and tries to optimize the Matthews Correlation Coefficient (best\n value is +1.0), indicating the best possible separation of variables\n vs. nonvariables. The thresholds on these two variable indexes that produce\n the largest coeff for the collection of fake LCs will probably be the ones\n that work best for actual variable classification on the real LCs.\n\n https://en.wikipedia.org/wiki/Matthews_correlation_coefficient\n\n For each grid-point, calculates the true positives, false positives, true\n negatives, false negatives. Then gets the precision and recall, confusion\n matrix, and the ROC curve for variable vs. nonvariable.\n\n Once we've identified the best thresholds to use, we can then calculate\n variable object numbers:\n\n - as a function of magnitude\n - as a function of period\n - as a function of number of detections\n - as a function of amplitude of variability\n\n\n Writes everything back to `simbasedir/fakevar-recovery.pkl`. Use the\n plotting function below to make plots for the results.\n\n Parameters\n ----------\n\n simbasedir : str\n The directory where the fake LCs are located.\n\n stetson_stdev_range : sequence of 2 floats\n The min and max values of the Stetson J variability index to generate a\n grid over these to test for the values of this index that produce the\n 'best' recovery rate for the injected variable stars.\n\n inveta_stdev_range : sequence of 2 floats\n The min and max values of the 1/eta variability index to generate a\n grid over these to test for the values of this index that produce the\n 'best' recovery rate for the injected variable stars.\n\n iqr_stdev_range : sequence of 2 floats\n The min and max values of the IQR variability index to generate a\n grid over these to test for the values of this index that produce the\n 'best' recovery rate for the injected variable stars.\n\n ngridpoints : int\n The number of grid points for each variability index grid. Remember that\n this function will be searching in 3D and will require lots of time to\n run if ngridpoints is too large.\n\n For the default number of grid points and 25000 simulated light curves,\n this takes about 3 days to run on a 40 (effective) core machine with 2 x\n Xeon E5-2650v3 CPUs.\n\n ngridworkers : int or None\n The number of parallel grid search workers that will be launched.\n\n Returns\n -------\n\n dict\n The returned dict contains a list of recovery stats for each magbin and\n each grid point in the variability index grids that were used. This dict\n can be passed to the plotting function below to plot the results."}
{"_id": "q_9305", "text": "This runs periodfinding using several period-finders on a collection of\n fake LCs.\n\n As a rough benchmark, 25000 fake LCs with 10000--50000 points per LC take\n about 26 days in total to run on an invocation of this function using\n GLS+PDM+BLS and 10 periodworkers and 4 controlworkers (so all 40 'cores') on\n a 2 x Xeon E5-2660v3 machine.\n\n Parameters\n ----------\n\n pfmethods : sequence of str\n This is used to specify which periodfinders to run. These must be in the\n `lcproc.periodsearch.PFMETHODS` dict.\n\n pfkwargs : sequence of dict\n This is used to provide optional kwargs to the period-finders.\n\n getblssnr : bool\n If this is True, will run BLS SNR calculations for each object and\n magcol. This takes a while to run, so it's disabled (False) by default.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n nperiodworkers : int\n This is the number of parallel period-finding worker processes to use.\n\n ncontrolworkers : int\n This is the number of parallel period-finding control workers to\n use. Each control worker will launch `nperiodworkers` worker processes.\n\n liststartindex : int\n The starting index of processing. This refers to the filename list\n generated by running `glob.glob` on the fake LCs in `simbasedir`.\n\n maxobjects : int\n The maximum number of objects to process in this run. Use this with\n `liststartindex` to effectively distribute working on a large list of\n input light curves over several sessions or machines.\n\n Returns\n -------\n\n str\n The path to the output summary pickle produced by\n `lcproc.periodsearch.parallel_pf`"}
{"_id": "q_9306", "text": "This runs a TESS Input Catalog cone search on MAST.\n\n If you use this, please cite the TIC paper (Stassun et al 2018;\n http://adsabs.harvard.edu/abs/2018AJ....156..102S). Also see the \"living\"\n TESS input catalog docs:\n\n https://docs.google.com/document/d/1zdiKMs4Ld4cXZ2DW4lMX-fuxAF6hPHTjqjIwGqnfjqI\n\n Also see: https://mast.stsci.edu/api/v0/_t_i_cfields.html for the fields\n returned by the service and present in the result JSON file.\n\n Parameters\n ----------\n\n ra,decl : float\n The center coordinates of the cone-search in decimal degrees.\n\n radius_arcmin : float\n The cone-search radius in arcminutes.\n\n apiversion : str\n The API version of the MAST service to use. This sets the URL that this\n function will call, using `apiversion` as key into the `MAST_URLS` dict\n above.\n\n forcefetch : bool\n If this is True, the query will be retried even if cached results for\n it exist.\n\n cachedir : str\n This points to the directory where results will be downloaded.\n\n verbose : bool\n If True, will indicate progress and warn of any issues.\n\n timeout : float\n This sets the amount of time in seconds to wait for the service to\n respond to our initial request.\n\n refresh : float\n This sets the amount of time in seconds to wait before checking if the\n result file is available. If the results file isn't available after\n `refresh` seconds have elapsed, the function will wait for `refresh`\n seconds continuously, until `maxtimeout` is reached or the results file\n becomes available.\n\n maxtimeout : float\n The maximum amount of time in seconds to wait for a result to become\n available after submitting our query request.\n\n maxtries : int\n The maximum number of tries (across all mirrors tried) to make to either\n submit the request or download the results, before giving up.\n\n jitter : float\n This is used to control the scale of the random wait in seconds before\n starting the query. Useful in parallelized situations.\n\n raiseonfail : bool\n If this is True, the function will raise an Exception if something goes\n wrong, instead of returning None.\n\n Returns\n -------\n\n dict\n This returns a dict of the following form::\n\n {'params':dict of the input params used for the query,\n 'provenance':'cache' or 'new download',\n 'result':path to the file on disk with the downloaded data table}"}
{"_id": "q_9307", "text": "This runs a TIC search for a specified TIC ID.\n\n Parameters\n ----------\n\n objectid : str\n The object ID to look up information for.\n\n idcol_to_use : str\n This is the name of the object ID column to use when looking up the\n provided `objectid`. This is one of {'ID', 'HIP', 'TYC', 'UCAC',\n 'TWOMASS', 'ALLWISE', 'SDSS', 'GAIA', 'APASS', 'KIC'}.\n\n apiversion : str\n The API version of the MAST service to use. This sets the URL that this\n function will call, using `apiversion` as key into the `MAST_URLS` dict\n above.\n\n forcefetch : bool\n If this is True, the query will be retried even if cached results for\n it exist.\n\n cachedir : str\n This points to the directory where results will be downloaded.\n\n verbose : bool\n If True, will indicate progress and warn of any issues.\n\n timeout : float\n This sets the amount of time in seconds to wait for the service to\n respond to our initial request.\n\n refresh : float\n This sets the amount of time in seconds to wait before checking if the\n result file is available. If the results file isn't available after\n `refresh` seconds have elapsed, the function will wait for `refresh`\n seconds continuously, until `maxtimeout` is reached or the results file\n becomes available.\n\n maxtimeout : float\n The maximum amount of time in seconds to wait for a result to become\n available after submitting our query request.\n\n maxtries : int\n The maximum number of tries (across all mirrors tried) to make to either\n submit the request or download the results, before giving up.\n\n jitter : float\n This is used to control the scale of the random wait in seconds before\n starting the query. Useful in parallelized situations.\n\n raiseonfail : bool\n If this is True, the function will raise an Exception if something goes\n wrong, instead of returning None.\n\n Returns\n -------\n\n dict\n This returns a dict of the following form::\n\n {'params':dict of the input params used for the query,\n 'provenance':'cache' or 'new download',\n 'result':path to the file on disk with the downloaded data table}"}
{"_id": "q_9308", "text": "This sends an email to addresses, informing them about events.\n\n The email account settings are retrieved from the settings file as described\n above.\n\n Parameters\n ----------\n\n sender : str\n The name of the sender to use in the email header.\n\n subject : str\n Subject of the email.\n\n content : str\n Content of the email.\n\n email_recipient list : list of str\n This is a list of email recipient names of the form:\n `['Example Person 1', 'Example Person 2', ...]`\n\n email_recipient list : list of str\n This is a list of email recipient addresses of the form:\n `['example1@example.com', 'example2@example.org', ...]`\n\n email_user : str\n The username of the email server account that will send the emails. If\n this is None, the value of EMAIL_USER from the\n ~/.astrobase/.emailsettings file will be used. If that is None as well,\n this function won't work.\n\n email_pass : str\n The password of the email server account that will send the emails. If\n this is None, the value of EMAIL_PASS from the\n ~/.astrobase/.emailsettings file will be used. If that is None as well,\n this function won't work.\n\n email_server : str\n The address of the email server that will send the emails. If this is\n None, the value of EMAIL_SERVER from the ~/.astrobase/.emailsettings file\n will be used. If that is None as well, this function won't work.\n\n Returns\n -------\n\n bool\n True if email sending succeeded. False if email sending failed."}
{"_id": "q_9309", "text": "This generates a sinusoidal light curve using a Fourier cosine series.\n\n Parameters\n ----------\n\n fourierparams : list\n This MUST be a list of the following form like so::\n\n [period,\n epoch,\n [amplitude_1, amplitude_2, amplitude_3, ..., amplitude_X],\n [phase_1, phase_2, phase_3, ..., phase_X]]\n\n where X is the Fourier order.\n\n times,mags,errs : np.array\n The input time-series of measurements and associated errors for which\n the model will be generated. The times will be used to generate model\n mags, and the input `times`, `mags`, and `errs` will be resorted by\n model phase and returned.\n\n Returns\n -------\n\n (modelmags, phase, ptimes, pmags, perrs) : tuple\n Returns the model mags and phase values. Also returns the input `times`,\n `mags`, and `errs` sorted by the model's phase."}
{"_id": "q_9310", "text": "Makes the mag-series plot tile for `checkplot_png` and\n `twolsp_checkplot_png`.\n\n axes : matplotlib.axes.Axes object\n The Axes object where the generated plot will go.\n\n stimes,smags,serrs : np.array\n The mag/flux time-series arrays along with associated errors. These\n should all have been run through nan-stripping and sigma-clipping\n beforehand.\n\n magsarefluxes : bool\n If True, indicates the input time-series is fluxes and not mags so the\n plot y-axis direction and range can be set appropriately.\n\n ms : float\n The `markersize` kwarg to use when making the mag-series plot.\n\n Returns\n -------\n\n Does not return anything, works on the input Axes object directly."}
{"_id": "q_9311", "text": "Precesses target coordinates `ra`, `dec` from `epoch_one` to `epoch_two`.\n\n This takes into account the jd of the observations, as well as the proper\n motion of the target mu_ra, mu_dec. Adapted from J. D. Hartman's\n VARTOOLS/converttime.c [coordprecess].\n\n Parameters\n ----------\n\n ra,dec : float\n The equatorial coordinates of the object at `epoch_one` to precess in\n decimal degrees.\n\n epoch_one : float\n Origin epoch to precess from to target epoch. This is a float, like:\n 1985.0, 2000.0, etc.\n\n epoch_two : float\n Target epoch to precess from origin epoch. This is a float, like:\n 2000.0, 2018.0, etc.\n\n jd : float\n The full Julian date to use along with the propermotions in `mu_ra`, and\n `mu_dec` to handle proper motion along with the coordinate frame\n precession. If one of `jd`, `mu_ra`, or `mu_dec` is missing, the proper\n motion will not be used to calculate the final precessed coordinates.\n\n mu_ra,mu_dec : float\n The proper motion in mas/yr in right ascension and declination. If these\n are provided along with `jd`, the total proper motion of the object will\n be taken into account to calculate the final precessed coordinates.\n\n outscalar : bool\n If True, converts the output coordinates from one-element np.arrays to\n scalars.\n\n Returns\n -------\n\n precessed_ra, precessed_dec : float\n A tuple of precessed equatorial coordinates in decimal degrees at\n `epoch_two` taking into account proper motion if `jd`, `mu_ra`, and\n `mu_dec` are provided."}
{"_id": "q_9312", "text": "This returns True if only one True-ish element exists in `iterable`.\n\n Parameters\n ----------\n\n iterable : iterable\n\n Returns\n -------\n\n bool\n True if only one True-ish element exists in `iterable`. False otherwise."}
{"_id": "q_9313", "text": "This calculates the future epochs for a transit, given a period and a\n starting epoch.\n\n The equation used is::\n\n t_mid = period*epoch + t0\n\n Default behavior if no kwargs are used is to define `t0` as the median\n finite time of the passed `t_mid` array.\n\n Only one of `err_t_mid` or `t0_fixed` should be passed.\n\n Parameters\n ----------\n\n t_mid : np.array\n A np.array of transit mid-time measurements\n\n period : float\n The period used to calculate epochs, per the equation above. For typical\n use cases, a period precise to ~1e-5 days is sufficient to get correct\n epochs.\n\n err_t_mid : None or np.array\n If provided, contains the errors of the transit mid-time\n measurements. The zero-point epoch is then set equal to the average of\n the transit times, weighted as `1/err_t_mid^2` . This minimizes the\n covariance between the transit epoch and the period (e.g., Gibson et\n al. 2013). For standard O-C analysis this is the best method.\n\n t0_fixed : None or float:\n If provided, use this t0 as the starting epoch. (Overrides all others).\n\n t0_percentile : None or float\n If provided, use this percentile of `t_mid` to define `t0`.\n\n Returns\n -------\n\n tuple\n This is a tuple of the form `(integer_epoch_array, t0)`.\n `integer_epoch_array` is an array of integer epochs (float-type),\n of length equal to the number of *finite* mid-times passed."}
{"_id": "q_9314", "text": "This converts a UTC JD to a Python `datetime` object or ISO date string.\n\n Parameters\n ----------\n\n jd : float\n The Julian date measured at UTC.\n\n returniso : bool\n If False, returns a naive Python `datetime` object corresponding to\n `jd`. If True, returns the ISO format string corresponding to the date\n and time at UTC from `jd`.\n\n Returns\n -------\n\n datetime or str\n Depending on the value of `returniso`."}
{"_id": "q_9315", "text": "Returns BJD_TDB or HJD_TDB for input JD_UTC.\n\n The equation used is::\n\n BJD_TDB = JD_UTC + JD_to_TDB_corr + romer_delay\n\n where:\n\n - JD_to_TDB_corr is the difference between UTC and TDB JDs\n - romer_delay is the delay caused by finite speed of light from Earth-Sun\n\n This is based on the code at:\n\n https://mail.scipy.org/pipermail/astropy/2014-April/003148.html\n\n Note that this does not correct for:\n\n 1. precession of coordinates if the epoch is not 2000.0\n 2. precession of coordinates if the target has a proper motion\n 3. Shapiro delay\n 4. Einstein delay\n\n Parameters\n ----------\n\n jd : float or array-like\n The Julian date(s) measured at UTC.\n\n ra,dec : float\n The equatorial coordinates of the object in decimal degrees.\n\n obslon,obslat,obsalt : float or None\n The longitude, latitude of the observatory in decimal degrees and\n altitude of the observatory in meters. If these are not provided, the\n corrected JD will be calculated with respect to the center of the Earth.\n\n jd_type : {'bjd','hjd'}\n Conversion type to perform, either to Barycentric Julian Date ('bjd')\n or to Heliocentric Julian Date ('hjd').\n\n Returns\n -------\n\n float or np.array\n The converted BJD or HJD."}
{"_id": "q_9316", "text": "This wraps `checkplotlist.checkplot_infokey_worker`.\n\n This is used to get the correct dtype for each element in retrieved results.\n\n Parameters\n ----------\n\n task : tuple\n task[0] = cpfile\n task[1] = keyspeclist (infokeys kwarg from `add_cpinfo_to_lclist`)\n\n Returns\n -------\n\n dict\n All of the requested keys from the checkplot are returned along with\n their values in a dict."}
{"_id": "q_9317", "text": "Handle changes from atom ContainerLists"}
{"_id": "q_9318", "text": "Create the underlying widget."}
{"_id": "q_9319", "text": "Initialize the underlying map options."}
{"_id": "q_9320", "text": "Add markers, polys, callouts, etc."}
{"_id": "q_9321", "text": "Initialize the info window adapter. Should only be done if one of \n the markers defines a custom view."}
{"_id": "q_9322", "text": "Create the fragment and pull the map reference when it's loaded."}
{"_id": "q_9323", "text": "Remove the marker if it was added to the map when destroying"}
{"_id": "q_9324", "text": "If a child is added we have to make sure the map adapter exists"}
{"_id": "q_9325", "text": "Convert our options into the actual marker object"}
{"_id": "q_9326", "text": "Instantiate and configures backends.\n\n :arg list-of-dicts backends: the backend configuration as a list of dicts where\n each dict specifies a separate backend.\n\n Each backend dict consists of two things:\n\n 1. ``class`` with a value that is either a Python class or a dotted\n Python path to one\n\n 2. ``options`` dict with options for the backend in question to\n configure it\n\n See the documentation for the backends you're using to know what is\n configurable in the options dict.\n\n :arg raise_errors bool: whether or not to raise an exception if something\n happens in configuration; if it doesn't raise an exception, it'll log\n the exception\n\n For example, this sets up a\n :py:class:`markus.backends.logging.LoggingMetrics` backend::\n\n markus.configure([\n {\n 'class': 'markus.backends.logging.LoggingMetrics',\n 'options': {\n 'logger_name': 'metrics'\n }\n }\n ])\n\n\n You can set up as many backends as you like.\n\n .. Note::\n\n During application startup, Markus should get configured before the app\n starts generating metrics. Any metrics generated before Markus is\n configured will get dropped.\n\n However, anything can call :py:func:`markus.get_metrics` and get a\n :py:class:`markus.main.MetricsInterface` before Markus has been\n configured including at module load time."}
{"_id": "q_9327", "text": "Return MetricsInterface instance with specified name.\n\n The name is used as the prefix for all keys generated with this\n :py:class:`markus.main.MetricsInterface`.\n\n The :py:class:`markus.main.MetricsInterface` is not tied to metrics\n backends. The list of active backends are globally configured. This allows\n us to create :py:class:`markus.main.MetricsInterface` classes without\n having to worry about bootstrapping order of the app.\n\n :arg class/instance/str thing: The name to use as a key prefix.\n\n If this is a class, it uses the dotted Python path. If this is an\n instance, it uses the dotted Python path plus ``str(instance)``.\n\n :arg str extra: Any extra bits to add to the end of the name.\n\n :returns: a ``MetricsInterface`` instance\n\n Examples:\n\n >>> from markus import get_metrics\n\n Create a MetricsInterface with the name \"myapp\" and generate a count with\n stat \"myapp.thing1\" and value 1:\n\n >>> metrics = get_metrics('myapp')\n >>> metrics.incr('thing1', value=1)\n\n Create a MetricsInterface with the prefix of the Python module it's being\n called in:\n\n >>> metrics = get_metrics(__name__)\n\n Create a MetricsInterface with the prefix as the qualname of the class:\n\n >>> class Foo:\n ... def __init__(self):\n ... self.metrics = get_metrics(self)\n\n Create a prefix of the class path plus some identifying information:\n\n >>> class Foo:\n ... def __init__(self, myname):\n ... self.metrics = get_metrics(self, extra=myname)\n ...\n >>> foo = Foo('jim')\n\n Assume that ``Foo`` is defined in the ``myapp`` module. Then this will\n generate the name ``myapp.Foo.jim``."}
{"_id": "q_9328", "text": "Record a timing value.\n\n Record the length of time of something to be added to a set of values from\n which a statistical distribution is derived.\n\n Depending on the backend, you might end up with count, average, median,\n 95% and max for a set of timing values.\n\n This is useful for analyzing how long things take to occur. For\n example, how long it takes for a function to run, to upload files, or\n for a database query to execute.\n\n :arg string stat: A period delimited alphanumeric key.\n\n :arg int value: A timing in milliseconds.\n\n :arg list-of-strings tags: Each string in the tag consists of a key and\n a value separated by a colon. Tags can make it easier to break down\n metrics for analysis.\n\n For example ``['env:stage', 'compressed:yes']``.\n\n For example:\n\n >>> import time\n >>> import markus\n\n >>> metrics = markus.get_metrics('foo')\n >>> def upload_file(payload):\n ... start_time = time.perf_counter() # this is in seconds\n ... # upload the file\n ... timing = (time.perf_counter() - start_time) * 1000.0 # convert to ms\n ... metrics.timing('upload_file_time', value=timing)\n\n .. Note::\n\n If you're timing a function or a block of code, it's probably more\n convenient to use :py:meth:`markus.main.MetricsInterface.timer` or\n :py:meth:`markus.main.MetricsInterface.timer_decorator`."}
{"_id": "q_9329", "text": "Timer decorator for easily computing timings.\n\n :arg string stat: A period delimited alphanumeric key.\n\n :arg list-of-strings tags: Each string in the tag consists of a key and\n a value separated by a colon. Tags can make it easier to break down\n metrics for analysis.\n\n For example ``['env:stage', 'compressed:yes']``.\n\n For example:\n\n >>> mymetrics = get_metrics(__name__)\n\n >>> @mymetrics.timer_decorator('long_function')\n ... def long_function():\n ... # perform some thing we want to keep metrics on\n ... pass\n\n\n .. Note::\n\n All timings generated with this are in milliseconds."}
{"_id": "q_9330", "text": "Generate a tag for use with the tag backends.\n\n The key and value (if there is one) are sanitized according to the\n following rules:\n\n 1. after the first character, all characters must be alphanumeric,\n underscore, minus, period, or slash--invalid characters are converted\n to \"_\"\n 2. lowercase\n\n If a value is provided, the final tag is `key:value`.\n\n The final tag must start with a letter. If it doesn't, an \"a\" is prepended.\n\n The final tag is truncated to 200 characters.\n\n If the final tag is \"device\", \"host\", or \"source\", then a \"_\" will be\n appended the end.\n\n :arg str key: the key to use\n :arg str value: the value (if any)\n\n :returns: the final tag\n\n Examples:\n\n >>> generate_tag('yellow')\n 'yellow'\n >>> generate_tag('rule', 'is_yellow')\n 'rule:is_yellow'\n\n Example with ``incr``:\n\n >>> import markus\n >>> mymetrics = markus.get_metrics(__name__)\n\n >>> mymetrics.incr('somekey', value=1,\n ... tags=[generate_tag('rule', 'is_yellow')])"}
{"_id": "q_9331", "text": "Report a timing."}
{"_id": "q_9332", "text": "Roll up stats and log them."}
{"_id": "q_9333", "text": "Learn the vocabulary dictionary and return term-document matrix.\n This is equivalent to fit followed by transform, but more efficiently\n implemented.\n\n Parameters\n ----------\n raw_documents : iterable\n An iterable which yields either str, unicode or file objects.\n\n Returns\n -------\n X : array, [n_samples, n_features]\n Document-term matrix."}
{"_id": "q_9334", "text": "Add transformer to flow and apply transformer to data in flow\n\n Parameters\n ----------\n transformer : Transformer\n a transformer to transform data"}
{"_id": "q_9335", "text": "Train model with transformed data"}
{"_id": "q_9336", "text": "Export model and transformers to export_folder\n\n Parameters\n ----------\n model_name: string\n name of model to export\n export_folder: string\n folder to store exported model and transformers"}
{"_id": "q_9337", "text": "Fit linear model with Stochastic Gradient Descent.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}, shape (n_samples, n_features)\n Training data\n\n y : numpy array, shape (n_samples,)\n Target values\n\n coef_init : array, shape (n_classes, n_features)\n The initial coefficients to warm-start the optimization.\n\n intercept_init : array, shape (n_classes,)\n The initial intercept to warm-start the optimization.\n\n sample_weight : array-like, shape (n_samples,), optional\n Weights applied to individual samples.\n If not provided, uniform weights are assumed. These weights will\n be multiplied with class_weight (passed through the\n constructor) if class_weight is specified\n\n Returns\n -------\n self : returns an instance of self."}
{"_id": "q_9338", "text": "pretty print for confusion matrixes"}
{"_id": "q_9339", "text": "Given a URL, look for the corresponding dataset in the local cache.\n If it's not there, download it. Then return the path to the cached file."}
{"_id": "q_9340", "text": "Predict class labels for samples in X.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}, shape = [n_samples, n_features]\n Samples."}
{"_id": "q_9341", "text": "In order to obtain the most likely label for a list of text\n\n Parameters\n ----------\n X : list of string\n Raw texts\n\n Returns\n -------\n C : list of string\n List labels"}
{"_id": "q_9342", "text": "Make an annotation value that can be used to sort by an enum field.\n\n ``field``\n The name of an EnumChoiceField.\n\n ``members``\n An iterable of Enum members in the order to sort by.\n\n Use like:\n\n .. code-block:: python\n\n desired_order = [MyEnum.bar, MyEnum.baz, MyEnum.foo]\n ChoiceModel.objects\\\\\n .annotate(my_order=order_enum('choice', desired_order))\\\\\n .order_by('my_order')\n\n As Enums are iterable, ``members`` can be the Enum itself\n if the default ordering is desired:\n\n .. code-block:: python\n\n ChoiceModel.objects\\\\\n .annotate(my_order=order_enum('choice', MyEnum))\\\\\n .order_by('my_order')\n\n .. warning:: On Python 2, Enums may not have a consistent order,\n depending upon how they were defined.\n You can set an explicit order using ``__order__`` to fix this.\n See the ``enum34`` docs for more information.\n\n Any enum members not present in the list of members\n will be sorted to the end of the results."}
{"_id": "q_9343", "text": "Convert a string from the database into an Enum value"}
{"_id": "q_9344", "text": "Convert an Enum value into a string for the database"}
{"_id": "q_9345", "text": "Return the config files for an environment & cluster specific app."}
{"_id": "q_9346", "text": "Merge the configuration sources and return the resulting DotDict."}
{"_id": "q_9347", "text": "Merge dictionary d2 into d1, overriding entries in d1 with values from d2.\n\n d1 is mutated.\n\n _path is for internal, recursive use."}
{"_id": "q_9348", "text": "Return a subset of a dictionary using the specified keys."}
{"_id": "q_9349", "text": "Convert obj into a DotDict, or list of DotDict.\n\n Directly nested lists aren't supported.\n Returns the result"}
{"_id": "q_9350", "text": "Write configuration to the applicaiton directory."}
{"_id": "q_9351", "text": "`usls` is an iterable of usl.\n\n return a mapping term -> usl list"}
{"_id": "q_9352", "text": "Record an event with the meter. By default it will record one event.\n\n :param value: number of event to record"}
{"_id": "q_9353", "text": "Returns the mean rate of the events since the start of the process."}
{"_id": "q_9354", "text": "Send metric and its snapshot."}
{"_id": "q_9355", "text": "Compose a statsd compatible string for a metric's measurement."}
{"_id": "q_9356", "text": "Get method that raises MissingSetting if the value was unset.\n\n This differs from the SafeConfigParser which may raise either a\n NoOptionError or a NoSectionError.\n\n We take extra **kwargs because the Python 3.5 configparser extends the\n get method signature and it calls self with those parameters.\n\n def get(self, section, option, *, raw=False, vars=None,\n fallback=_UNSET):"}
{"_id": "q_9357", "text": "json.loads wants an unistr in Python3. Convert it."}
{"_id": "q_9358", "text": "Convert set of human codes and to a dict of code to exactonline\n guid mappings.\n\n Example::\n\n ret = inv.get_ledger_code_to_guid_map(['1234', '5555'])\n ret == {'1234': '<guid1_from_exactonline_ledgeraccounts>',\n '5555': '<guid2_from_exactonline_ledgeraccounts>'}"}
{"_id": "q_9359", "text": "Get the \"current\" division and return a dictionary of divisions\n so the user can select the right one."}
{"_id": "q_9360", "text": "Optionally supply a list of ExactOnline invoice numbers.\n\n Returns a dictionary of ExactOnline invoice numbers to foreign\n (YourRef) invoice numbers."}
{"_id": "q_9361", "text": "solve a Sudoku grid inplace"}
{"_id": "q_9362", "text": "Create Django class-based view from injector class."}
{"_id": "q_9363", "text": "Create Flask method based dispatching view from injector class."}
{"_id": "q_9364", "text": "Create DRF class-based API view from injector class."}
{"_id": "q_9365", "text": "Create DRF model view set from injector class."}
{"_id": "q_9366", "text": "Recieve a streamer for a given file descriptor."}
{"_id": "q_9367", "text": "Called by the event loop whenever the fd is ready for reading."}
{"_id": "q_9368", "text": "Finalize closing."}
{"_id": "q_9369", "text": "Stop watching a given rule."}
{"_id": "q_9370", "text": "Actual rule setup."}
{"_id": "q_9371", "text": "Start the watcher, registering new watches if any."}
{"_id": "q_9372", "text": "Fetch an event.\n\n This coroutine will swallow events for removed watches."}
{"_id": "q_9373", "text": "Return True if valid, raise ValueError if not"}
{"_id": "q_9374", "text": "Return the total downloads, and the downloads column"}
{"_id": "q_9375", "text": "Respond to ``nsqd`` that you need more time to process the message."}
{"_id": "q_9376", "text": "Update the timer to reflect a successfull call"}
{"_id": "q_9377", "text": "Update the timer to reflect a failed call"}
{"_id": "q_9378", "text": "Closes all connections stops all periodic callbacks"}
{"_id": "q_9379", "text": "Adds a connection to ``nsqd`` at the specified address.\n\n :param host: the address to connect to\n :param port: the port to connect to"}
{"_id": "q_9380", "text": "Trigger a query of the configured ``nsq_lookupd_http_addresses``."}
{"_id": "q_9381", "text": "Stop listening for the named event via the specified callback.\n\n :param name: the name of the event\n :type name: string\n\n :param callback: the callback that was originally used\n :type callback: callable"}
{"_id": "q_9382", "text": "Execute the callbacks for the listeners on the specified event with the\n supplied arguments.\n\n All extra arguments are passed through to each callback.\n\n :param name: the name of the event\n :type name: string"}
{"_id": "q_9383", "text": "Modify soup so Dash.app can generate TOCs on the fly."}
{"_id": "q_9384", "text": "Convert docs from SOURCE to Dash.app's docset format."}
{"_id": "q_9385", "text": "We use logging's levels as an easy-to-use verbosity controller."}
{"_id": "q_9386", "text": "Determine source and destination using the options."}
{"_id": "q_9387", "text": "Create boilerplate files & directories and copy vanilla docs inside.\n\n Return a tuple of path to resources and connection to sqlite db."}
{"_id": "q_9388", "text": "Add icon to docset"}
{"_id": "q_9389", "text": "Make prediction\n input test data\n output the prediction"}
{"_id": "q_9390", "text": "Theta sigmoid function"}
{"_id": "q_9391", "text": "Merges the default adapters file in the trimmomatic adapters directory\n\n Returns\n -------\n str\n Path with the merged adapters file."}
{"_id": "q_9392", "text": "Main executor of the trimmomatic template.\n\n Parameters\n ----------\n sample_id : str\n Sample Identification string.\n fastq_pair : list\n Two element list containing the paired FastQ files.\n trim_range : list\n Two element list containing the trimming range.\n trim_opts : list\n Four element list containing several trimmomatic options:\n [*SLIDINGWINDOW*; *LEADING*; *TRAILING*; *MINLEN*]\n phred : int\n Guessed phred score for the sample. The phred score is a generated\n output from :py:class:`templates.integrity_coverage`.\n adapters_file : str\n Path to adapters file. If not provided, or the path is not available,\n it will use the default adapters from Trimmomatic will be used\n clear : str\n Can be either 'true' or 'false'. If 'true', the input fastq files will\n be removed at the end of the run, IF they are in the working directory"}
{"_id": "q_9393", "text": "Function that parse samtools depth file and creates 3 dictionaries that\n will be useful to make the outputs of this script, both the tabular file\n and the json file that may be imported by pATLAS\n\n Parameters\n ----------\n depth_file: textIO\n the path to depth file for each sample\n\n Returns\n -------\n depth_dic_coverage: dict\n dictionary with the coverage per position for each plasmid"}
{"_id": "q_9394", "text": "Sets the path to the appropriate jinja template file\n\n When a Process instance is initialized, this method will fetch\n the location of the appropriate template file, based on the\n ``template`` argument. It will raise an exception is the template\n file is not found. Otherwise, it will set the\n :py:attr:`Process.template_path` attribute."}
{"_id": "q_9395", "text": "Returns the main raw channel for the process\n\n Provided with at least a channel name, this method returns the raw\n channel name and specification (the nextflow string definition)\n for the process. By default, it will fork from the raw input of\n the process' :attr:`~Process.input_type` attribute. However, this\n behaviour can be overridden by providing the ``input_type`` argument.\n\n If the specified or inferred input type exists in the\n :attr:`~Process.RAW_MAPPING` dictionary, the channel info dictionary\n will be retrieved along with the specified input channel. Otherwise,\n it will return None.\n\n An example of the returned dictionary is::\n\n {\"input_channel\": \"myChannel\",\n \"params\": \"fastq\",\n \"channel\": \"IN_fastq_raw\",\n \"channel_str\":\"IN_fastq_raw = Channel.fromFilePairs(params.fastq)\"\n }\n\n Returns\n -------\n dict or None\n Dictionary with the complete raw channel info. None if no\n channel is found."}
{"_id": "q_9396", "text": "Wrapper to the jinja2 render method from a template file\n\n Parameters\n ----------\n template : str\n Path to template file.\n context : dict\n Dictionary with kwargs context to populate the template"}
{"_id": "q_9397", "text": "Class property that returns a populated template string\n\n This property allows the template of a particular process to be\n dynamically generated and returned when doing ``Process.template_str``.\n\n Returns\n -------\n x : str\n String with the complete and populated process template"}
{"_id": "q_9398", "text": "General purpose method that sets the main channels\n\n This method will take a variable number of keyword arguments to\n set the :py:attr:`Process._context` attribute with the information\n on the main channels for the process. This is done by appending\n the process ID (:py:attr:`Process.pid`) attribute to the input,\n output and status channel prefix strings. In the output channel,\n the process ID is incremented by 1 to allow the connection with the\n channel in the next process.\n\n The ``**kwargs`` system for setting the :py:attr:`Process._context`\n attribute also provides additional flexibility. In this way,\n individual processes can provide additional information not covered\n in this method, without changing it.\n\n Parameters\n ----------\n kwargs : dict\n Dictionary with the keyword arguments for setting up the template\n context"}
{"_id": "q_9399", "text": "Updates the forks attribute with the sink channel destination\n\n Parameters\n ----------\n sink : str\n Channel onto which the main input will be forked to"}
{"_id": "q_9400", "text": "General purpose method for setting a secondary channel\n\n This method allows a given source channel to be forked into one or\n more channels and sets those forks in the :py:attr:`Process.forks`\n attribute. Both the source and the channels in the ``channel_list``\n argument must be the final channel strings, which means that this\n method should be called only after setting the main channels.\n\n If the source is not a main channel, this will simply create a fork\n or set for every channel in the ``channel_list`` argument list::\n\n SOURCE_CHANNEL_1.into{SINK_1;SINK_2}\n\n If the source is a main channel, this will apply some changes to\n the output channel of the process, to avoid overlapping main output\n channels. For instance, forking the main output channel for process\n 2 would create a ``MAIN_2.into{...}``. The issue here is that the\n ``MAIN_2`` channel is expected as the input of the next process, but\n now is being used to create the fork. To solve this issue, the output\n channel is modified into ``_MAIN_2``, and the fork is set to\n the channels provided channels plus the ``MAIN_2`` channel::\n\n _MAIN_2.into{MAIN_2;MAIN_5;...}\n\n Parameters\n ----------\n source : str\n String with the name of the source channel\n channel_list : list\n List of channels that will receive a fork of the secondary\n channel"}
{"_id": "q_9401", "text": "Sets the initial definition of the extra input channels.\n\n The ``channel_dict`` argument should contain the input type and\n destination channel of each parameter (which is the key)::\n\n channel_dict = {\n \"param1\": {\n \"input_type\": \"fasta\"\n \"channels\": [\"abricate_2_3\", \"chewbbaca_3_4\"]\n }\n }\n\n Parameters\n ----------\n channel_dict : dict\n Dictionary with the extra_input parameter as key, and a dictionary\n as a value with the input_type and destination channels"}
{"_id": "q_9402", "text": "Attempts to retrieve the coverage value from the header string.\n\n It splits the header by \"_\" and then screens the list backwards in\n search of the first float value. This will be interpreted as the\n coverage value. If it cannot find a float value, it returns None.\n This search methodology is based on the strings of assemblers\n like spades and skesa that put the mean kmer coverage for each\n contig in its corresponding fasta header.\n\n Parameters\n ----------\n header_str : str\n String\n\n Returns\n -------\n float or None\n The coverage value for the contig. None if it cannot find the\n value in the provide string."}
{"_id": "q_9403", "text": "Get GC content and proportions.\n\n Parameters\n ----------\n sequence : str\n The complete sequence of the contig.\n length : int\n The length of the sequence contig.\n\n Returns\n -------\n x : dict\n Dictionary with the at/gc/n counts and proportions"}
{"_id": "q_9404", "text": "Filters the contigs of the assembly according to user provided\\\n comparisons.\n\n The comparisons must be a list of three elements with the\n :py:attr:`~Assembly.contigs` key, operator and test value. For\n example, to filter contigs with a minimum length of 250, a comparison\n would be::\n\n self.filter_contigs([\"length\", \">=\", 250])\n\n The filtered contig ids will be stored in the\n :py:attr:`~Assembly.filtered_ids` list.\n\n The result of the test for all contigs will be stored in the\n :py:attr:`~Assembly.report` dictionary.\n\n Parameters\n ----------\n comparisons : list\n List with contig key, operator and value to test."}
{"_id": "q_9405", "text": "Returns the length of the assembly, without the filtered contigs.\n\n Returns\n -------\n x : int\n Total length of the assembly."}
{"_id": "q_9406", "text": "Writes a report with the test results for the current assembly\n\n Parameters\n ----------\n output_file : str\n Name of the output assembly file."}
{"_id": "q_9407", "text": "This function performs two sanity checks in the pipeline string. The first\n check, assures that each fork contains a lane token '|', while the second\n check looks for duplicated processes within the same fork.\n\n Parameters\n ----------\n pipeline_string: str\n String with the definition of the pipeline, e.g.::\n 'processA processB processC(ProcessD | ProcessE)'"}
{"_id": "q_9408", "text": "Wrapper that performs all sanity checks on the pipeline string\n\n Parameters\n ----------\n pipeline_str : str\n String with the pipeline definition"}
{"_id": "q_9409", "text": "Returns the lane of the last process that matches fork_process\n\n Parameters\n ----------\n fork_process : list\n List of processes before the fork.\n pipeline_list : list\n List with the pipeline connection dictionaries.\n\n Returns\n -------\n int\n Lane of the last process that matches fork_process"}
{"_id": "q_9410", "text": "From a raw pipeline string, get a list of lanes from the start\n of the current fork.\n\n When the pipeline is being parsed, it will be split at every fork\n position. The string at the right of the fork position will be provided\n to this function. It's job is to retrieve the lanes that result\n from that fork, ignoring any nested forks.\n\n Parameters\n ----------\n lanes_str : str\n Pipeline string after a fork split\n\n Returns\n -------\n lanes : list\n List of lists, with the list of processes for each lane"}
{"_id": "q_9411", "text": "Connects a linear list of processes into a list of dictionaries\n\n Parameters\n ----------\n plist : list\n List with process names. This list should contain at least two entries.\n lane : int\n Corresponding lane of the processes\n\n Returns\n -------\n res : list\n List of dictionaries with the links between processes"}
{"_id": "q_9412", "text": "Makes the connection between a process and the first processes in the\n lanes to which it forks.\n\n The ``lane`` argument should correspond to the lane of the source process.\n For each lane in ``sink``, the lane counter will increase.\n\n Parameters\n ----------\n source : str\n Name of the process that is forking\n sink : list\n List of the processes where the source will fork to. Each element\n corresponds to the start of a lane.\n source_lane : int\n Lane of the forking process\n lane : int\n Lane of the source process\n\n Returns\n -------\n res : list\n List of dictionaries with the links between processes"}
{"_id": "q_9413", "text": "Returns the pipeline string with unique identifiers and a dictionary with\n references between the unique keys and the original values\n\n Parameters\n ----------\n pipeline_str : str\n Pipeline string\n\n Returns\n -------\n str\n Pipeline string with unique identifiers\n dict\n Match between process unique values and original names"}
{"_id": "q_9414", "text": "Removes unique identifiers and add the original process names to the\n already parsed pipelines\n\n Parameters\n ----------\n identifiers_to_tags : dict\n Match between unique process identifiers and process names\n pipeline_links: list\n Parsed pipeline list with unique identifiers\n\n Returns\n -------\n list\n Pipeline list with original identifiers"}
{"_id": "q_9415", "text": "Parses the .nextflow.log file and retrieves the complete list\n of processes\n\n This method searches for specific signatures at the beginning of the\n .nextflow.log file::\n\n Apr-19 19:07:32.660 [main] DEBUG nextflow.processor\n TaskProcessor - Creating operator > report_corrupt_1_1 --\n maxForks: 4\n\n When a line with the .*Creating operator.* signature is found, the\n process name is retrieved and populates the :attr:`processes` attribute"}
{"_id": "q_9416", "text": "Clears inspect attributes when re-executing a pipeline"}
{"_id": "q_9417", "text": "Checks whether the channels to each process have been closed."}
{"_id": "q_9418", "text": "Method used to retrieve the contents of a log file into a list.\n\n Parameters\n ----------\n path\n\n Returns\n -------\n list or None\n Contents of the provided file, each line as a list entry"}
{"_id": "q_9419", "text": "Assess whether the cpu load or memory usage is above the allocation\n\n Parameters\n ----------\n process : str\n Process name\n vals : vals\n List of trace information for each tag of that process\n\n Returns\n -------\n cpu_warnings : dict\n Keys are tags and values are the excessive cpu load\n mem_warnings : dict\n Keys are tags and values are the excessive rss"}
{"_id": "q_9420", "text": "Updates the process stats with the information from the processes\n\n This method is called at the end of each static parsing of the nextflow\n trace file. It re-populates the :attr:`process_stats` dictionary\n with the new stat metrics."}
{"_id": "q_9421", "text": "Method that parses the nextflow log file once and updates the\n submitted number of samples for each process"}
{"_id": "q_9422", "text": "Provides curses scroll functionality."}
{"_id": "q_9423", "text": "Provides curses horizontal padding"}
{"_id": "q_9424", "text": "Prepares the first batch of information, containing static\n information such as the pipeline file, and configuration files\n\n Returns\n -------\n dict\n Dict with the static information for the first POST request"}
{"_id": "q_9425", "text": "Function that opens the dotfile named .treeDag.json in the current\n working directory\n\n Returns\n -------\n Returns a dictionary with the dag object to be used in the post\n instance available through the method _establish_connection"}
{"_id": "q_9426", "text": "Gets the nextflow file path from the nextflow log file. It searches for\n the nextflow run command throughout the file.\n\n Parameters\n ----------\n log_file : str\n Path for the .nextflow.log file\n\n Returns\n -------\n str\n Path for the nextflow file"}
{"_id": "q_9427", "text": "Parses a nextflow trace file, searches for processes with a specific tag\n and sends a JSON report with the relevant information\n\n The expected fields for the trace file are::\n\n 0. task_id\n 1. process\n 2. tag\n 3. status\n 4. exit code\n 5. start timestamp\n 6. container\n 7. cpus\n 8. duration\n 9. realtime\n 10. queue\n 11. cpu percentage\n 12. memory percentage\n 13. real memory size of the process\n 14. virtual memory size of the process\n\n Parameters\n ----------\n trace_file : str\n Path to the nextflow trace file"}
{"_id": "q_9428", "text": "Brews a given list of processes according to the recipe\n\n Parameters\n ----------\n args : argparse.Namespace\n The arguments passed through argparser that will be used to check the\n the recipe, tasks and brew the process\n\n Returns\n -------\n str\n The final pipeline string, ready for the engine.\n list\n List of process strings."}
{"_id": "q_9429", "text": "Method that iterates over all available recipes and prints their\n information to the standard output\n\n Parameters\n ----------\n full : bool\n If true, it will provide the pipeline string along with the recipe name"}
{"_id": "q_9430", "text": "Validate pipeline string\n\n Validates the pipeline string by searching for forbidden characters\n\n Parameters\n ----------\n pipeline_string : str\n STring with the processes provided\n\n Returns\n -------"}
{"_id": "q_9431", "text": "Builds the upstream pipeline of the current process\n\n Checks for the upstream processes to the current process and\n adds them to the current pipeline fragment if they were provided in\n the process list.\n\n Parameters\n ----------\n process_descriptions : dict\n Information of processes input, output and if is forkable\n task : str\n Current process\n all_tasks : list\n A list of all provided processes\n task_pipeline : list\n Current pipeline fragment\n count_forks : int\n Current number of forks\n total_tasks : str\n All space separated processes\n forks : list\n Current forks\n Returns\n -------\n list : resulting pipeline fragment"}
{"_id": "q_9432", "text": "Builds the possible forks and connections between the provided\n processes\n\n This method loops through all the provided tasks and builds the\n upstream and downstream pipeline if required. It then returns all\n possible forks than need to be merged \u00e0 posteriori`\n\n Parameters\n ----------\n process_descriptions : dict\n Information of processes input, output and if is forkable\n tasks : str\n Space separated processes\n check_upstream : bool\n If is to build the upstream pipeline of the current task\n check_downstream : bool\n If is to build the downstream pipeline of the current task\n count_forks : int\n Number of current forks\n total_tasks : str\n All space separated processes\n forks : list\n Current forks\n\n Returns\n -------\n list : List with all the possible pipeline forks"}
{"_id": "q_9433", "text": "Generates a component string based on the provided parameters and\n directives\n\n Parameters\n ----------\n component : str\n Component name\n params : dict\n Dictionary with parameter information\n directives : dict\n Dictionary with directives information\n\n Returns\n -------\n str\n Component string with the parameters and directives, ready for\n parsing by flowcraft engine"}
{"_id": "q_9434", "text": "Public method for parsing abricate output files.\n\n This method is called at at class instantiation for the provided\n output files. Additional abricate output files can be added using\n this method after the class instantiation.\n\n Parameters\n ----------\n fls : list\n List of paths to Abricate files"}
{"_id": "q_9435", "text": "Parser for a single abricate output file.\n\n This parser will scan a single Abricate output file and populate\n the :py:attr:`Abricate.storage` attribute.\n\n Parameters\n ----------\n fl : str\n Path to abricate output file\n\n Notes\n -----\n This method will populate the :py:attr:`Abricate.storage` attribute\n with all compliant lines in the abricate output file. Entries are\n inserted using an arbitrary key that is set by the\n :py:attr:`Abricate._key` attribute."}
{"_id": "q_9436", "text": "General purpose filter iterator.\n\n This general filter iterator allows the filtering of entries based\n on one or more custom filters. These filters must contain\n an entry of the `storage` attribute, a comparison operator, and the\n test value. For example, to filter out entries with coverage below 80::\n\n my_filter = [\"coverage\", \">=\", 80]\n\n Filters should always be provide as a list of lists::\n\n iter_filter([[\"coverage\", \">=\", 80]])\n # or\n my_filters = [[\"coverage\", \">=\", 80],\n [\"identity\", \">=\", 50]]\n\n iter_filter(my_filters)\n\n As a convenience, a list of the desired databases can be directly\n specified using the `database` argument, which will only report\n entries for the specified databases::\n\n iter_filter(my_filters, databases=[\"plasmidfinder\"])\n\n By default, this method will yield the complete entry record. However,\n the returned filters can be specified using the `fields` option::\n\n iter_filter(my_filters, fields=[\"reference\", \"coverage\"])\n\n Parameters\n ----------\n filters : list\n List of lists with the custom filter. Each list should have three\n elements. (1) the key from the entry to be compared; (2) the\n comparison operator; (3) the test value. Example:\n ``[[\"identity\", \">\", 80]]``.\n databases : list\n List of databases that should be reported.\n fields : list\n List of fields from each individual entry that are yielded.\n filter_behavior : str\n options: ``'and'`` ``'or'``\n Sets the behaviour of the filters, if multiple filters have been\n provided. By default it is set to ``'and'``, which means that an\n entry has to pass all filters. It can be set to ``'or'``, in which\n case one one of the filters has to pass.\n\n yields\n ------\n dic : dict\n Dictionary object containing a :py:attr:`Abricate.storage` entry\n that passed the filters."}
{"_id": "q_9437", "text": "Writes the JSON report to a json file"}
{"_id": "q_9438", "text": "Main executor of the assembly_report template.\n\n Parameters\n ----------\n sample_id : str\n Sample Identification string.\n assembly_file : str\n Path to assembly file in Fasta format."}
{"_id": "q_9439", "text": "Generates a CSV report with summary statistics about the assembly\n\n The calculated statistics are:\n\n - Number of contigs\n - Average contig size\n - N50\n - Total assembly length\n - Average GC content\n - Amount of missing data\n\n Parameters\n ----------\n output_csv: str\n Name of the output CSV file."}
{"_id": "q_9440", "text": "Get proportion of GC from a string\n\n Parameters\n ----------\n s : str\n Arbitrary string\n\n Returns\n -------\n x : float\n GC proportion."}
{"_id": "q_9441", "text": "Calculates a sliding window of the GC content for the assembly\n\n\n Returns\n -------\n gc_res : list\n List of GC proportion floats for each data point in the sliding\n window"}
{"_id": "q_9442", "text": "Main executor of the skesa template.\n\n Parameters\n ----------\n sample_id : str\n Sample Identification string.\n fastq_pair : list\n Two element list containing the paired FastQ files.\n clear : str\n Can be either 'true' or 'false'. If 'true', the input fastq files will\n be removed at the end of the run, IF they are in the working directory"}
{"_id": "q_9443", "text": "Writes the report\n\n Parameters\n ----------\n data1\n data2\n\n Returns\n -------"}
{"_id": "q_9444", "text": "Returns the trim index from a ``bool`` list\n\n Provided with a list of ``bool`` elements (``[False, False, True, True]``),\n this function will assess the index of the list that minimizes the number\n of True elements (biased positions) at the extremities. To do so,\n it will iterate over the boolean list and find an index position where\n there are two consecutive ``False`` elements after a ``True`` element. This\n will be considered as an optimal trim position. For example, in the\n following list::\n\n [True, True, False, True, True, False, False, False, False, ...]\n\n The optimal trim index will be the 4th position, since it is the first\n occurrence of a ``True`` element with two False elements after it.\n\n If the provided ``bool`` list has no ``True`` elements, then the 0 index is\n returned.\n\n Parameters\n ----------\n biased_list: list\n List of ``bool`` elements, where ``True`` means a biased site.\n\n Returns\n -------\n x : index position of the biased list for the optimal trim."}
{"_id": "q_9445", "text": "Assess the optimal trim range for a given FastQC data file.\n\n This function will parse a single FastQC data file, namely the\n *'Per base sequence content'* category. It will retrieve the A/T and G/C\n content for each nucleotide position in the reads, and check whether the\n G/C and A/T proportions are between 80% and 120%. If they are, that\n nucleotide position is marked as biased for future removal.\n\n Parameters\n ----------\n data_file: str\n Path to FastQC data file.\n\n Returns\n -------\n trim_nt: list\n List containing the range with the best trimming positions for the\n corresponding FastQ file. The first element is the 5' end trim index\n and the second element is the 3' end trim index."}
{"_id": "q_9446", "text": "Get the optimal read trim range from data files of paired FastQ reads.\n\n Given the FastQC data report files for paired-end FastQ reads, this\n function will assess the optimal trim range for the 3' and 5' ends of\n the paired-end reads. This assessment will be based on the *'Per sequence\n GC content'*.\n\n Parameters\n ----------\n p1_data: str\n Path to FastQC data report file from pair 1\n p2_data: str\n Path to FastQC data report file from pair 2\n\n Returns\n -------\n optimal_5trim: int\n Optimal trim index for the 5' end of the reads\n optima_3trim: int\n Optimal trim index for the 3' end of the reads\n\n See Also\n --------\n trim_range"}
{"_id": "q_9447", "text": "Parse a bowtie log file.\n\n This is a bowtie log parsing method that populates the\n :py:attr:`self.n_reads, self.align_0x, self.align_1x, self.align_mt1x and self.overall_rate` attributes with\n data from the log file.\n\n Disclamer: THIS METHOD IS HORRIBLE BECAUSE THE BOWTIE LOG IS HORRIBLE.\n\n The insertion of data on the attribytes is done by the\n :py:meth:`set_attribute method.\n\n Parameters\n ----------\n bowtie_log : str\n Path to the boetie log file."}
{"_id": "q_9448", "text": "Parses the process string and returns the process name and its\n directives\n\n Process strings my contain directive information with the following\n syntax::\n\n proc_name={'directive':'val'}\n\n This method parses this string and returns the process name as a\n string and the directives information as a dictionary.\n\n Parameters\n ----------\n name_str : str\n Raw string with process name and, potentially, directive\n information\n\n Returns\n -------\n str\n Process name\n dict or None\n Process directives"}
{"_id": "q_9449", "text": "Searches the process tree backwards in search of a provided process\n\n The search takes into consideration the provided parent lanes and\n searches only those\n\n Parameters\n ----------\n template : str\n Name of the process template attribute being searched\n parent_lanes : list\n List of integers with the parent lanes to be searched\n\n Returns\n -------\n bool\n Returns True when the template is found. Otherwise returns False."}
{"_id": "q_9450", "text": "Adds the footer template to the master template string"}
{"_id": "q_9451", "text": "Sets the secondary channels for the pipeline\n\n This will iterate over the\n :py:attr:`NextflowGenerator.secondary_channels` dictionary that is\n populated when executing\n :func:`~NextflowGenerator._update_secondary_channels` method."}
{"_id": "q_9452", "text": "Compiles all status channels for the status compiler process"}
{"_id": "q_9453", "text": "Returns the nextflow resources string from a dictionary object\n\n If the dictionary has at least on of the resource directives, these\n will be compiled for each process in the dictionary and returned\n as a string read for injection in the nextflow config file template.\n\n This dictionary should be::\n\n dict = {\"processA\": {\"cpus\": 1, \"memory\": \"4GB\"},\n \"processB\": {\"cpus\": 2}}\n\n Parameters\n ----------\n res_dict : dict\n Dictionary with the resources for processes.\n pid : int\n Unique identified of the process\n\n Returns\n -------\n str\n nextflow config string"}
{"_id": "q_9454", "text": "Returns the nextflow containers string from a dictionary object\n\n If the dictionary has at least on of the container directives, these\n will be compiled for each process in the dictionary and returned\n as a string read for injection in the nextflow config file template.\n\n This dictionary should be::\n\n dict = {\"processA\": {\"container\": \"asd\", \"version\": \"1.0.0\"},\n \"processB\": {\"container\": \"dsd\"}}\n\n Parameters\n ----------\n cont_dict : dict\n Dictionary with the containers for processes.\n pid : int\n Unique identified of the process\n\n Returns\n -------\n str\n nextflow config string"}
{"_id": "q_9455", "text": "Returns the nextflow params string from a dictionary object.\n\n The params dict should be a set of key:value pairs with the\n parameter name, and the default parameter value::\n\n self.params = {\n \"genomeSize\": 2.1,\n \"minCoverage\": 15\n }\n\n The values are then added to the string as they are. For instance,\n a ``2.1`` float will appear as ``param = 2.1`` and a\n ``\"'teste'\"`` string will appear as ``param = 'teste'`` (note the\n quotes).\n\n Returns\n -------\n str\n Nextflow params configuration string"}
{"_id": "q_9456", "text": "Returns the nextflow manifest config string to include in the\n config file from the information on the pipeline.\n\n Returns\n -------\n str\n Nextflow manifest configuration string"}
{"_id": "q_9457", "text": "Writes dag to output file\n\n Parameters\n ----------\n dict_viz: dict\n Tree like dictionary that is used to export tree data of processes\n to html file and here for the dotfile .treeDag.json"}
{"_id": "q_9458", "text": "Wrapper method that writes all configuration files to the pipeline\n directory"}
{"_id": "q_9459", "text": "Export pipeline params as a JSON to stdout\n\n This run mode iterates over the pipeline processes and exports the\n params dictionary of each component as a JSON to stdout."}
{"_id": "q_9460", "text": "Export pipeline directives as a JSON to stdout"}
{"_id": "q_9461", "text": "Export all dockerhub tags associated with each component given by\n the -t flag."}
{"_id": "q_9462", "text": "Main pipeline builder\n\n This method is responsible for building the\n :py:attr:`NextflowGenerator.template` attribute that will contain\n the nextflow code of the pipeline.\n\n First it builds the header, then sets the main channels, the\n secondary inputs, secondary channels and finally the\n status channels. When the pipeline is built, it writes the code\n to a nextflow file."}
{"_id": "q_9463", "text": "Returns a kmer list based on the provided kmer option and max read len.\n\n Parameters\n ----------\n kmer_opt : str\n The k-mer option. Can be either ``'auto'``, ``'default'`` or a\n sequence of space separated integers, ``'23, 45, 67'``.\n max_read_len : int\n The maximum read length of the current sample.\n\n Returns\n -------\n kmers : list\n List of k-mer values that will be provided to Spades."}
{"_id": "q_9464", "text": "Returns a hash of the reports JSON file"}
{"_id": "q_9465", "text": "Parses the nextflow trace file and retrieves the path of report JSON\n files that have not been sent to the service yet."}
{"_id": "q_9466", "text": "Parses nextflow log file and updates the run status"}
{"_id": "q_9467", "text": "Sends a PUT request with the report JSON files currently in the\n report_queue attribute.\n\n Parameters\n ----------\n report_id : str\n Hash of the report JSON as retrieved from :func:`~_get_report_hash`"}
{"_id": "q_9468", "text": "Sends a delete request for the report JSON hash\n\n Parameters\n ----------\n report_id : str\n Hash of the report JSON as retrieved from :func:`~_get_report_hash`"}
{"_id": "q_9469", "text": "Generates an adapter file for FastQC from a fasta file.\n\n The provided adapters file is assumed to be a simple fasta file with the\n adapter's name as header and the corresponding sequence::\n\n >TruSeq_Universal_Adapter\n AATGATACGGCGACCACCGAGATCTACACTCTTTCCCTACACGACGCTCTTCCGATCT\n >TruSeq_Adapter_Index 1\n GATCGGAAGAGCACACGTCTGAACTCCAGTCACATCACGATCTCGTATGCCGTCTTCTGCTTG\n\n Parameters\n ----------\n adapter_fasta : str\n Path to Fasta file with adapter sequences.\n\n Returns\n -------\n adapter_out : str or None\n The path to the reformatted adapter file. Returns ``None`` if the\n adapters file does not exist or the path is incorrect."}
{"_id": "q_9470", "text": "Send dictionary to output json file\n This function sends master_dict dictionary to a json file if master_dict is\n populated with entries, otherwise it won't create the file\n\n Parameters\n ----------\n master_dict: dict\n dictionary that stores all entries for a specific query sequence\n in multi-fasta given to mash dist as input against patlas database\n last_seq: str\n string that stores the last sequence that was parsed before writing to\n file and therefore after the change of query sequence between different\n rows on the input file\n mash_output: str\n the name/path of input file to main function, i.e., the name/path of\n the mash dist output txt file.\n sample_id: str\n The name of the sample being parsed to the .report.json file\n\n Returns\n -------"}
{"_id": "q_9471", "text": "Main function that allows to dump a mash dist txt file to a json file\n\n Parameters\n ----------\n mash_output: str\n A string with the input file.\n hash_cutoff: str\n the percentage cutoff for the percentage of shared hashes between query\n and plasmid in database that is allowed for the plasmid to be reported\n to the results outputs\n sample_id: str\n The name of the sample."}
{"_id": "q_9472", "text": "Writes versions JSON for a template file\n\n This method creates the JSON file ``.versions`` based on the metadata\n and specific functions that are present in a given template script.\n\n It starts by fetching the template metadata, which can be specified\n via the ``__version__``, ``__template__`` and ``__build__``\n attributes. If all of these attributes exist, it starts to populate\n a JSON/dict array (Note that the absence of any one of them will\n prevent the version from being written).\n\n Then, it will search the\n template scope for functions that start with the substring\n ``__set_version`` (For example ``def __set_version_fastqc()``).\n These functions should gather the version of\n an arbitrary program and return a JSON/dict object with the following\n information::\n\n {\n \"program\": <program_name>,\n \"version\": <version>,\n \"build\": <build>\n }\n\n This JSON/dict object is then written in the ``.versions`` file."}
{"_id": "q_9473", "text": "This function enables users to add color to printed output. It also allows\n passing end_char to print, so that several strings from different print\n calls can appear on the same line.\n\n Parameters\n ----------\n color_string: str\n The color code to pass to the function, which enables color change as\n well as background color change.\n msg: str\n The actual text to be printed\n end_char: str\n The character with which each print should finish. By default it will be\n \"\\n\"."}
{"_id": "q_9474", "text": "This function handles the dictionary of attributes of each Process class\n to print to stdout lists of all the components or the components which the\n user specifies in the -t flag.\n\n Parameters\n ----------\n procs_dict: dict\n A dictionary with the class attributes for all the components (or\n components that are used by the -t flag), which allows creating\n both the short_list and detailed_list. Dictionary example:\n {\"abyss\": {'input_type': 'fastq', 'output_type': 'fasta',\n 'dependencies': [], 'directives': {'abyss': {'cpus': 4,\n 'memory': '{ 5.GB * task.attempt }', 'container': 'flowcraft/abyss',\n 'version': '2.1.1', 'scratch': 'true'}}}"}
{"_id": "q_9475", "text": "Function that collects all processes available and stores a dictionary of\n the required arguments of each process class to be passed to\n procs_dict_parser\n\n Parameters\n ----------\n process_map: dict\n The dictionary with the Processes currently available in flowcraft\n and their corresponding classes as values\n args: argparse.Namespace\n The arguments passed through argparser that will be accessed to check\n the type of list to be printed\n pipeline_string: str\n the pipeline string"}
{"_id": "q_9476", "text": "Guesses the compression of an input file.\n\n This function guesses the compression of a given file by checking for\n a binary signature at the beginning of the file. These signatures are\n stored in the :py:data:`MAGIC_DICT` dictionary. The supported compression\n formats are gzip, bzip2 and zip. If none of the signatures in this\n dictionary are found at the beginning of the file, it returns ``None``.\n\n Parameters\n ----------\n file_path : str\n Path to input file.\n magic_dict : dict, optional\n Dictionary containing the signatures of the compression types. The\n key should be the binary signature and the value should be the\n compression format. If left ``None``, it falls back to\n :py:data:`MAGIC_DICT`.\n\n Returns\n -------\n file_type : str or None\n If a compression type is detected, returns a string with the format.\n If not, returns ``None``."}
{"_id": "q_9477", "text": "Get the Unicode code range for a given string of characters.\n\n The encoding is determined from the result of the :py:func:`ord` built-in.\n\n Parameters\n ----------\n qual_str : str\n Arbitrary string.\n\n Returns\n -------\n x : tuple\n (Minimum Unicode code, Maximum Unicode code)."}
{"_id": "q_9478", "text": "Returns the valid encodings for a given encoding range.\n\n The encoding ranges are stored in the :py:data:`RANGES` dictionary, with\n the encoding name as a string and a list as a value containing the\n phred score and a tuple with the encoding range. For a given encoding\n range provided via the two first arguments, this function will return\n all possible encodings and phred scores.\n\n Parameters\n ----------\n rmin : int\n Minimum Unicode code in range.\n rmax : int\n Maximum Unicode code in range.\n\n Returns\n -------\n valid_encodings : list\n List of all possible encodings for the provided range.\n valid_phred : list\n List of all possible phred scores."}
{"_id": "q_9479", "text": "Parses a file with coverage information into objects.\n\n This function parses a TSV file containing coverage results for\n all contigs in a given assembly and will build an ``OrderedDict``\n with the information about their coverage and length. The length\n information is actually gathered from the contig header using a\n regular expression that assumes the usual header produced by Spades::\n\n contig_len = int(re.search(\"length_(.+?)_\", line).group(1))\n\n Parameters\n ----------\n coverage_file : str\n Path to TSV file containing the coverage results.\n\n Returns\n -------\n coverage_dict : OrderedDict\n Contains the coverage and length information for each contig.\n total_size : int\n Total size of the assembly in base pairs.\n total_cov : int\n Sum of coverage values across all contigs."}
{"_id": "q_9480", "text": "Generates a filtered assembly file.\n\n This function generates a filtered assembly file based on an original\n assembly and a minimum coverage threshold.\n\n Parameters\n ----------\n assembly_file : str\n Path to original assembly file.\n minimum_coverage : int or float\n Minimum coverage required for a contig to pass the filter.\n coverage_info : OrderedDict or dict\n Dictionary containing the coverage information for each contig.\n output_file : str\n Path where the filtered assembly file will be generated."}
{"_id": "q_9481", "text": "Main executor of the process_assembly_mapping template.\n\n Parameters\n ----------\n sample_id : str\n Sample Identification string.\n assembly_file : str\n Path to assembly file in Fasta format.\n coverage_file : str\n Path to TSV file with coverage information for each contig.\n coverage_bp_file : str\n Path to TSV file with coverage information for each base.\n bam_file : str\n Path to BAM file.\n opts : list\n List of options for processing assembly mapping.\n gsize : int\n Expected genome size"}
{"_id": "q_9482", "text": "Converts a CamelCase string into a snake_case one\n\n Parameters\n ----------\n name : str\n An arbitrary string that may be CamelCase\n\n Returns\n -------\n str\n The input string converted into snake_case"}
{"_id": "q_9483", "text": "Collects Process classes and returns a dict mapping templates to classes\n\n This function crawls through the components module and retrieves all\n classes that inherit from the Process class. Then, it converts the name\n of the classes (which should be CamelCase) to snake_case, which is used\n as the template name.\n\n Returns\n -------\n dict\n Dictionary mapping the template name (snake_case) to the corresponding\n process class."}
{"_id": "q_9484", "text": "Main executor of the process_newick template.\n\n Parameters\n ----------\n newick : str\n path to the newick file."}
{"_id": "q_9485", "text": "Find data points on the convex hull of a supplied data set\n\n Args:\n sample: data points as column vectors n x d\n n - number of samples\n d - data dimension (should be two)\n\n Returns:\n a k x d matrix containing the convex hull data points"}
{"_id": "q_9486", "text": "Return data points that are most similar to basis vectors W"}
{"_id": "q_9487", "text": "Creates a gaussian kernel following Foote's paper."}
{"_id": "q_9488", "text": "Computes the self-similarity matrix of X."}
{"_id": "q_9489", "text": "Gaussian filter along the first axis of the feature matrix X."}
{"_id": "q_9490", "text": "Computes the novelty curve from the structural features."}
{"_id": "q_9491", "text": "Time-delay embedding with m dimensions and tau delays."}
{"_id": "q_9492", "text": "Formats the plot with the correct axis labels, title, ticks, and\n so on."}
{"_id": "q_9493", "text": "Plots all the boundaries.\n\n Parameters\n ----------\n all_boundaries: list\n A list of np.arrays containing the times of the boundaries, one array\n for each algorithm.\n est_file: str\n Path to the estimated file (JSON file)\n algo_ids : list\n List of algorithm ids to read boundaries from.\n If None, all algorithm ids are read.\n title : str\n Title of the plot. If None, the name of the file is printed instead."}
{"_id": "q_9494", "text": "Plots all the labels.\n\n Parameters\n ----------\n all_labels: list\n A list of np.arrays containing the labels of the boundaries, one array\n for each algorithm.\n gt_times: np.array\n Array with the ground truth boundaries.\n est_file: str\n Path to the estimated file (JSON file)\n algo_ids : list\n List of algorithm ids to read boundaries from.\n If None, all algorithm ids are read.\n title : str\n Title of the plot. If None, the name of the file is printed instead."}
{"_id": "q_9495", "text": "Plots the results of one track, with ground truth if it exists."}
{"_id": "q_9496", "text": "From a list of feature segments, return a list of 2D-Fourier Magnitude\n Coefs using the maximum segment size as main size and zero pad the rest.\n\n Parameters\n ----------\n feat_segments: list\n List of segments, one for each boundary interval.\n offset: int >= 0\n Number of frames to ignore from beginning and end of each segment.\n\n Returns\n -------\n fmcs: np.ndarray\n Tensor containing the 2D-FMC matrices, one matrix per segment."}
{"_id": "q_9497", "text": "Main function to compute the segment similarity of file file_struct.\n\n Parameters\n ----------\n F: np.ndarray\n Matrix containing one feature vector per row.\n bound_idxs: np.ndarray\n Array with the indices of the segment boundaries.\n dirichlet: boolean\n Whether to use the dirichlet estimator of the number of unique labels.\n xmeans: boolean\n Whether to use the xmeans estimator of the number of unique labels.\n k: int > 0\n If the other two predictors are `False`, use fixed number of labels.\n offset: int >= 0\n Number of frames to ignore from beginning and end of each segment.\n\n Returns\n -------\n labels_est: np.ndarray\n Estimated labels, containing integer identifiers."}
{"_id": "q_9498", "text": "Fit the OLDA model\n\n Parameters\n ----------\n X : array-like, shape [n_samples]\n Training data: each example is an n_features-by-* data array\n\n Y : array-like, shape [n_samples]\n Training labels: each label is an array of change-points\n (eg, a list of segment boundaries)\n\n Returns\n -------\n self : object"}
{"_id": "q_9499", "text": "Partial-fit the OLDA model\n\n Parameters\n ----------\n X : array-like, shape [n_samples]\n Training data: each example is an n_features-by-* data array\n\n Y : array-like, shape [n_samples]\n Training labels: each label is an array of change-points\n (eg, a list of segment boundaries)\n\n Returns\n -------\n self : object"}
{"_id": "q_9500", "text": "Reads the boundary times and the labels.\n\n Parameters\n ----------\n audio_path : str\n Path to the audio file\n\n Returns\n -------\n ref_times : list\n List of boundary times\n ref_labels : list\n List of labels\n\n Raises\n ------\n IOError: if `audio_path` doesn't exist."}
{"_id": "q_9501", "text": "Finds the correct estimation from all the estimations contained in a\n JAMS file given the specified arguments.\n\n Parameters\n ----------\n jam : jams.JAMS\n JAMS object.\n boundaries_id : str\n Identifier of the algorithm used to compute the boundaries.\n labels_id : str\n Identifier of the algorithm used to compute the labels.\n params : dict\n Additional search parameters. E.g. {\"feature\" : \"pcp\"}.\n\n Returns\n -------\n ann : jams.Annotation\n Found estimation.\n `None` if it couldn't be found."}
{"_id": "q_9502", "text": "Saves the segment estimations in a JAMS file.\n\n Parameters\n ----------\n file_struct : FileStruct\n Object with the different file paths of the current file.\n times : np.array or list\n Estimated boundary times.\n If `list`, estimated hierarchical boundaries.\n labels : np.array(N, 2)\n Estimated labels (None in case we are only storing boundary\n evaluations).\n boundaries_id : str\n Boundary algorithm identifier.\n labels_id : str\n Labels algorithm identifier.\n params : dict\n Dictionary with additional parameters for both algorithms."}
{"_id": "q_9503", "text": "Gets all the possible boundary algorithms in MSAF.\n\n Returns\n -------\n algo_ids : list\n List of all the IDs of boundary algorithms (strings)."}
{"_id": "q_9504", "text": "Gets the files of the given dataset."}
{"_id": "q_9505", "text": "Reads hierarchical references from a jams file.\n\n Parameters\n ----------\n jams_file : str\n Path to the jams file.\n annotation_id : int > 0\n Identifier of the annotator to read from.\n exclude_levels: list\n List of levels to exclude. Empty list to include all levels.\n\n Returns\n -------\n hier_bounds : list\n List of the segment boundary times in seconds for each level.\n hier_labels : list\n List of the segment labels for each level.\n hier_levels : list\n List of strings for the level identifiers."}
{"_id": "q_9506", "text": "Reads the duration of a given features file.\n\n Parameters\n ----------\n features_file: str\n Path to the JSON file containing the features.\n\n Returns\n -------\n dur: float\n Duration of the analyzed file."}
{"_id": "q_9507", "text": "Gets the desired dataset file."}
{"_id": "q_9508", "text": "Load a ground-truth segmentation, and align times to the nearest\n detected beats.\n\n Arguments:\n beat_times -- array\n song -- path to the audio file\n\n Returns:\n segment_beats -- array\n beat-aligned segment boundaries\n\n segment_times -- array\n true segment times\n\n segment_labels -- array\n list of segment labels"}
{"_id": "q_9509", "text": "Reads the annotated beats if available.\n\n Returns\n -------\n times: np.array\n Times of annotated beats in seconds.\n frames: np.array\n Frame indices of annotated beats."}
{"_id": "q_9510", "text": "Returns the parameter names for these features, avoiding\n the global parameters."}
{"_id": "q_9511", "text": "Computes the framesync times based on the framesync features."}
{"_id": "q_9512", "text": "This getter returns the frame times, for the corresponding type of\n features."}
{"_id": "q_9513", "text": "This getter will compute the actual features if they haven't\n been computed yet.\n\n Returns\n -------\n features: np.array\n The actual features. Each row corresponds to a feature vector."}
{"_id": "q_9514", "text": "Selects the features from the given parameters.\n\n Parameters\n ----------\n features_id: str\n The identifier of the features (it must be a key inside the\n `features_registry`)\n file_struct: msaf.io.FileStruct\n The file struct containing the files to extract the features from\n annot_beats: boolean\n Whether to use annotated (`True`) or estimated (`False`) beats\n framesync: boolean\n Whether to use framesync (`True`) or beatsync (`False`) features\n\n Returns\n -------\n features: obj\n The actual features object that inherits from `msaf.Features`"}
{"_id": "q_9515", "text": "Print all the results.\n\n Parameters\n ----------\n results: pd.DataFrame\n Dataframe with all the results"}
{"_id": "q_9516", "text": "Computes the information gain of the est_file from the annotated\n intervals and the estimated intervals."}
{"_id": "q_9517", "text": "Computes the features for the selected dataset or file."}
{"_id": "q_9518", "text": "Return the average log-likelihood of data under a standard normal"}
{"_id": "q_9519", "text": "Normalizes features such that each vector is between floor and 1."}
{"_id": "q_9520", "text": "Normalizes the given matrix of features.\n\n Parameters\n ----------\n X: np.array\n Each row represents a feature vector.\n norm_type: {\"min_max\", \"log\", np.inf, -np.inf, 0, float > 0, None}\n - `\"min_max\"`: Min/max scaling is performed\n - `\"log\"`: Logarithmic scaling is performed\n - `np.inf`: Maximum absolute value\n - `-np.inf`: Minimum absolute value\n - `0`: Number of non-zeros\n - float: Corresponding l_p norm.\n - None : No normalization is performed\n\n Returns\n -------\n norm_X: np.array\n Normalized `X` according to the input parameters."}
{"_id": "q_9521", "text": "Gets the time frames and puts them in a numpy array."}
{"_id": "q_9522", "text": "Removes empty segments if needed."}
{"_id": "q_9523", "text": "Synchronizes the labels from the old_bound_idxs to the new_bound_idxs.\n\n Parameters\n ----------\n new_bound_idxs: np.array\n New indices to synchronize with.\n old_bound_idxs: np.array\n Old indices, same shape as labels + 1.\n old_labels: np.array\n Labels associated to the old_bound_idxs.\n N: int\n Total number of frames.\n\n Returns\n -------\n new_labels: np.array\n New labels, synchronized to the new boundary indices."}
{"_id": "q_9524", "text": "Computes distances from a specific data point to all other samples"}
{"_id": "q_9525", "text": "Estimates the K using K-means and BIC, by sweeping various K and\n choosing the optimal BIC."}
{"_id": "q_9526", "text": "Returns the data with a specific label_index, using the previously\n learned labels."}
{"_id": "q_9527", "text": "Runs k-means and returns the labels assigned to the data."}
{"_id": "q_9528", "text": "Computes the Bayesian Information Criterion."}
{"_id": "q_9529", "text": "Magnitude of a complex matrix."}
{"_id": "q_9530", "text": "Extracts the boundaries from a json file and puts them into\n an np array."}
{"_id": "q_9531", "text": "Extracts the boundaries from a bounds json file and puts them into\n an np array."}
{"_id": "q_9532", "text": "Extracts the labels from a json file and puts them into\n an np array."}
{"_id": "q_9533", "text": "Extracts the beats from the beats_json_file and puts them into\n an np array."}
{"_id": "q_9534", "text": "Computes the labels using the bounds."}
{"_id": "q_9535", "text": "Filters the activation matrix G, and returns a flattened copy."}
{"_id": "q_9536", "text": "Obtains the boundaries module given a boundary algorithm identifier.\n\n Parameters\n ----------\n boundaries_id: str\n Boundary algorithm identifier (e.g., foote, sf).\n\n Returns\n -------\n module: object\n Object containing the selected boundary module.\n None for \"ground truth\"."}
{"_id": "q_9537", "text": "Obtains the label module given a label algorithm identifier.\n\n Parameters\n ----------\n labels_id: str\n Label algorithm identifier (e.g., fmc2d, cnmf).\n\n Returns\n -------\n module: object\n Object containing the selected label module.\n None for not computing the labeling part of music segmentation."}
{"_id": "q_9538", "text": "Runs hierarchical algorithms with the specified identifiers on the\n audio_file. See run_algorithm for more information."}
{"_id": "q_9539", "text": "Runs the flat algorithms with the specified identifiers on the\n audio_file. See run_algorithm for more information."}
{"_id": "q_9540", "text": "Main process to segment a file or a collection of files.\n\n Parameters\n ----------\n in_path: str\n Input path. If a directory, MSAF will function in collection mode.\n If audio file, MSAF will be in single file mode.\n annot_beats: bool\n Whether to use annotated beats or not.\n feature: str\n String representing the feature to be used (e.g. pcp, mfcc, tonnetz)\n framesync: str\n Whether to use framesync features or not (default: False -> beatsync)\n boundaries_id: str\n Identifier of the boundaries algorithm (use \"gt\" for groundtruth)\n labels_id: str\n Identifier of the labels algorithm (use None to not compute labels)\n hier : bool\n Whether to compute a hierarchical or flat segmentation.\n sonify_bounds: bool\n Whether to write an output audio file with the annotated boundaries\n or not (only available in Single File Mode).\n plot: bool\n Whether to plot the boundaries and labels against the ground truth.\n n_jobs: int\n Number of processes to run in parallel. Only available in collection\n mode.\n annotator_id: int\n Annotator identifier in the ground truth.\n config: dict\n Dictionary containing custom configuration parameters for the\n algorithms. If None, the default parameters are used.\n out_bounds: str\n Path to the output for the sonified boundaries (only in single file\n mode, when sonify_bounds is True).\n out_sr : int\n Sampling rate for the sonified bounds.\n\n Returns\n -------\n results : list\n List containing tuples of (est_times, est_labels) of estimated\n boundary times and estimated labels.\n If labels_id is None, est_labels will be a list of -1."}
{"_id": "q_9541", "text": "Alternating least squares step; updates W under the convexity\n constraint"}
{"_id": "q_9542", "text": "Initializes coroutine essentially priming it to the yield statement.\n Used as a decorator over functions that generate coroutines.\n\n .. code-block:: python\n\n # Basic coroutine producer/consumer pattern\n from translate import coroutine\n\n @coroutine\n def coroutine_foo(bar):\n try:\n while True:\n baz = (yield)\n bar.send(baz)\n\n except GeneratorExit:\n bar.close()\n\n :param func: Unprimed Generator\n :type func: Function\n\n :return: Initialized Coroutine\n :rtype: Function"}
{"_id": "q_9543", "text": "Task Setter Coroutine\n\n End point destination coroutine of a purely consumer type.\n Delegates Text IO to the `write_stream` function.\n\n :param translation_function: Translator\n :type translation_function: Function\n\n :param translit: Transliteration Switch\n :type translit: Boolean"}
{"_id": "q_9544", "text": "Consumes text streams and spools them together for more I/O-efficient\n processing.\n\n :param iterable: Sends text stream for further processing\n :type iterable: Coroutine\n\n :param maxlen: Maximum query string size\n :type maxlen: Integer"}
{"_id": "q_9545", "text": "Decorates a function returning the url of translation API.\n Creates and maintains HTTP connection state\n\n Returns a dict response object from the server containing the translated\n text and metadata of the request body\n\n :param interface: Callable Request Interface\n :type interface: Function"}
{"_id": "q_9546", "text": "Returns the url encoded string that will be pushed to the translation\n server for parsing.\n\n List of acceptable language codes for source and target languages\n can be found as a JSON file in the etc directory.\n\n Some source languages are limited in scope of the possible target languages\n that are available.\n\n .. code-block:: python\n\n >>> from translate import translator\n >>> translator('en', 'zh-TW', 'Hello World!')\n '\u4f60\u597d\u4e16\u754c\uff01'\n\n :param source: Language code for translation source\n :type source: String\n\n :param target: Language code that source will be translate into\n :type target: String\n\n :param phrase: Text body string that will be url encoded and translated\n :type phrase: String\n\n :return: Request Interface\n :rtype: Dictionary"}
{"_id": "q_9547", "text": "Opens up file located under the etc directory containing language\n codes and prints them out.\n\n :param file: Path to location of json file\n :type file: str\n\n :return: language codes\n :rtype: dict"}
{"_id": "q_9548", "text": "Generates a formatted table of language codes"}
{"_id": "q_9549", "text": "Save a Network's data to a Pandas HDFStore.\n\n Parameters\n ----------\n network : pandana.Network\n filename : str\n rm_nodes : array_like\n A list, array, Index, or Series of node IDs that should *not*\n be saved as part of the Network."}
{"_id": "q_9550", "text": "Characterize urban space with a variable that is related to nodes in\n the network.\n\n Parameters\n ----------\n node_ids : Pandas Series, int\n A series of node_ids which are usually computed using\n get_node_ids on this object.\n variable : Pandas Series, numeric, optional\n A series which represents some variable defined in urban space.\n It could be the location of buildings, or the income of all\n households - just about anything can be aggregated using the\n network queries provided here and this provides the api to set\n the variable at its disaggregate locations. Note that node_id\n and variable should have the same index (although the index is\n not actually used). If variable is not set, then it is assumed\n that the variable is all \"ones\" at the location specified by\n node_ids. This could be, for instance, the location of all\n coffee shops which don't really have a variable to aggregate. The\n variable is connected to the closest node in the Pandana network\n which assumes no impedance between the location of the variable\n and the location of the closest network node.\n name : string, optional\n Name the variable. This is optional in the sense that if you don't\n specify it, the default name will be used. Since the same\n default name is used by aggregate on this object, you can\n alternate between characterize and aggregate calls without\n setting names.\n\n Returns\n -------\n Nothing"}
{"_id": "q_9551", "text": "Aggregate information for every source node in the network - this is\n really the main purpose of this library. This allows you to touch\n the data specified by calling set and perform some aggregation on it\n within the specified distance. For instance, summing the population\n within 1000 meters.\n\n Parameters\n ----------\n distance : float\n The maximum distance to aggregate data within. 'distance' can\n represent any impedance unit that you have set as your edge\n weight. This will usually be a distance unit in meters however\n if you have customized the impedance this could be in other\n units such as utility or time etc.\n type : string\n The type of aggregation, can be one of \"ave\", \"sum\", \"std\",\n \"count\", and now \"min\", \"25pct\", \"median\", \"75pct\", and \"max\" will\n compute the associated quantiles. (Quantiles are computed by\n sorting so might be slower than the others.)\n decay : string\n The type of decay to apply, which makes things that are further\n away count less in the aggregation - must be one of \"linear\",\n \"exponential\" or \"flat\" (which means no decay). Linear is the\n fastest computation to perform. When performing an \"ave\",\n the decay is typically \"flat\"\n imp_name : string, optional\n The impedance name to use for the aggregation on this network.\n Must be one of the impedance names passed in the constructor of\n this object. If not specified, there must be only one impedance\n passed in the constructor, which will be used.\n name : string, optional\n The variable to aggregate. This variable will have been created\n and named by a call to set. If not specified, the default\n variable name will be used so that the most recent call to set\n without giving a name will be the variable used.\n\n Returns\n -------\n agg : Pandas Series\n Returns a Pandas Series for every origin node in the network,\n with the index which is the same as the node_ids passed to the\n init method and the values are the aggregations for each source\n node in the network."}
{"_id": "q_9552", "text": "Assign node_ids to data specified by x_col and y_col\n\n Parameters\n ----------\n x_col : Pandas series (float)\n A Pandas Series where values specify the x (e.g. longitude)\n location of dataset.\n y_col : Pandas series (float)\n A Pandas Series where values specify the y (e.g. latitude)\n location of dataset. x_col and y_col should use the same index.\n mapping_distance : float, optional\n The maximum distance that will be considered a match between the\n x, y data and the nearest node in the network. This will usually\n be a distance unit in meters however if you have customized the\n impedance this could be in other units such as utility or time\n etc. If not specified, every x, y coordinate will be mapped to\n the nearest node.\n\n Returns\n -------\n node_ids : Pandas series (int)\n Returns a Pandas Series of node_ids for each x, y in the\n input data. The index is the same as the indexes of the\n x, y input data, and the values are the mapped node_ids.\n If mapping distance is not passed and if there are no nans in the\n x, y data, this will be the same length as the x, y data.\n If the mapping is imperfect, this function returns all the\n input x, y's that were successfully mapped to node_ids."}
{"_id": "q_9553", "text": "Plot an array of data on a map using matplotlib and Basemap,\n automatically matching the data to the Pandana network node positions.\n\n Keyword arguments are passed to the plotting routine.\n\n Parameters\n ----------\n data : pandas.Series\n Numeric data with the same length and index as the nodes\n in the network.\n bbox : tuple, optional\n (lat_min, lng_min, lat_max, lng_max)\n plot_type : {'hexbin', 'scatter'}, optional\n fig_kwargs : dict, optional\n Keyword arguments that will be passed to\n matplotlib.pyplot.subplots. Use this to specify things like\n figure size or background color.\n bmap_kwargs : dict, optional\n Keyword arguments that will be passed to the Basemap constructor.\n This can be used to specify a projection or coastline resolution.\n plot_kwargs : dict, optional\n Keyword arguments that will be passed to the matplotlib plotting\n command used. Use this to control plot styles and color maps used.\n cbar_kwargs : dict, optional\n Keyword arguments passed to the Basemap.colorbar method.\n Use this to control color bar location and label.\n\n Returns\n -------\n bmap : Basemap\n fig : matplotlib.Figure\n ax : matplotlib.Axes"}
{"_id": "q_9554", "text": "Make a request to OSM and return the parsed JSON.\n\n Parameters\n ----------\n query : str\n A string in the Overpass QL format.\n\n Returns\n -------\n data : dict"}
{"_id": "q_9555", "text": "Build the string for a node-based OSM query.\n\n Parameters\n ----------\n lat_min, lng_min, lat_max, lng_max : float\n tags : str or list of str, optional\n Node tags that will be used to filter the search.\n See http://wiki.openstreetmap.org/wiki/Overpass_API/Language_Guide\n for information about OSM Overpass queries\n and http://wiki.openstreetmap.org/wiki/Map_Features\n for a list of tags.\n\n Returns\n -------\n query : str"}
{"_id": "q_9556", "text": "Search for OSM nodes within a bounding box that match given tags.\n\n Parameters\n ----------\n lat_min, lng_min, lat_max, lng_max : float\n tags : str or list of str, optional\n Node tags that will be used to filter the search.\n See http://wiki.openstreetmap.org/wiki/Overpass_API/Language_Guide\n for information about OSM Overpass queries\n and http://wiki.openstreetmap.org/wiki/Map_Features\n for a list of tags.\n\n Returns\n -------\n nodes : pandas.DataFrame\n Will have 'lat' and 'lon' columns, plus other columns for the\n tags associated with the node (these will vary based on the query).\n Index will be the OSM node IDs."}
{"_id": "q_9557", "text": "Run the Cell code using the IPython globals and locals\n\n Args:\n cell (str): Python code to be executed"}
{"_id": "q_9558", "text": "Return a new dict with specified keys excluded from the original dict\n\n Args:\n d (dict): original dict\n exclude (list): The keys that are excluded"}
{"_id": "q_9559", "text": "Redirect the stdout\n\n Args:\n new_stdout (io.StringIO): New stdout to use instead"}
{"_id": "q_9560", "text": "Returns private spend key. None if wallet is view-only.\n\n :rtype: str or None"}
{"_id": "q_9561", "text": "Sends a batch of transfers from the default account. Returns a list of resulting\n transactions.\n\n :param destinations: a list of destination and amount pairs: [(address, amount), ...]\n :param priority: transaction priority, implies fee. The priority can be a number\n from 1 to 4 (unimportant, normal, elevated, priority) or a constant\n from `monero.prio`.\n :param payment_id: ID for the payment (must be None if\n :class:`IntegratedAddress <monero.address.IntegratedAddress>`\n is used as a destination)\n :param unlock_time: the extra unlock delay\n :param relay: if `True`, the wallet will relay the transaction(s) to the network\n immediately; when `False`, it will only return the transaction(s)\n so they might be broadcasted later\n :rtype: list of :class:`Transaction <monero.transaction.Transaction>`"}
{"_id": "q_9562", "text": "Returns specified balance.\n\n :param unlocked: if `True`, return the unlocked balance, otherwise return total balance\n :rtype: Decimal"}
{"_id": "q_9563", "text": "Sends a transfer. Returns a list of resulting transactions.\n\n :param address: destination :class:`Address <monero.address.Address>` or subtype\n :param amount: amount to send\n :param priority: transaction priority, implies fee. The priority can be a number\n from 1 to 4 (unimportant, normal, elevated, priority) or a constant\n from `monero.prio`.\n :param payment_id: ID for the payment (must be None if\n :class:`IntegratedAddress <monero.address.IntegratedAddress>`\n is used as the destination)\n :param unlock_time: the extra unlock delay\n :param relay: if `True`, the wallet will relay the transaction(s) to the network\n immediately; when `False`, it will only return the transaction(s)\n so they might be broadcasted later\n :rtype: list of :class:`Transaction <monero.transaction.Transaction>`"}
{"_id": "q_9564", "text": "Sends a batch of transfers. Returns a list of resulting transactions.\n\n :param destinations: a list of destination and amount pairs:\n [(:class:`Address <monero.address.Address>`, `Decimal`), ...]\n :param priority: transaction priority, implies fee. The priority can be a number\n from 1 to 4 (unimportant, normal, elevated, priority) or a constant\n from `monero.prio`.\n :param payment_id: ID for the payment (must be None if\n :class:`IntegratedAddress <monero.address.IntegratedAddress>`\n is used as the destination)\n :param unlock_time: the extra unlock delay\n :param relay: if `True`, the wallet will relay the transaction(s) to the network\n immediately; when `False`, it will only return the transaction(s)\n so they might be broadcasted later\n :rtype: list of :class:`Transaction <monero.transaction.Transaction>`"}
{"_id": "q_9565", "text": "Discover the proper class and return instance for a given Monero address.\n\n :param addr: the address as a string-like object\n :param label: a label for the address (defaults to `None`)\n\n :rtype: :class:`Address`, :class:`SubAddress` or :class:`IntegratedAddress`"}
{"_id": "q_9566", "text": "Integrates payment id into the address.\n\n :param payment_id: int, hexadecimal string or :class:`PaymentID <monero.numbers.PaymentID>`\n (max 64-bit long)\n\n :rtype: `IntegratedAddress`\n :raises: `TypeError` if the payment id is too long"}
{"_id": "q_9567", "text": "Convert hexadecimal string to mnemonic word representation with checksum."}
{"_id": "q_9568", "text": "Create options and verbose options from strings and non-string iterables in\n `options` array."}
{"_id": "q_9569", "text": "Assigns function to the operators property of the instance."}
{"_id": "q_9570", "text": "If you want to change the core prompters registry, you can\n override this method in a Question subclass."}
{"_id": "q_9571", "text": "Add a Question instance to the questions dict. Each key points\n to a list of Question instances with that key. Use the `question`\n kwarg to pass a Question instance if you want, or pass in the same\n args you would pass to instantiate a question."}
{"_id": "q_9572", "text": "Returns the next `Question` in the questionnaire, or `None` if there\n are no questions left. Returns first question for whose key there is no\n answer and for which condition is satisfied, or for which there is no\n condition."}
{"_id": "q_9573", "text": "Move `n` questions back in the questionnaire by removing the last `n`\n answers."}
{"_id": "q_9574", "text": "Helper method for displaying the answers so far."}
{"_id": "q_9575", "text": "Returns ``True`` if the input argument object is a native\n regular expression object, otherwise ``False``.\n\n Arguments:\n value (mixed): input value to test.\n\n Returns:\n bool"}
{"_id": "q_9576", "text": "Compares two values with regular expression matching support.\n\n Arguments:\n value (mixed): value to compare.\n expectation (mixed): value to match.\n regex_expr (bool, optional): enables string based regex matching.\n\n Returns:\n bool"}
{"_id": "q_9577", "text": "Simple function decorator allowing easy method chaining.\n\n Arguments:\n fn (function): target function to decorate."}
{"_id": "q_9578", "text": "Compares a string or regular expression against a given value.\n\n Arguments:\n expr (str|regex): string or regular expression value to compare.\n value (str): value to compare against.\n regex_expr (bool, optional): enables string based regex matching.\n\n Raises:\n AssertionError: in case of assertion error.\n\n Returns:\n bool"}
{"_id": "q_9579", "text": "Match the given HTTP request instance against the registered\n matcher functions in the current engine.\n\n Arguments:\n request (pook.Request): outgoing request to match.\n\n Returns:\n tuple(bool, list[Exception]): ``True`` if all matcher tests\n pass, otherwise ``False``. Also returns an optional list\n of error exceptions."}
{"_id": "q_9580", "text": "Returns a matcher instance by class or alias name.\n\n Arguments:\n name (str): matcher class name or alias.\n\n Returns:\n matcher: found matcher instance, otherwise ``None``."}
{"_id": "q_9581", "text": "Initializes a matcher instance passing variadic arguments to\n its constructor. Acts as a delegator proxy.\n\n Arguments:\n name (str): matcher class name or alias to execute.\n *args (mixed): variadic argument\n\n Returns:\n matcher: matcher instance.\n\n Raises:\n ValueError: if matcher was not found."}
{"_id": "q_9582", "text": "Defines response body data.\n\n Arguments:\n body (str|bytes): response body to use.\n\n Returns:\n self: ``pook.Response`` current instance."}
{"_id": "q_9583", "text": "Defines the mock response JSON body.\n\n Arguments:\n data (dict|list|str): JSON body data.\n\n Returns:\n self: ``pook.Response`` current instance."}
{"_id": "q_9584", "text": "Defines the mock URL to match.\n It can be a full URL with path and query params.\n\n Protocol schema is optional, defaults to ``http://``.\n\n Arguments:\n url (str): mock URL to match. E.g: ``server.com/api``.\n\n Returns:\n self: current Mock instance."}
{"_id": "q_9585", "text": "Defines a dictionary of arguments.\n\n Header keys are case insensitive.\n\n Arguments:\n headers (dict): headers to match.\n **headers (dict): headers to match as variadic keyword arguments.\n\n Returns:\n self: current Mock instance."}
{"_id": "q_9586", "text": "Defines a new header matcher expectation that must be present in the\n outgoing request in order to be satisfied, no matter what value it\n hosts.\n\n Header keys are case insensitive.\n\n Arguments:\n *names (str): header or headers names to match.\n\n Returns:\n self: current Mock instance.\n\n Example::\n\n (pook.get('server.com/api')\n .header_present('content-type'))"}
{"_id": "q_9587", "text": "Defines a list of headers that must be present in the\n outgoing request in order to satisfy the matcher, no matter what value\n the headers host.\n\n Header keys are case insensitive.\n\n Arguments:\n headers (list|tuple): header keys to match.\n\n Returns:\n self: current Mock instance.\n\n Example::\n\n (pook.get('server.com/api')\n .headers_present(['content-type', 'Authorization']))"}
{"_id": "q_9588", "text": "Defines the body data to match.\n\n ``body`` argument can be a ``str``, ``binary`` or a regular expression.\n\n Arguments:\n body (str|binary|regex): body data to match.\n\n Returns:\n self: current Mock instance."}
{"_id": "q_9589", "text": "Enables persistent mode for the current mock.\n\n Returns:\n self: current Mock instance."}
{"_id": "q_9590", "text": "Defines the mock response.\n\n Arguments:\n status (int, optional): response status code. Defaults to ``200``.\n **kw (dict): optional keyword arguments passed to ``pook.Response``\n constructor.\n\n Returns:\n pook.Response: mock response definition instance."}
{"_id": "q_9591", "text": "Matches an outgoing HTTP request against the current mock matchers.\n\n This method acts like a delegator to `pook.MatcherEngine`.\n\n Arguments:\n request (pook.Request): request instance to match.\n\n Raises:\n Exception: if the mock has an exception defined.\n\n Returns:\n tuple(bool, list[Exception]): ``True`` if the mock matches\n the outgoing HTTP request, otherwise ``False``. Also returns\n an optional list of error exceptions."}
{"_id": "q_9592", "text": "Async version of activate decorator\n\n Arguments:\n fn (function): function that be wrapped by decorator.\n _engine (Engine): pook engine instance\n\n Returns:\n function: decorator wrapper function."}
{"_id": "q_9593", "text": "Enables real networking mode, optionally passing one or multiple\n hostnames that would be used as filter.\n\n If at least one hostname matches with the outgoing traffic, the\n request will be executed via the real network.\n\n Arguments:\n *hostnames: optional list of host names to enable real network\n against them. hostname value can be a regular expression."}
{"_id": "q_9594", "text": "Creates and registers a new HTTP mock in the current engine.\n\n Arguments:\n url (str): request URL to mock.\n activate (bool): force mock engine activation.\n Defaults to ``False``.\n **kw (mixed): variadic keyword arguments for ``Mock`` constructor.\n\n Returns:\n pook.Mock: new mock instance."}
{"_id": "q_9595", "text": "Activates the registered interceptors in the mocking engine.\n\n This means any HTTP traffic captures by those interceptors will\n trigger the HTTP mock matching engine in order to determine if a given\n HTTP transaction should be mocked out or not."}
{"_id": "q_9596", "text": "Disables interceptors and stops intercepting any outgoing HTTP traffic."}
{"_id": "q_9597", "text": "Verifies if real networking mode should be used for the given\n request, passing it to the registered network filters.\n\n Arguments:\n request (pook.Request): outgoing HTTP request to test.\n\n Returns:\n bool"}
{"_id": "q_9598", "text": "Matches a given Request instance contract against the registered mocks.\n\n If a mock passes all the matchers, its response will be returned.\n\n Arguments:\n request (pook.Request): Request contract to match.\n\n Raises:\n pook.PookNoMatches: if networking is disabled and no mock matches\n with the given request contract.\n\n Returns:\n pook.Response: the mock response to be used by the interceptor."}
{"_id": "q_9599", "text": "Copies the current Request object instance for side-effects purposes.\n\n Returns:\n pook.Request: copy of the current Request instance."}
{"_id": "q_9600", "text": "Enables the HTTP traffic interceptors.\n\n This function can be used as decorator.\n\n Arguments:\n fn (function|coroutinefunction): Optional function argument\n if used as decorator.\n\n Returns:\n function: decorator wrapper function, only if called as decorator,\n otherwise ``None``.\n\n Example::\n\n # Standard use case\n pook.activate()\n pook.mock('server.com/foo').reply(404)\n\n res = requests.get('server.com/foo')\n assert res.status_code == 404\n pook.disable()\n\n # Decorator use case\n @pook.activate\n def test_request():\n pook.mock('server.com/foo').reply(404)\n\n res = requests.get('server.com/foo')\n assert res.status_code == 404"}
{"_id": "q_9601", "text": "Creates a new isolated mock engine to be used via context manager.\n\n Example::\n\n with pook.use() as engine:\n pook.mock('server.com/foo').reply(404)\n\n res = requests.get('server.com/foo')\n assert res.status_code == 404"}
{"_id": "q_9602", "text": "Adds one or multiple HTTP traffic interceptors to the current\n mocking engine.\n\n Interceptors are typically HTTP client specific wrapper classes that\n implements the pook interceptor interface.\n\n Arguments:\n interceptors (pook.interceptors.BaseInterceptor)"}
{"_id": "q_9603", "text": "Removes a specific interceptor by name.\n\n Arguments:\n name (str): interceptor name to disable.\n\n Returns:\n bool: `True` if the interceptor was disabled, otherwise `False`."}
{"_id": "q_9604", "text": "Get key from connection or default to settings."}
{"_id": "q_9605", "text": "Save the original_value."}
{"_id": "q_9606", "text": "Creates a new intent, optionally checking the cache first\n\n Args:\n name (str): The associated name of the intent\n lines (list<str>): All the sentences that should activate the intent\n reload_cache: Whether to ignore cached intent if it exists"}
{"_id": "q_9607", "text": "Loads an entity, optionally checking the cache first\n\n Args:\n name (str): The associated name of the entity\n file_name (str): The location of the entity file\n reload_cache (bool): Whether to refresh all of the cache"}
{"_id": "q_9608", "text": "Loads an intent, optionally checking the cache first\n\n Args:\n name (str): The associated name of the intent\n file_name (str): The location of the intent file\n reload_cache (bool): Whether to refresh all of the cache"}
{"_id": "q_9609", "text": "Unload an intent"}
{"_id": "q_9610", "text": "Internal pickleable function used to train objects in another process"}
{"_id": "q_9611", "text": "Re-apply type annotations from .pyi stubs to your codebase."}
{"_id": "q_9612", "text": "Recursively retype files or directories given. Generate errors."}
{"_id": "q_9613", "text": "Retype `src`, finding types in `pyi_dir`. Save in `targets`.\n\n The file should remain formatted exactly as it was before, save for:\n - annotations\n - additional imports needed to satisfy annotations\n - additional module-level names needed to satisfy annotations\n\n Type comments in sources are normalized to type annotations."}
{"_id": "q_9614", "text": "Reapplies the typed_ast node into the lib2to3 tree.\n\n Also does post-processing. This is done in reverse order to enable placing\n TypeVars and aliases that depend on one another."}
{"_id": "q_9615", "text": "Converts type comments in `node` to proper annotated assignments."}
{"_id": "q_9616", "text": "Copies AST nodes from `type_comment` into the ast3.arguments in `args`.\n\n Does validation of argument count (allowing for untyped self/cls)\n and type (vararg and kwarg)."}
{"_id": "q_9617", "text": "Copies argument type comments from the legacy long form to annotations\n in the entire function signature."}
{"_id": "q_9618", "text": "Return the type given in `expected`.\n\n Raise ValueError if `expected` isn't equal to `actual`. If --replace-any is\n used, the Any type in `actual` is considered equal.\n\n The implementation is naively checking if the string representation of\n `actual` is one of \"Any\", \"typing.Any\", or \"t.Any\". This is done for two\n reasons:\n\n 1. I'm lazy.\n 2. We want people to be able to explicitly state that they want Any without it\n being replaced. This way they can use an alias."}
{"_id": "q_9619", "text": "Removes the legacy signature type comment, leaving other comments if any."}
{"_id": "q_9620", "text": "Returns the offset after which a statement can be inserted to the `body`.\n\n This offset is calculated to come after all imports, and maybe existing\n (possibly annotated) assignments if `skip_assignments` is True.\n\n Also returns the indentation prefix that should be applied to the inserted\n node."}
{"_id": "q_9621", "text": "Recomputes all line numbers based on the number of \\n characters."}
{"_id": "q_9622", "text": "Get user info for GBDX S3, put into instance vars for convenience.\n\n Args:\n None.\n\n Returns:\n Dictionary with S3 access key, S3 secret key, S3 session token,\n user bucket and user prefix (dict)."}
{"_id": "q_9623", "text": "Equalize the histogram and normalize the value range.\n Equalization is on all three bands, not per-band."}
{"_id": "q_9624", "text": "Match the histogram to existing imagery"}
{"_id": "q_9625", "text": "Calculates Normalized Difference Vegetation Index using NIR and Red of an image.\n\n Returns: numpy array with ndvi values"}
{"_id": "q_9626", "text": "Calculates Normalized Difference Water Index using Coastal and NIR2 bands for WV02, WV03.\n For Landsat8 and sentinel2 calculated by using Green and NIR bands.\n\n Returns: numpy array of ndwi values"}
{"_id": "q_9627", "text": "Checks to see if a CatalogID has been ordered or not.\n\n Args:\n catalogID (str): The catalog ID from the platform catalog.\n Returns:\n ordered (bool): Whether or not the image has been ordered"}
{"_id": "q_9628", "text": "Return a wrapped object that warns about deprecated accesses"}
{"_id": "q_9629", "text": "Given a name, figure out if a multiplex port prefixes this name and return it. Otherwise return None."}
{"_id": "q_9630", "text": "Save output data from any task in this workflow to S3\n\n Args:\n output: Reference task output (e.g. task.outputs.output1).\n\n location (optional): Subfolder under which the output will be saved.\n It will be placed under the account directory in gbd-customer-data bucket:\n s3://gbd-customer-data/{account_id}/{location}\n Leave blank to save to: workflow_output/{workflow_id}/{task_name}/{port_name}\n\n Returns:\n None"}
{"_id": "q_9631", "text": "Generate workflow json for launching the workflow against the gbdx api\n\n Args:\n None\n\n Returns:\n json string"}
{"_id": "q_9632", "text": "Get the task IDs of a running workflow\n\n Args:\n None\n\n Returns:\n List of task IDs"}
{"_id": "q_9633", "text": "Cancel a running workflow.\n\n Args:\n None\n\n Returns:\n None"}
{"_id": "q_9634", "text": "Get stderr from all the tasks of a workflow.\n\n Returns:\n (list): tasks with their stderr\n\n Example:\n >>> workflow.stderr\n [\n {\n \"id\": \"4488895771403082552\",\n \"taskType\": \"AOP_Strip_Processor\",\n \"name\": \"Task1\",\n \"stderr\": \"............\"\n }\n ]"}
{"_id": "q_9635", "text": "Helper method for handling projection codes that are unknown to pyproj\n\n Args:\n prj_code (str): an epsg proj code\n\n Returns:\n projection: a pyproj projection"}
{"_id": "q_9636", "text": "Show a slippy map preview of the image. Requires iPython.\n\n Args:\n image (image): image object to display\n zoom (int): zoom level to initialize the map, default is 16\n center (list): center coordinates to initialize the map, defaults to center of image\n bands (list): bands of image to display, defaults to the image's default RGB bands"}
{"_id": "q_9637", "text": "Registers a new GBDX task.\n\n Args:\n task_json (dict): Dictionary representing task definition.\n json_filename (str): A full path of a file with json representing the task definition.\n Only one out of task_json and json_filename should be provided.\n Returns:\n Response (str)."}
{"_id": "q_9638", "text": "Gets definition of a registered GBDX task.\n\n Args:\n task_name (str): Task name.\n\n Returns:\n Dictionary representing the task definition."}
{"_id": "q_9639", "text": "Deletes a GBDX task.\n\n Args:\n task_name (str): Task name.\n\n Returns:\n Response (str)."}
{"_id": "q_9640", "text": "Updates a GBDX task.\n\n Args:\n task_name (str): Task name.\n task_json (dict): Dictionary representing updated task definition.\n\n Returns:\n Dictionary representing the updated task definition."}
{"_id": "q_9641", "text": "Append two required tasks to the given output to ingest to VS"}
{"_id": "q_9642", "text": "Retrieves an AnswerFactory Recipe by id\n\n Args:\n recipe_id The id of the recipe\n\n Returns:\n A JSON representation of the recipe"}
{"_id": "q_9643", "text": "Saves an AnswerFactory Recipe\n\n Args:\n recipe (dict): Dictionary specifying a recipe\n\n Returns:\n AnswerFactory Recipe id"}
{"_id": "q_9644", "text": "Saves an AnswerFactory Project\n\n Args:\n project (dict): Dictionary specifying an AnswerFactory Project.\n\n Returns:\n AnswerFactory Project id"}
{"_id": "q_9645", "text": "Deletes a project by id\n\n Args:\n project_id: The project id to delete\n\n Returns:\n Nothing"}
{"_id": "q_9646", "text": "Renders a javascript snippet suitable for use as a mapbox-gl fill paint entry\n\n Returns:\n A dict that can be converted to a mapbox-gl javascript paint snippet"}
{"_id": "q_9647", "text": "Renders a javascript snippet suitable for use as a mapbox-gl fill-extrusion paint entry\n\n Returns:\n A dict that can be converted to a mapbox-gl javascript paint snippet"}
{"_id": "q_9648", "text": "Create a vectors in the vector service.\n\n Args:\n vectors: A single geojson vector or a list of geojson vectors. Item_type and ingest_source are required.\n\n Returns:\n (list): IDs of the vectors created\n\n Example:\n >>> vectors.create(\n ... {\n ... \"type\": \"Feature\",\n ... \"geometry\": {\n ... \"type\": \"Point\",\n ... \"coordinates\": [1.0,1.0]\n ... },\n ... \"properties\": {\n ... \"text\" : \"item text\",\n ... \"name\" : \"item name\",\n ... \"item_type\" : \"type\",\n ... \"ingest_source\" : \"source\",\n ... \"attributes\" : {\n ... \"latitude\" : 1,\n ... \"institute_founded\" : \"2015-07-17\",\n ... \"mascot\" : \"moth\"\n ... }\n ... }\n ... }\n ... )"}
{"_id": "q_9649", "text": "Create a single vector in the vector service\n\n Args:\n wkt (str): wkt representation of the geometry\n item_type (str): item_type of the vector\n ingest_source (str): source of the vector\n attributes: a set of key-value pairs of attributes\n\n Returns:\n id (str): string identifier of the vector created"}
{"_id": "q_9650", "text": "Retrieves a vector. Not usually necessary because searching is the best way to find & get stuff.\n\n Args:\n ID (str): ID of the vector object\n index (str): Optional. Index the object lives in. defaults to 'vector-web-s'\n\n Returns:\n record (dict): A dict object identical to the json representation of the catalog record"}
{"_id": "q_9651", "text": "Renders a mapbox gl map from a vector service query"}
{"_id": "q_9652", "text": "Renders a mapbox gl map from a vector service query or a list of geojson features\n\n Args:\n features (list): a list of geojson features\n query (str): a VectorServices query \n styles (list): a list of VectorStyles to apply to the features \n bbox (list): a bounding box to query for features ([minx, miny, maxx, maxy])\n zoom (int): the initial zoom level of the map\n center (list): a list of [lat, lon] used to center the map\n api_key (str): a valid Mapbox API key\n image (dict): a CatalogImage or a ndarray\n image_bounds (list): a list of bounds for image positioning \n Use outside of GBDX Notebooks requires a MapBox API key, sign up for free at https://www.mapbox.com/pricing/\n Pass the key using the `api_key` keyword or set an environmental variable called `MAPBOX API KEY`\n cmap (str): MatPlotLib colormap to use for rendering single band images (default: viridis)"}
{"_id": "q_9653", "text": "Reads data from a dask array and returns the computed ndarray matching the given bands\n\n Args:\n bands (list): band indices to read from the image. Returns bands in the order specified in the list of bands.\n\n Returns:\n ndarray: a numpy array of image data"}
{"_id": "q_9654", "text": "Get a random window of a given shape from within an image\n\n Args:\n window_shape (tuple): The desired shape of the returned image as (height, width) in pixels.\n\n Returns:\n image: a new image object of the specified shape and same type"}
{"_id": "q_9655", "text": "Iterate over random windows of an image\n\n Args:\n count (int): the number of the windows to generate. Defaults to 64, if `None` will continue to iterate over random windows until stopped.\n window_shape (tuple): The desired shape of each image as (height, width) in pixels.\n\n Yields:\n image: an image of the given shape and same type."}
{"_id": "q_9656", "text": "Return a subsetted window of a given size, centered on a geometry object\n\n Useful for generating training sets from vector training data\n Will throw a ValueError if the window is not within the image bounds\n\n Args:\n geom (shapely.geometry): Geometry to center the image on\n window_shape (tuple): The desired shape of the image as (height, width) in pixels.\n\n Returns:\n image: image object of same type"}
{"_id": "q_9657", "text": "Iterate over a grid of windows of a specified shape covering an image.\n\n The image is divided into a grid of tiles of size window_shape. Each iteration returns\n the next window.\n\n\n Args:\n window_shape (tuple): The desired shape of each image as (height,\n width) in pixels.\n pad: (bool): Whether or not to pad edge cells. If False, cells that do not\n have the desired shape will not be returned. Defaults to True.\n\n Yields:\n image: image object of same type."}
{"_id": "q_9658", "text": "Subsets the Image by the given bounds\n\n Args:\n bbox (list): optional. A bounding box array [minx, miny, maxx, maxy]\n wkt (str): optional. A WKT geometry string\n geojson (str): optional. A GeoJSON geometry dictionary\n\n Returns:\n image: an image instance of the same type"}
{"_id": "q_9659", "text": "Returns the bounds of a geometry object in pixel coordinates\n\n Args:\n geom: Shapely geometry object or GeoJSON as Python dictionary or WKT string\n clip (bool): Clip the bounds to the min/max extent of the image\n\n Returns:\n list: bounds in pixels [min x, min y, max x, max y] clipped to image bounds"}
{"_id": "q_9660", "text": "Finds supported geometry types, parses them and returns the bbox"}
{"_id": "q_9661", "text": "convert mercator bbox to tile index limits"}
{"_id": "q_9662", "text": "Get stdout for a particular task.\n\n Args:\n workflow_id (str): Workflow id.\n task_id (str): Task id.\n\n Returns:\n Stdout of the task (string)."}
{"_id": "q_9663", "text": "Cancels a running workflow.\n\n Args:\n workflow_id (str): Workflow id.\n\n Returns:\n Nothing"}
{"_id": "q_9664", "text": "Checks GBDX batch workflow status.\n\n Args:\n batch workflow_id (str): Batch workflow id.\n\n Returns:\n Batch Workflow status (str)."}
{"_id": "q_9665", "text": "Orders images from GBDX.\n\n Args:\n image_catalog_ids (str or list): A single catalog id or a list of \n catalog ids.\n batch_size (int): The image_catalog_ids will be split into \n batches of batch_size. The ordering API max \n batch size is 100, if batch_size is greater \n than 100 it will be truncated.\n callback (str): A url to call when ordering is completed.\n\n Returns:\n order_ids (str or list): If one batch, returns a string. If more\n than one batch, returns a list of order ids, \n one for each batch."}
{"_id": "q_9666", "text": "Checks imagery order status. There can be more than one image per\n order and this function returns the status of all images\n within the order.\n\n Args:\n order_id (str): The id of the order placed.\n\n Returns:\n List of dictionaries, one per image. Each dictionary consists\n of the keys 'acquisition_id', 'location' and 'state'."}
{"_id": "q_9667", "text": "Retrieves the strip footprint WKT string given a cat ID.\n\n Args:\n catID (str): The source catalog ID from the platform catalog.\n includeRelationships (bool): whether to include graph links to related objects. Default False.\n\n Returns:\n record (dict): A dict object identical to the json representation of the catalog record"}
{"_id": "q_9668", "text": "Retrieves the strip catalog metadata given a cat ID.\n\n Args:\n catID (str): The source catalog ID from the platform catalog.\n\n Returns:\n metadata (dict): A metadata dictionary.\n\n TODO: have this return a class object with interesting information exposed."}
{"_id": "q_9669", "text": "Use the google geocoder to get latitude and longitude for an address string\n\n Args:\n address: any address string\n\n Returns:\n A tuple of (lat,lng)"}
{"_id": "q_9670", "text": "Perform a catalog search over a specific point, specified by lat,lng\n\n Args:\n lat: latitude\n lng: longitude\n filters: Array of filters. Optional. Example:\n [\n \"(sensorPlatformName = 'WORLDVIEW01' OR sensorPlatformName ='QUICKBIRD02')\",\n \"cloudCover < 10\",\n \"offNadirAngle < 10\"\n ]\n startDate: string. Optional. Example: \"2004-01-01T00:00:00.000Z\"\n endDate: string. Optional. Example: \"2004-01-01T00:00:00.000Z\"\n types: Array of types to search for. Optional. Example (and default): [\"Acquisition\"]\n\n Returns:\n catalog search resultset"}
{"_id": "q_9671", "text": "Perform a catalog search\n\n Args:\n searchAreaWkt: WKT Polygon of area to search. Optional.\n filters: Array of filters. Optional. Example:\n [\n \"(sensorPlatformName = 'WORLDVIEW01' OR sensorPlatformName ='QUICKBIRD02')\",\n \"cloudCover < 10\",\n \"offNadirAngle < 10\"\n ]\n startDate: string. Optional. Example: \"2004-01-01T00:00:00.000Z\"\n endDate: string. Optional. Example: \"2004-01-01T00:00:00.000Z\"\n types: Array of types to search for. Optional. Example (and default): [\"Acquisition\"]\n\n Returns:\n catalog search resultset"}
{"_id": "q_9672", "text": "Return the most recent image\n\n Args:\n results: a catalog resultset, as returned from a search\n types: array of types you want. optional.\n sensors: array of sensornames. optional.\n N: number of recent images to return. defaults to 1.\n\n Returns:\n single catalog item, or none if not found"}
{"_id": "q_9673", "text": "Parses yaml and returns a list of repeated variables and\n the line on which they occur"}
{"_id": "q_9674", "text": "this function calculates the regression coefficients for a\n given vector containing the averages of tip and branch\n quantities.\n\n Parameters\n ----------\n Q : numpy.array\n vector with\n slope : None, optional\n Description\n\n Returns\n -------\n TYPE\n Description"}
{"_id": "q_9675", "text": "Inverse of the covariance matrix\n\n Returns\n -------\n\n H : (np.array)\n inverse of the covariance matrix."}
{"_id": "q_9676", "text": "calculate the weighted sums of the tip and branch values and\n their second moments."}
{"_id": "q_9677", "text": "This function implements the propagation of the means,\n variance, and covariances along a branch. It operates\n both towards the root and tips.\n\n Parameters\n ----------\n n : (node)\n the branch connecting this node to its parent is used\n for propagation\n tv : (float)\n tip value. Only required if the node is terminal\n bl : (float)\n branch value. The increment of the tree-associated quantity\n\n var : (float)\n the variance increment along the branch\n\n Returns\n -------\n Q : (np.array)\n a vector of length 6 containing the updated quantities"}
{"_id": "q_9678", "text": "determine the position on the tree that minimizes the bilinear\n product of the inverse covariance and the data vectors.\n\n Returns\n -------\n best_root : (dict)\n dictionary with the node, the fraction `x` at which the branch\n is to be split, and the regression parameters"}
{"_id": "q_9679", "text": "calculates an interpolation object that maps time to the number of\n concurrent branches in the tree. The result is stored in self.nbranches"}
{"_id": "q_9680", "text": "returns the cost associated with a branch starting at t_node\n t_node is time before present, the branch goes back in time\n\n Args:\n - t_node: time of the node\n - branch_length: branch length, determines when this branch merges with sister\n - multiplicity: 2 if merger is binary, higher if this is a polytomy"}
{"_id": "q_9681", "text": "attaches the merger cost to each branch length interpolator in the tree."}
{"_id": "q_9682", "text": "Convert profile to sequence and normalize profile across sites.\n\n Parameters\n ----------\n\n profile : numpy 2D array\n Profile. Shape of the profile should be (L x a), where L - sequence\n length, a - alphabet size.\n\n gtr : gtr.GTR\n Instance of the GTR class to supply the sequence alphabet\n\n collapse_prof : bool\n Whether to convert the profile to the delta-function\n\n Returns\n -------\n seq : numpy.array\n Sequence as numpy array of length L\n\n prof_values : numpy.array\n Values of the profile for the chosen sequence characters (length L)\n\n idx : numpy.array\n Indices chosen from profile as array of length L"}
{"_id": "q_9683", "text": "Set a new GTR object\n\n Parameters\n -----------\n\n value : GTR\n the new GTR object"}
{"_id": "q_9684", "text": "Create new GTR model if needed, and set the model as an attribute of the\n TreeAnc class\n\n Parameters\n -----------\n\n in_gtr : str, GTR\n The gtr model to be assigned. If string is passed,\n it is taken as the name of a standard GTR model, and is\n attempted to be created through :code:`GTR.standard()` interface. If a\n GTR instance is passed, it is set directly.\n\n **kwargs\n Keyword arguments to construct the GTR model. If none are passed, defaults\n are assumed."}
{"_id": "q_9685", "text": "Set link to parent and calculate distance to root for all tree nodes.\n Should be run once the tree is read and after every rerooting,\n topology change or branch length optimizations."}
{"_id": "q_9686", "text": "Set auxiliary parameters to every node of the tree."}
{"_id": "q_9687", "text": "Expand a node's compressed sequence into the real sequence\n\n Parameters\n ----------\n node : PhyloTree.Clade\n Tree node\n\n Returns\n -------\n seq : np.array\n Sequence as np.array of chars"}
{"_id": "q_9688", "text": "Reconstruct ancestral states using Fitch's algorithm. The method requires\n sequences to be assigned to leaves. It implements the iteration from\n leaves to the root constructing the Fitch profiles for each character of\n the sequence, and then by propagating from the root to the leaves,\n reconstructs the sequences of the internal nodes.\n\n Keyword Args\n ------------\n\n Returns\n -------\n Ndiff : int\n Number of the characters that changed since the previous\n reconstruction. These changes are determined from the pre-set\n sequence attributes of the nodes. If there are no sequences available\n (i.e., no reconstruction has been made before), returns the total\n number of characters in the tree."}
{"_id": "q_9689", "text": "Find the intersection of any number of 1D arrays.\n Return the sorted, unique values that are in all of the input arrays.\n Adapted from numpy.lib.arraysetops.intersect1d"}
{"_id": "q_9690", "text": "Calculate the likelihood of the given realization of the sequences in\n the tree\n\n Returns\n -------\n\n log_lh : float\n The tree likelihood given the sequences"}
{"_id": "q_9691", "text": "Set branch lengths to either mutation lengths or given branch lengths.\n The assigned values are to be used in the following ML analysis."}
{"_id": "q_9692", "text": "Perform optimization for the branch lengths of the entire tree.\n This method only does a single path and needs to be iterated.\n\n **Note** this method assumes that each node stores information\n about its sequence as numpy.array object (node.sequence attribute).\n Therefore, before calling this method, sequence reconstruction with\n either of the available models must be performed.\n\n Parameters\n ----------\n\n mode : str\n Optimize branch length assuming the joint ML sequence assignment\n of both ends of the branch (:code:`joint`), or trace over all possible sequence\n assignments on both ends of the branch (:code:`marginal`) (slower, experimental).\n\n **kwargs :\n Keyword arguments\n\n Keyword Args\n ------------\n\n verbose : int\n Output level\n\n store_old : bool\n If True, the old lengths will be saved in :code:`node._old_dist` attribute.\n Useful for testing, and special post-processing."}
{"_id": "q_9693", "text": "EXPERIMENTAL GLOBAL OPTIMIZATION"}
{"_id": "q_9694", "text": "Calculate optimal branch length given the sequences of node and parent\n\n Parameters\n ----------\n node : PhyloTree.Clade\n TreeNode, attached to the branch.\n\n Returns\n -------\n new_len : float\n Optimal length of the given branch"}
{"_id": "q_9695", "text": "Get the multiple sequence alignment, including reconstructed sequences for\n the internal nodes.\n\n Returns\n -------\n new_aln : MultipleSeqAlignment\n Alignment including sequences of all internal nodes"}
{"_id": "q_9696", "text": "function that returns the product of the transition matrix\n and the equilibrium frequencies to obtain the rate matrix\n of the GTR model"}
{"_id": "q_9697", "text": "Create a GTR model by specifying the matrix explicitly\n\n Parameters\n ----------\n\n mu : float\n Substitution rate\n\n W : nxn matrix\n Substitution matrix\n\n pi : n vector\n Equilibrium frequencies\n\n **kwargs:\n Key word arguments to be passed\n\n Keyword Args\n ------------\n\n alphabet : str\n Specify alphabet when applicable. If the alphabet specification is\n required, but no alphabet is specified, the nucleotide alphabet will be used as\n default."}
{"_id": "q_9698", "text": "Create standard model of molecular evolution.\n\n Parameters\n ----------\n\n model : str\n Model to create. See list of available models below\n **kwargs:\n Key word arguments to be passed to the model\n\n\n **Available models**\n\n - JC69:\n\n Jukes-Cantor 1969 model. This model assumes equal frequencies\n of the nucleotides and equal transition rates between nucleotide states.\n For more info, see: Jukes and Cantor (1969).\n Evolution of Protein Molecules. New York: Academic Press. pp. 21-132.\n To create this model, use:\n\n :code:`mygtr = GTR.standard(model='jc69', mu=<my_mu>, alphabet=<my_alph>)`\n\n :code:`my_mu` - substitution rate (float)\n\n :code:`my_alph` - alphabet (str: :code:`'nuc'` or :code:`'nuc_nogap'`)\n\n\n\n - K80:\n\n Kimura 1980 model. Assumes equal concentrations across nucleotides, but\n allows different rates between transitions and transversions. The ratio\n of the transversion/transition rates is given by kappa parameter.\n For more info, see\n Kimura (1980), J. Mol. Evol. 16 (2): 111-120. doi:10.1007/BF01731581.\n Current implementation of the model does not account for the gaps.\n\n\n :code:`mygtr = GTR.standard(model='k80', mu=<my_mu>, kappa=<my_kappa>)`\n\n :code:`mu` - overall substitution rate (float)\n\n :code:`kappa` - ratio of transversion/transition rates (float)\n\n\n - F81:\n\n Felsenstein 1981 model. Assumes non-equal concentrations across nucleotides,\n but the transition rate between all states is assumed to be equal. See\n Felsenstein (1981), J. Mol. Evol. 17 (6): 368-376. doi:10.1007/BF01734359\n for details.\n\n :code:`mygtr = GTR.standard(model='F81', mu=<mu>, pi=<pi>, alphabet=<alph>)`\n\n :code:`mu` - substitution rate (float)\n\n :code:`pi` - nucleotide concentrations (numpy.array)\n\n :code:`alphabet` - alphabet to use. (:code:`'nuc'` or :code:`'nuc_nogap'`)\n\n\n - HKY85:\n\n Hasegawa, Kishino and Yano 1985 model. Allows different concentrations of the\n nucleotides (as in F81) + distinguishes between transition/transversion substitutions\n (similar to K80). Link:\n Hasegawa, Kishino, Yano (1985), J. Mol. Evol. 22 (2): 160-174. doi:10.1007/BF02101694\n\n Current implementation of the model does not account for the gaps\n\n :code:`mygtr = GTR.standard(model='HKY85', mu=<mu>, pi=<pi>, kappa=<kappa>)`\n\n :code:`mu` - substitution rate (float)\n\n :code:`pi` - nucleotide concentrations (numpy.array)\n\n :code:`kappa` - ratio of transversion/transition rates (float)\n\n\n\n - T92:\n\n Tamura 1992 model. Extending Kimura (1980) model for the case where a\n G+C-content bias exists. Link:\n Tamura K (1992), Mol. Biol. Evol. 9 (4): 678-687. DOI: 10.1093/oxfordjournals.molbev.a040752\n\n Current implementation of the model does not account for the gaps\n\n :code:`mygtr = GTR.standard(model='T92', mu=<mu>, pi_GC=<pi_gc>, kappa=<kappa>)`\n\n :code:`mu` - substitution rate (float)\n\n :code:`pi_GC` - relative GC content\n\n :code:`kappa` - ratio of transversion/transition rates (float)\n\n\n - TN93:\n\n Tamura and Nei 1993. The model distinguishes between the two different types of\n transition: (A <-> G) is allowed to have a different rate to (C<->T).\n Transversions have the same rate. The frequencies of the nucleotides are allowed\n to be different. Link: Tamura, Nei (1993), MolBiol Evol. 10 (3): 512-526.\n DOI:10.1093/oxfordjournals.molbev.a040023\n\n :code:`mygtr = GTR.standard(model='TN93', mu=<mu>, kappa1=<k1>, kappa2=<k2>)`\n\n :code:`mu` - substitution rate (float)\n\n :code:`kappa1` - relative A<-->C, A<-->T, T<-->G and G<-->C rates (float)\n\n :code:`kappa2` - relative C<-->T rate (float)\n\n .. Note::\n Rate of A<-->G substitution is set to one. All other rates\n (kappa1, kappa2) are specified relative to this rate"}
{"_id": "q_9699", "text": "Find the optimal distance between the two sequences\n\n Parameters\n ----------\n\n seq_p : character array\n Parent sequence\n\n seq_c : character array\n Child sequence\n\n pattern_multiplicity : numpy array\n If sequences are reduced by combining identical alignment patterns,\n these multiplicities need to be accounted for when counting the number\n of mutations across a branch. If None, all patterns are assumed to\n occur exactly once.\n\n ignore_gaps : bool\n If True, ignore gaps in distance calculations"}
{"_id": "q_9700", "text": "Find the optimal distance between the two sequences, for compressed sequences\n\n Parameters\n ----------\n\n seq_pair : compressed_sequence_pair\n Compressed representation of sequences along a branch, either\n as tuple of state pairs or as tuple of profiles.\n\n multiplicity : array\n Number of times each state pair in seq_pair appears (if profile==False)\n\n Number of times an alignment pattern is observed (if profiles==True)\n\n profiles : bool, default False\n The standard branch length optimization assumes fixed sequences at\n either end of the branch. With profiles==True, optimization is performed\n while summing over all possible states of the nodes at either end of the\n branch. Note that the meaning/format of seq_pair and multiplicity\n depend on the value of profiles."}
{"_id": "q_9701", "text": "Compute the probability of the sequence state of the child\n at time t later, given the parent profile.\n\n Parameters\n ----------\n\n profile : numpy.array\n Sequence profile. Shape = (L, a),\n where L - sequence length, a - alphabet size.\n\n t : double\n Time to propagate\n\n return_log: bool\n If True, return log-probability\n\n Returns\n -------\n\n res : np.array\n Profile of the sequence after time t in the future.\n Shape = (L, a), where L - sequence length, a - alphabet size."}
{"_id": "q_9702", "text": "Returns the log-likelihood of sampling a sequence from equilibrium frequency.\n Expects a sequence as numpy array\n\n Parameters\n ----------\n\n seq : numpy array\n Compressed sequence as an array of chars\n\n pattern_multiplicity : numpy_array\n The number of times each position in sequence is observed in the\n initial alignment. If None, sequence is assumed to be not compressed"}
{"_id": "q_9703", "text": "if branch_length mode is not explicitly set, set according to\n empirical branch length distribution in input tree\n\n Parameters\n ----------\n\n branch_length_mode : str, 'input', 'joint', 'marginal'\n if the maximal branch length in the tree is longer than 0.05, this will\n default to 'input'. Otherwise set to 'joint'"}
{"_id": "q_9704", "text": "Labels outlier branches that don't seem to follow a molecular clock\n and excludes them from subsequent molecular clock estimation and\n the timetree propagation.\n\n Parameters\n ----------\n reroot : str\n Method to find the best root in the tree (see :py:meth:`treetime.TreeTime.reroot` for options)\n\n n_iqd : int\n Number of iqd intervals. The outlier nodes are those which do not fall\n into :math:`IQD\\cdot n_iqd` interval (:math:`IQD` is the interval between\n 75\\ :sup:`th` and 25\\ :sup:`th` percentiles)\n\n If None, the default (3) is assumed\n\n plot : bool\n If True, plot the results"}
{"_id": "q_9705", "text": "Print the total likelihood of the tree given the constrained leaves\n\n Parameters\n ----------\n\n joint : bool\n If true, print joint LH, else print marginal LH"}
{"_id": "q_9706", "text": "Add a coalescent model to the tree and optionally optimize\n\n Parameters\n ----------\n Tc : float,str\n If this is a float, it will be interpreted as the inverse merger\n rate in molecular clock units, if it is a"}
{"_id": "q_9707", "text": "Determine the node that, when the tree is rooted on this node, results\n in the best regression of temporal constraints and root to tip distances.\n\n Parameters\n ----------\n\n infer_gtr : bool\n If True, infer new GTR model after re-root\n\n covariation : bool\n account for covariation structure when rerooting the tree\n\n force_positive : bool\n only accept positive evolutionary rate estimates when rerooting the tree"}
{"_id": "q_9708", "text": "Function that attempts to load a tree and build it from the alignment\n if no tree is provided."}
{"_id": "q_9709", "text": "Checks if input is VCF and reads in appropriately if it is"}
{"_id": "q_9710", "text": "implementing treetime ancestral"}
{"_id": "q_9711", "text": "Assess the width of the probability distribution. This returns\n full-width-half-max"}
{"_id": "q_9712", "text": "Create delta function distribution."}
{"_id": "q_9713", "text": "multiplies a list of Distribution objects"}
{"_id": "q_9714", "text": "instantiate a TreeRegression object and set its tip_value and branch_value function\n to defaults that are sensible for treetime instances.\n\n Parameters\n ----------\n covariation : bool, optional\n account for phylogenetic covariation\n Returns\n -------\n TreeRegression\n a TreeRegression instance with self.tree attached as tree."}
{"_id": "q_9715", "text": "Use the date constraints to calculate the most likely positions of\n unconstrained nodes.\n\n Parameters\n ----------\n\n time_marginal : bool\n If true, use marginal reconstruction for node positions\n\n **kwargs\n Key word arguments to initialize dates constraints"}
{"_id": "q_9716", "text": "This function converts the estimated \"time_before_present\" properties of all nodes\n to numerical dates stored in the \"numdate\" attribute. This date is further converted\n into a human readable date string in format %Y-%m-%d assuming the usual calendar.\n\n Returns\n -------\n None\n All manipulations are done in place on the tree"}
{"_id": "q_9717", "text": "If temporal reconstruction was done using the marginal ML mode, the entire distribution of\n times is available. This function determines the interval around the highest\n posterior probability region that contains the specified fraction of the probability mass.\n In absence of marginal reconstruction, it will return uncertainty based on rate\n variation. If both are present, the wider interval will be returned.\n\n Parameters\n ----------\n\n node : PhyloTree.Clade\n The node for which the posterior region is to be calculated\n\n interval : float\n Float specifying how much of the posterior probability is\n to be contained in the region\n\n Returns\n -------\n max_posterior_region : numpy array\n Array with two numerical dates delineating the high posterior region"}
{"_id": "q_9718", "text": "Find the median of the function represented as an interpolation object."}
{"_id": "q_9719", "text": "Convert datetime object to the numeric date.\n The numeric date format is YYYY.F, where F is the fraction of the year passed\n\n Parameters\n ----------\n dt: datetime.datetime, None\n date to be converted. If None, assume today"}
{"_id": "q_9720", "text": "Create the conversion object automatically from the tree\n\n Parameters\n ----------\n\n clock_model : dict\n dictionary as returned from TreeRegression with fields intercept and slope"}
{"_id": "q_9721", "text": "Socket connection."}
{"_id": "q_9722", "text": "Terminate connection with Guacamole guacd server."}
{"_id": "q_9723", "text": "Send encoded instructions to Guacamole guacd server."}
{"_id": "q_9724", "text": "Send instruction after encoding."}
{"_id": "q_9725", "text": "Establish connection with Guacamole guacd server via handshake."}
{"_id": "q_9726", "text": "Return a utf-8 encoded string from a valid unicode string.\n\n :param unicode_str: Unicode string.\n\n :return: str"}
{"_id": "q_9727", "text": "Loads a new GuacamoleInstruction from encoded instruction string.\n\n :param instruction: Instruction string.\n\n :return: GuacamoleInstruction()"}
{"_id": "q_9728", "text": "Prepare the instruction to be sent over the wire.\n\n :return: str"}
{"_id": "q_9729", "text": "Runs the operator matcher test function."}
{"_id": "q_9730", "text": "Runs the current operator with the subject arguments to test.\n\n This method is implemented by matchers only."}
{"_id": "q_9731", "text": "Registers a new operator function in the test engine.\n\n Arguments:\n *args: variadic arguments.\n **kw: variadic keyword arguments.\n\n Returns:\n function"}
{"_id": "q_9732", "text": "Registers a new attribute only operator function in the test engine.\n\n Arguments:\n *args: variadic arguments.\n **kw: variadic keyword arguments.\n\n Returns:\n function"}
{"_id": "q_9733", "text": "Register plugin in grappa.\n\n `plugin` argument can be a function or a object that implement `register`\n method, which should accept one argument: `grappa.Engine` instance.\n\n Arguments:\n plugin (function|module): grappa plugin object to register.\n\n Raises:\n ValueError: if `plugin` is not a valid interface.\n\n Example::\n\n import grappa\n\n class MyOperator(grappa.Operator):\n pass\n\n def my_plugin(engine):\n engine.register(MyOperator)\n\n grappa.use(my_plugin)"}
{"_id": "q_9734", "text": "Registers one or multiple operators in the test engine."}
{"_id": "q_9735", "text": "Get instance URL by ID"}
{"_id": "q_9736", "text": "Returns a versioned URI string for this class,\n and don't pluralize the class name."}
{"_id": "q_9737", "text": "Download the file to the specified directory or file path.\n Downloads to a temporary directory if no path is specified.\n\n Returns the absolute path to the file."}
{"_id": "q_9738", "text": "Get the commit objects parent Import or Migration"}
{"_id": "q_9739", "text": "Asks the user for their email and password."}
{"_id": "q_9740", "text": "Force an interactive login via the command line.\n Sets the global API key and updates the client auth."}
{"_id": "q_9741", "text": "Shortcut to do range filters on genomic datasets."}
{"_id": "q_9742", "text": "Shortcut to do a single position filter on genomic datasets."}
{"_id": "q_9743", "text": "Returns a dictionary with the requested facets.\n\n The facets function supports string args, and keyword\n args.\n\n q.facets('field_1', 'field_2') will return facets for\n field_1 and field_2.\n q.facets(field_1={'limit': 0}, field_2={'limit': 10})\n will return all facets for field_1 and 10 facets for field_2."}
{"_id": "q_9744", "text": "Allows the Query object to be an iterable.\n\n This method will iterate through a cached result set\n and fetch successive pages as required.\n\n A `StopIteration` exception will be raised when there aren't\n any more results available or when the requested result\n slice range or limit has been fetched.\n\n Returns: The next result."}
{"_id": "q_9745", "text": "Executes a query. Additional query parameters can be passed\n as keyword arguments.\n\n Returns: The request parameters and the raw query response."}
{"_id": "q_9746", "text": "Migrate the data from the Query to a target dataset.\n\n Valid optional kwargs include:\n\n * target_fields\n * include_errors\n * validation_params\n * metadata\n * commit_mode"}
{"_id": "q_9747", "text": "Main entry point for SolveBio CLI"}
{"_id": "q_9748", "text": "Used to create a new object from an HTTP response"}
{"_id": "q_9749", "text": "Issues an HTTP Request across the wire via the Python requests\n library.\n\n Parameters\n ----------\n\n method : str\n an HTTP method: GET, PUT, POST, DELETE, ...\n\n url : str\n the place to connect to. If the url doesn't start\n with a protocol (https:// or http://), we'll slap\n solvebio.api_host in the front.\n\n allow_redirects: bool, optional\n set *False* and we won't follow any redirects\n\n headers: dict, optional\n\n Custom headers can be provided here; generally though this\n will be set correctly by default dependent on the\n method type. If the content type is JSON, we'll\n JSON-encode params.\n\n param : dict, optional\n passed as *params* in the requests.request\n\n timeout : int, optional\n timeout value in seconds for the request\n\n raw: bool, optional\n unless *True* the response is encoded to JSON\n\n files: file\n File content in the form of a file handle which is to be\n uploaded. Files are passed in POST requests\n\n Returns\n -------\n response object. If *raw* is not *True* and the\n response is valid, the object will be JSON encoded. Otherwise\n it will be the requests.Response object."}
{"_id": "q_9750", "text": "Get Task child object class"}
{"_id": "q_9751", "text": "Cancel a task"}
{"_id": "q_9752", "text": "Specialized INFO field parser for SnpEff ANN fields.\n Requires self._snpeff_ann_fields to be set."}
{"_id": "q_9753", "text": "Return a parsed dictionary for JSON."}
{"_id": "q_9754", "text": "Dump the class data in the format of a .netrc file."}
{"_id": "q_9755", "text": "Format a value according to its type.\n\n Unicode is supported:\n\n >>> hrow = ['\\u0431\\u0443\\u043a\\u0432\\u0430', \\\n '\\u0446\\u0438\\u0444\\u0440\\u0430'] ; \\\n tbl = [['\\u0430\\u0437', 2], ['\\u0431\\u0443\\u043a\\u0438', 4]] ; \\\n good_result = '\\\\u0431\\\\u0443\\\\u043a\\\\u0432\\\\u0430 \\\n \\\\u0446\\\\u0438\\\\u0444\\\\u0440\\\\u0430\\\\n-------\\\n -------\\\\n\\\\u0430\\\\u0437 \\\n 2\\\\n\\\\u0431\\\\u0443\\\\u043a\\\\u0438 4' ; \\\n tabulate(tbl, headers=hrow) == good_result\n True"}
{"_id": "q_9756", "text": "Return a string which represents a row of data cells."}
{"_id": "q_9757", "text": "Return a string which represents a horizontal line."}
{"_id": "q_9758", "text": "Produce a plain-text representation of the table."}
{"_id": "q_9759", "text": "Given a folder or file, upload all the folders and files contained\n within it, skipping ones that already exist on the remote."}
{"_id": "q_9760", "text": "Helper method to return a full path from a full or partial path.\n\n If no domain, assumes user's account domain\n If the vault is \"~\", assumes personal vault.\n\n Valid vault paths include:\n\n domain:vault\n domain:vault:/path\n domain:vault/path\n vault:/path\n vault\n ~/\n\n Invalid vault paths include:\n\n /vault/\n /path\n /\n :/\n\n Does not allow overrides for any vault path components."}
{"_id": "q_9761", "text": "Validate SolveBio API host url.\n\n Valid urls must not be empty and\n must contain either HTTP or HTTPS scheme."}
{"_id": "q_9762", "text": "Evaluates the expression with the provided context and format."}
{"_id": "q_9763", "text": "Set the playback rate of the video as a multiple of the default playback speed\n\n Examples:\n >>> player.set_rate(2)\n # Will play twice as fast as normal speed\n >>> player.set_rate(0.5)\n # Will play half speed"}
{"_id": "q_9764", "text": "Pause playback if currently playing, otherwise start playing if currently paused."}
{"_id": "q_9765", "text": "Set the video to playback position to `position` seconds from the start of the video\n\n Args:\n position (float): The position in seconds."}
{"_id": "q_9766", "text": "Set the video position on the screen\n\n Args:\n x1 (int): Top left x coordinate (px)\n y1 (int): Top left y coordinate (px)\n x2 (int): Bottom right x coordinate (px)\n y2 (int): Bottom right y coordinate (px)"}
{"_id": "q_9767", "text": "Play the video asynchronously returning control immediately to the calling code"}
{"_id": "q_9768", "text": "Quit the player, blocking until the process has died"}
{"_id": "q_9769", "text": "Can edit this object"}
{"_id": "q_9770", "text": "Can delete this object"}
{"_id": "q_9771", "text": "Set the form fields for every key in the form_field_dict.\n\n Params:\n form_field_dict -- a dictionary created by get_form_field_dict\n parent_key -- the key for the previous key in the recursive call\n field_type -- used to determine what kind of field we are setting"}
{"_id": "q_9772", "text": "Given field_key will return value held at self.model_instance. If\n model_instance has not been provided will return None."}
{"_id": "q_9773", "text": "Given any number of lists and strings, will join them in order as one\n string separated by the sep kwarg. sep defaults to u\"_\".\n\n Add exclude_last_string=True as a kwarg to exclude the last item in a\n given string after being split by sep. Note if you only have one word\n in your string you can end up getting an empty string.\n\n Example uses:\n\n >>> from mongonaut.forms.form_utils import make_key\n >>> make_key('hi', 'my', 'friend')\n >>> u'hi_my_friend'\n\n >>> make_key('hi', 'my', 'friend', sep='i')\n >>> 'hiimyifriend'\n\n >>> make_key('hi', 'my', 'friend',['this', 'be', 'what'], sep='i')\n >>> 'hiimyifriendithisibeiwhat'\n\n >>> make_key('hi', 'my', 'friend',['this', 'be', 'what'])\n >>> u'hi_my_friend_this_be_what'"}
{"_id": "q_9774", "text": "Choose which widget to display for a field."}
{"_id": "q_9775", "text": "Set attributes on the display widget."}
{"_id": "q_9776", "text": "Gets the default form field for a mongoenigne field."}
{"_id": "q_9777", "text": "Injects data into the context to replicate CBV ListView."}
{"_id": "q_9778", "text": "Creates new mongoengine records."}
{"_id": "q_9779", "text": "Sets a number of commonly used attributes"}
{"_id": "q_9780", "text": "As long as the form is set on the view this method will validate the form\n and save the submitted data. Only call this if you are posting data.\n The given success_message will be used with the django messages framework\n if the posted data successfully submits."}
{"_id": "q_9781", "text": "Given the form_key will evaluate the document and set values correctly for\n the document given."}
{"_id": "q_9782", "text": "1. Figures out what value the list ought to have\n 2. Sets the list"}
{"_id": "q_9783", "text": "Get the time with TZ enabled"}
{"_id": "q_9784", "text": "Get the time without TZ enabled"}
{"_id": "q_9785", "text": "Check Validity of an IP address"}
{"_id": "q_9786", "text": "Check if IP is local"}
{"_id": "q_9787", "text": "If we can get a valid IP from the request,\n look up that address in the database to get the appropriate timezone\n and activate it.\n\n Else, use the default."}
{"_id": "q_9788", "text": "Set the default format name.\n\n :param str format_name: The display format name.\n :raises ValueError: if the format is not recognized."}
{"_id": "q_9789", "text": "Register a new output formatter.\n\n :param str format_name: The name of the format.\n :param callable handler: The function that formats the data.\n :param tuple preprocessors: The preprocessors to call before\n formatting.\n :param dict kwargs: Keys/values for keyword argument defaults."}
{"_id": "q_9790", "text": "Format the headers and data using a specific formatter.\n\n *format_name* must be a supported formatter (see\n :attr:`supported_formats`).\n\n :param iterable data: An :term:`iterable` (e.g. list) of rows.\n :param iterable headers: The column headers.\n :param str format_name: The display format to use (optional, if the\n :class:`TabularOutputFormatter` object has a default format set).\n :param tuple preprocessors: Additional preprocessors to call before\n any formatter preprocessors.\n :param \\*\\*kwargs: Optional arguments for the formatter.\n :return: The formatted data.\n :rtype: str\n :raises ValueError: If the *format_name* is not recognized."}
{"_id": "q_9791", "text": "Wrap tabulate inside a function for TabularOutputFormatter."}
{"_id": "q_9792", "text": "Returns the config folder for the application. The default behavior\n is to return whatever is most appropriate for the operating system.\n\n For an example application called ``\"My App\"`` by ``\"Acme\"``,\n something like the following folders could be returned:\n\n macOS (non-XDG):\n ``~/Library/Application Support/My App``\n Mac OS X (XDG):\n ``~/.config/my-app``\n Unix:\n ``~/.config/my-app``\n Windows 7 (roaming):\n ``C:\\\\Users\\<user>\\AppData\\Roaming\\Acme\\My App``\n Windows 7 (not roaming):\n ``C:\\\\Users\\<user>\\AppData\\Local\\Acme\\My App``\n\n :param app_name: the application name. This should be properly capitalized\n and can contain whitespace.\n :param app_author: The app author's name (or company). This should be\n properly capitalized and can contain whitespace.\n :param roaming: controls if the folder should be roaming or not on Windows.\n Has no effect on non-Windows systems.\n :param force_xdg: if this is set to `True`, then on macOS the XDG Base\n Directory Specification will be followed. Has no effect\n on non-macOS systems."}
{"_id": "q_9793", "text": "Read the default config file.\n\n :raises DefaultConfigValidationError: There was a validation error with\n the *default* file."}
{"_id": "q_9794", "text": "Get the absolute path to the user config file."}
{"_id": "q_9795", "text": "Get a list of absolute paths to the system config files."}
{"_id": "q_9796", "text": "Get a list of absolute paths to the additional config files."}
{"_id": "q_9797", "text": "Write the default config to the user's config file.\n\n :param bool overwrite: Write over an existing config if it exists."}
{"_id": "q_9798", "text": "Read a list of config files.\n\n :param iterable files: An iterable (e.g. list) of files to read."}
{"_id": "q_9799", "text": "Apply command-line options."}
{"_id": "q_9800", "text": "Apply a command-line option."}
{"_id": "q_9801", "text": "Set the default options."}
{"_id": "q_9802", "text": "Run the linter."}
{"_id": "q_9803", "text": "Generate and view the documentation."}
{"_id": "q_9804", "text": "Truncate very long strings. Only needed for tabular\n representation, because trying to tabulate very long data\n is problematic in terms of performance, and does not make any\n sense visually.\n\n :param iterable data: An :term:`iterable` (e.g. list) of rows.\n :param iterable headers: The column headers.\n :param int max_field_width: Width to truncate field for display\n :return: The processed data and headers.\n :rtype: tuple"}
{"_id": "q_9805", "text": "Format numbers according to a format specification.\n\n This uses Python's format specification to format numbers of the following\n types: :class:`int`, :class:`py2:long` (Python 2), :class:`float`, and\n :class:`~decimal.Decimal`. See the :ref:`python:formatspec` for more\n information about the format strings.\n\n .. NOTE::\n A column is only formatted if all of its values are the same type\n (except for :data:`None`).\n\n :param iterable data: An :term:`iterable` (e.g. list) of rows.\n :param iterable headers: The column headers.\n :param iterable column_types: The columns' type objects (e.g. int or float).\n :param str integer_format: The format string to use for integer columns.\n :param str float_format: The format string to use for float columns.\n :return: The processed data and headers.\n :rtype: tuple"}
{"_id": "q_9806", "text": "Format a row."}
{"_id": "q_9807", "text": "Wrap vertical table in a function for TabularOutputFormatter."}
{"_id": "q_9808", "text": "This is the most important method"}
{"_id": "q_9809", "text": "This method process the filters"}
{"_id": "q_9810", "text": "Remember the recipients."}
{"_id": "q_9811", "text": "Parse message headers, then remove BCC header."}
{"_id": "q_9812", "text": "Add boundary parameter to multipart message if they are not present."}
{"_id": "q_9813", "text": "Convert a message into a multipart message."}
{"_id": "q_9814", "text": "Convert markdown in message text to HTML."}
{"_id": "q_9815", "text": "Send email message using Python SMTP library."}
{"_id": "q_9816", "text": "Create sample template email and database."}
{"_id": "q_9817", "text": "Command line interface."}
{"_id": "q_9818", "text": "A decorator for defining tail-call optimized functions.\n\n Example\n -------\n\n @with_continuations()\n def factorial(n, k, self=None):\n return self(n-1, k*n) if n > 1 else k\n \n @with_continuations()\n def identity(x, self=None):\n return x\n \n @with_continuations(out=identity)\n def factorial2(n, k, self=None, out=None):\n return self(n-1, k*n) if n > 1 else out(k)\n\n print(factorial(7,1))\n print(factorial2(7,1))"}
{"_id": "q_9819", "text": "create the base url for the api\n\n Parameters\n ----------\n base_url : str\n format of the base_url using {api} and {version}\n api : str\n name of the api to use\n version : str\n version of the api\n\n Returns\n -------\n str\n the base url of the api you want to use"}
{"_id": "q_9820", "text": "Get the tasks attached to the instance\n\n Returns\n -------\n list\n List of tasks (:class:`asyncio.Task`)"}
{"_id": "q_9821", "text": "Run the tasks attached to the instance"}
{"_id": "q_9822", "text": "properly close the client"}
{"_id": "q_9823", "text": "upload media in chunks\n\n Parameters\n ----------\n media : file object\n a file object of the media\n media_size : int\n size of the media\n path : str, optional\n filename of the media\n media_type : str, optional\n mime type of the media\n media_category : str, optional\n twitter media category, must be used with ``media_type``\n chunk_size : int, optional\n size of a chunk in bytes\n params : dict, optional\n additional parameters of the request\n\n Returns\n -------\n .data_processing.PeonyResponse\n Response of the request"}
{"_id": "q_9824", "text": "Take the binding predictions returned by IEDB's web API\n and parse them into a DataFrame\n\n Expect response to look like:\n allele seq_num start end length peptide ic50 percentile_rank\n HLA-A*01:01 1 2 10 9 LYNTVATLY 2145.70 3.7\n HLA-A*01:01 1 5 13 9 TVATLYCVH 2216.49 3.9\n HLA-A*01:01 1 7 15 9 ATLYCVHQR 2635.42 5.1\n HLA-A*01:01 1 4 12 9 NTVATLYCV 6829.04 20\n HLA-A*01:01 1 1 9 9 SLYNTVATL 8032.38 24\n HLA-A*01:01 1 8 16 9 TLYCVHQRI 8853.90 26\n HLA-A*01:01 1 3 11 9 YNTVATLYC 9865.62 29\n HLA-A*01:01 1 6 14 9 VATLYCVHQ 27575.71 58\n HLA-A*01:01 1 10 18 9 YCVHQRIDV 48929.64 74\n HLA-A*01:01 1 9 17 9 LYCVHQRID 50000.00 75"}
{"_id": "q_9825", "text": "Hackish way to get the arguments of a function\n\n Parameters\n ----------\n func : callable\n Function to get the arguments from\n skip : int, optional\n Arguments to skip, defaults to 0 set it to 1 to skip the\n ``self`` argument of a method.\n\n Returns\n -------\n tuple\n Function's arguments"}
{"_id": "q_9826", "text": "log an exception and its traceback on the logger defined\n\n Parameters\n ----------\n msg : str, optional\n A message to add to the error\n exc_info : tuple\n Information about the current exception\n logger : logging.Logger\n logger to use"}
{"_id": "q_9827", "text": "Get all the file's metadata and read any kind of file object\n\n Parameters\n ----------\n data : bytes\n first bytes of the file (the mimetype should be guessed from the\n file headers)\n path : str, optional\n path to the file\n\n Returns\n -------\n str\n The mimetype of the media\n str\n The category of the media on Twitter"}
{"_id": "q_9828", "text": "Get the size of a file\n\n Parameters\n ----------\n media : file object\n The file object of the media\n\n Returns\n -------\n int\n The size of the file"}
{"_id": "q_9829", "text": "Returns new BindingPrediction with updated fields"}
{"_id": "q_9830", "text": "Get the data from the response"}
{"_id": "q_9831", "text": "Try to fill the gaps and strip last tweet from the response\n if its id is that of the first tweet of the last response\n\n Parameters\n ----------\n data : list\n The response data"}
{"_id": "q_9832", "text": "Get a temporary oauth token\n\n Parameters\n ----------\n consumer_key : str\n Your consumer key\n consumer_secret : str\n Your consumer secret\n callback_uri : str, optional\n Callback uri, defaults to 'oob'\n\n Returns\n -------\n dict\n Temporary tokens"}
{"_id": "q_9833", "text": "get the access token of the user\n\n Parameters\n ----------\n consumer_key : str\n Your consumer key\n consumer_secret : str\n Your consumer secret\n oauth_token : str\n OAuth token from :func:`get_oauth_token`\n oauth_token_secret : str\n OAuth token secret from :func:`get_oauth_token`\n oauth_verifier : str\n OAuth verifier from :func:`get_oauth_verifier`\n\n Returns\n -------\n dict\n Access tokens"}
{"_id": "q_9834", "text": "parse the responses containing the tokens\n\n Parameters\n ----------\n response : str\n The response containing the tokens\n\n Returns\n -------\n dict\n The parsed tokens"}
{"_id": "q_9835", "text": "Return netChop predictions for each position in each sequence.\n\n Parameters\n -----------\n sequences : list of string\n Amino acid sequences to predict cleavage for\n\n Returns\n -----------\n list of list of float\n\n The i'th list corresponds to the i'th sequence. Each list gives\n the cleavage probability for each position in the sequence."}
{"_id": "q_9836", "text": "This function wraps NetMHC3 and NetMHC4 to automatically detect which class\n to use. Currently based on running the '-h' command and looking for\n discriminating substrings between the versions."}
{"_id": "q_9837", "text": "Predict MHC affinity for peptides."}
{"_id": "q_9838", "text": "Given a list of HLA alleles and an optional list of valid\n HLA alleles, return a set of alleles that we will pass into\n the MHC binding predictor."}
{"_id": "q_9839", "text": "Create the connection\n\n Returns\n -------\n self\n\n Raises\n ------\n exception.PeonyException\n On a response status in 4xx that are not status 420 or 429\n Also on statuses in 1xx or 3xx since this should not be the status\n received here"}
{"_id": "q_9840", "text": "decorator to handle commands with prefixes\n\n Parameters\n ----------\n prefix : str\n the prefix of the command\n strict : bool, optional\n If set to True the command must be at the beginning\n of the message. Defaults to False.\n\n Returns\n -------\n function\n a decorator that returns an :class:`EventHandler` instance"}
{"_id": "q_9841", "text": "set the environment timezone to the timezone\n set in your twitter settings"}
{"_id": "q_9842", "text": "Given a list whose first element is a command name, followed by arguments,\n execute it and show timing info."}
{"_id": "q_9843", "text": "Run multiple shell commands in parallel, write each of their\n stdout output to files associated with each command.\n\n Parameters\n ----------\n multiple_args_dict : dict\n A dictionary whose keys are files and values are args list.\n Run each args list as a subprocess and write stdout to the\n corresponding file.\n\n print_commands : bool\n Print shell commands before running them.\n\n process_limit : int\n Limit the number of concurrent processes to this number. 0\n if there is no limit, -1 to use max number of processors\n\n polling_freq : int\n Number of seconds between checking for done processes, if\n we have a process limit"}
{"_id": "q_9844", "text": "read the data of the response\n\n Parameters\n ----------\n response : aiohttp.ClientResponse\n response\n loads : callable\n json loads function\n encoding : :obj:`str`, optional\n character encoding of the response, if set to None\n aiohttp should guess the right encoding\n\n Returns\n -------\n :obj:`bytes`, :obj:`str`, :obj:`dict` or :obj:`list`\n the data returned depends on the response"}
{"_id": "q_9845", "text": "Check the permissions of the user requesting a command\n\n Parameters\n ----------\n data : dict\n message data\n command_permissions : dict\n permissions of the command, contains all the roles as key and users\n with these permissions as values\n command : function\n the command that is run\n permissions : tuple or list\n a list of permissions for the command\n\n Returns\n -------\n bool\n True if the user has the right permissions, False otherwise"}
{"_id": "q_9846", "text": "Get the response data if possible and raise an exception"}
{"_id": "q_9847", "text": "Decorator to associate a code to an exception"}
{"_id": "q_9848", "text": "prepare all the arguments for the request\n\n Parameters\n ----------\n method : str\n HTTP method used by the request\n url : str\n The url to request\n headers : dict, optional\n Additional headers\n proxy : str\n proxy of the request\n skip_params : bool\n Don't use the parameters to sign the request\n\n Returns\n -------\n dict\n Parameters of the request correctly formatted"}
{"_id": "q_9849", "text": "Make sure the user doesn't override the Authorization header"}
{"_id": "q_9850", "text": "Raise error for keys that are not strings\n and add the prefix if it is missing"}
{"_id": "q_9851", "text": "Analyze the text to get the right function\n\n Parameters\n ----------\n text : str\n The text that could call a function"}
{"_id": "q_9852", "text": "Copy template and substitute template strings\n\n File `template_file` is copied to `dst_file`. Then, each template variable\n is replaced by a value. Template variables are of the form\n\n {{val}}\n\n Example:\n\n Contents of template_file:\n\n VAR1={{val1}}\n VAR2={{val2}}\n VAR3={{val3}}\n\n render_template(template_file, output_file, val1=\"hello\", val2=\"world\")\n\n Contents of output_file:\n\n VAR1=hello\n VAR2=world\n VAR3={{val3}}\n\n :param template_file: Path to the template file.\n :param dst_file: Path to the destination file.\n :param kwargs: Keys correspond to template variables.\n :return:"}
{"_id": "q_9853", "text": "is the type a numerical value?\n\n :param type: PKCS#11 type like `CKA_CERTIFICATE_TYPE`\n :rtype: bool"}
{"_id": "q_9854", "text": "is the type a boolean value?\n\n :param type: PKCS#11 type like `CKA_ALWAYS_SENSITIVE`\n :rtype: bool"}
{"_id": "q_9855", "text": "is the type a byte array value?\n\n :param type: PKCS#11 type like `CKA_MODULUS`\n :rtype: bool"}
{"_id": "q_9856", "text": "find the objects matching the template pattern\n\n :param template: list of attributes tuples (attribute,value).\n The default value is () and all the objects are returned\n :type template: list\n :return: a list of object ids\n :rtype: list"}
{"_id": "q_9857", "text": "A generator for getting all of the edges without consuming extra\n memory."}
{"_id": "q_9858", "text": "Checks whether there are within-group edges or not."}
{"_id": "q_9859", "text": "Renders the axis."}
{"_id": "q_9860", "text": "Plots nodes to screen."}
{"_id": "q_9861", "text": "Computes the theta along which a group's nodes are aligned."}
{"_id": "q_9862", "text": "Identifies the group for which a node belongs to."}
{"_id": "q_9863", "text": "Finds the index of the node in the sorted list."}
{"_id": "q_9864", "text": "Computes the radial position of the node."}
{"_id": "q_9865", "text": "Convenience function to find the node's theta angle."}
{"_id": "q_9866", "text": "Draws all of the edges in the graph."}
{"_id": "q_9867", "text": "The master function that is called that draws everything."}
{"_id": "q_9868", "text": "Get all publications."}
{"_id": "q_9869", "text": "Get a publication list."}
{"_id": "q_9870", "text": "Takes a string in BibTex format and returns a list of BibTex entries, where\n\teach entry is a dictionary containing the entries' key-value pairs.\n\n\t@type string: string\n\t@param string: bibliography in BibTex format\n\n\t@rtype: list\n\t@return: a list of dictionaries representing a bibliography"}
{"_id": "q_9871", "text": "Swap the positions of this object with a reference object."}
{"_id": "q_9872", "text": "Move object to a certain position, updating all affected objects to move accordingly up or down."}
{"_id": "q_9873", "text": "Move this object below the referenced object."}
{"_id": "q_9874", "text": "Move this object to the top of the ordered stack."}
{"_id": "q_9875", "text": "Load custom links and files from database and attach to publications."}
{"_id": "q_9876", "text": "return tree order"}
{"_id": "q_9877", "text": "finds loci with sufficient sampling for this test"}
{"_id": "q_9878", "text": "Write nexus to tmpfile, runs phyml tree inference, and parses\n and returns the resulting tree."}
{"_id": "q_9879", "text": "return a toyplot barplot of the results table."}
{"_id": "q_9880", "text": "returns a copy of the pca analysis object"}
{"_id": "q_9881", "text": "A function to build an input file for the program migrate from an ipyrad \n .loci file, and a dictionary grouping Samples into populations. \n\n Parameters:\n -----------\n name: (str)\n The name prefix for the migrate formatted output file.\n locifile: (str)\n The path to the .loci file produced by ipyrad. \n popdict: (dict)\n A Python dictionary grouping Samples into Populations. \n \n Examples:\n ---------\n You can create the population dictionary by hand, and pass in the path \n to your .loci file as a string. \n >> popdict = {'A': ['a', 'b', 'c'], 'B': ['d', 'e', 'f']}\n >> loci2migrate(\"outfile.migrate\", \"./mydata.loci\", popdict)\n\n Or, if you load your ipyrad.Assembly object from its JSON file, you can\n access the loci file path and population information from there directly. \n >> data = ip.load_json(\"mydata.json\")\n >> loci2migrate(\"outfile.migrate\", data.outfiles.loci, data.populations)"}
{"_id": "q_9882", "text": "updates dictionary with the next .5M reads from the super long string \n phylip file. Makes for faster reading."}
{"_id": "q_9883", "text": "Index the reference sequence, unless it already exists. Also make a mapping\n of scaffolds to index numbers for later use in steps 5-6."}
{"_id": "q_9884", "text": "1. Run bedtools to get all overlapping regions\n 2. Parse out reads from regions using pysam and dump into chunk files. \n We measure it out to create 10 chunk files per sample. \n 3. If we really wanted to speed this up, though it is pretty fast already, \n we could parallelize it since we can easily break the regions into \n a list of chunks."}
{"_id": "q_9885", "text": "Run bedtools to get all overlapping regions. Pass this list into the func\n 'get_overlapping_reads' which will write fastq chunks to the clust.gz file.\n\n 1) Run bedtools merge to get a list of all contiguous blocks of bases\n in the reference sequence where one or more of our reads overlap.\n The output will look like this:\n 1 45230754 45230783\n 1 74956568 74956596\n ...\n 1 116202035 116202060"}
{"_id": "q_9886", "text": "Get all contiguous genomic regions with one or more overlapping\n reads. This is the shell command we'll eventually run\n\n bedtools bamtobed -i 1A_0.sorted.bam | bedtools merge [-d 100]\n -i <input_bam> : specifies the input file to bed'ize\n -d <int> : For PE set max distance between reads"}
{"_id": "q_9887", "text": "create some file handles for refmapping"}
{"_id": "q_9888", "text": "filters for loci with >= N PIS"}
{"_id": "q_9889", "text": "function that takes a dictionary mapping names to sequences, \n and a locus number, and writes it as a NEXUS file with a mrbayes \n analysis block given a set of mcmc arguments."}
{"_id": "q_9890", "text": "Read in sample names from a plain text file. This is a convenience\n function for branching so if you have tons of sample names you can\n pass in a file rather than having to set all the names at the command\n line."}
{"_id": "q_9891", "text": "fast line counter. Used to quickly sum number of input reads when running\n link_fastqs to append files."}
{"_id": "q_9892", "text": "faster line counter"}
{"_id": "q_9893", "text": "Returns a data frame with Sample files. Not very readable..."}
{"_id": "q_9894", "text": "Returns a data frame with Sample stats for each step"}
{"_id": "q_9895", "text": "pretty prints params if called as a function"}
{"_id": "q_9896", "text": "Set a parameter to a new value. Raises error if newvalue is wrong type.\n\n Note\n ----\n Use [Assembly].get_params() to see the parameter values currently\n linked to the Assembly object.\n\n Parameters\n ----------\n param : int or str\n The index (e.g., 1) or string name (e.g., \"project_dir\")\n for the parameter that will be changed.\n\n newvalue : int, str, or tuple\n The new value for the parameter selected for `param`. Use\n `ipyrad.get_params_info()` to get further information about\n a given parameter. If the wrong type is entered for newvalue\n (e.g., a str when it should be an int), an error will be raised.\n Further information about each parameter is also available\n in the documentation.\n\n Examples\n --------\n ## param 'project_dir' takes only a str as input\n [Assembly].set_params('project_dir', 'new_directory')\n\n ## param 'restriction_overhang' must be a tuple or str, if str it is \n ## converted to a tuple with the second entry empty.\n [Assembly].set_params('restriction_overhang', ('CTGCAG', 'CCGG')\n\n ## param 'max_shared_Hs_locus' can be an int or a float:\n [Assembly].set_params('max_shared_Hs_locus', 0.25)"}
{"_id": "q_9897", "text": "Returns a copy of the Assembly object. Does not allow Assembly\n object names to be replicated in namespace or path."}
{"_id": "q_9898", "text": "hidden wrapped function to start step 1"}
{"_id": "q_9899", "text": "hidden wrapped function to start step 2"}
{"_id": "q_9900", "text": "hidden wrapped function to start step 4"}
{"_id": "q_9901", "text": "hidden wrapped function to start step 5"}
{"_id": "q_9902", "text": "Hidden function to start Step 6."}
{"_id": "q_9903", "text": "Return a list of samples that are actually ready for the next step.\n Each step runs this prior to calling run, makes it easier to\n centralize and normalize how each step is checking sample states.\n mystep is the state produced by the current step."}
{"_id": "q_9904", "text": "returns the fastest func given data & longbar"}
{"_id": "q_9905", "text": "Writes sorted data 'dsort dict' to a tmp files"}
{"_id": "q_9906", "text": "Collate temp fastq files in tmp-dir into 1 gzipped sample."}
{"_id": "q_9907", "text": "Estimate a reasonable optim value by grabbing a chunk of sequences, \n decompressing and counting them, to estimate the full file size."}
{"_id": "q_9908", "text": "cleanup func for step 1"}
{"_id": "q_9909", "text": "fill a matrix with pairwise data sharing"}
{"_id": "q_9910", "text": "Get the param name from the dict index value."}
{"_id": "q_9911", "text": "save to json."}
{"_id": "q_9912", "text": "Save assembly and samples as json"}
{"_id": "q_9913", "text": "function to encode json string"}
{"_id": "q_9914", "text": "a subfunction for summarizing results"}
{"_id": "q_9915", "text": "Load existing results files for an object with this workdir and name. \n This does NOT reload the parameter settings for the object..."}
{"_id": "q_9916", "text": "Prints a summarized table of results from replicate runs, or,\n if individual_result=True, then returns a list of separate\n dataframes for each replicate run."}
{"_id": "q_9917", "text": "Sends the cluster bits to nprocessors for muscle alignment. They return\n with indel.h5 handles to be concatenated into a joint h5."}
{"_id": "q_9918", "text": "concatenates sorted aligned cluster tmpfiles and removes them."}
{"_id": "q_9919", "text": "build tmp h5 arrays that can return quick access for nloci"}
{"_id": "q_9920", "text": "return nloci from the tmp h5 arr"}
{"_id": "q_9921", "text": "uses bash commands to quickly count N seeds from utemp file"}
{"_id": "q_9922", "text": "sort seeds from cluster results"}
{"_id": "q_9923", "text": "A subfunction of build_clustbits to allow progress tracking. This func\n splits the unaligned clusters into bits for aligning on separate cores."}
{"_id": "q_9924", "text": "Function to remove older files. This is called either in substep 1 or after\n the final substep so that tempfiles are retained for restarting interrupted\n jobs until we're sure they're no longer needed."}
{"_id": "q_9925", "text": "cleanup for assembly object"}
{"_id": "q_9926", "text": "Filter for samples that are already finished with this step, allow others\n to run, pass them to parallel client function to filter with cutadapt."}
{"_id": "q_9927", "text": "concatenate if multiple input files for a single samples"}
{"_id": "q_9928", "text": "sends fastq files to cutadapt"}
{"_id": "q_9929", "text": "If multiple fastq files were appended into the list of fastqs for samples\n then we merge them here before proceeding."}
{"_id": "q_9930", "text": "Convert vcf from step6 to .loci format to facilitate downstream format conversion"}
{"_id": "q_9931", "text": "Function for importing a vcf file into loci format. Arguments\n are the input vcffile and the loci file to write out."}
{"_id": "q_9932", "text": "A function to find 2 engines per hostname on the ipyclient.\n We'll assume that the CPUs are hyperthreaded, which is why\n we grab two. If they are not then no foul. Two multi-threaded\n jobs will be run on each of the 2 engines per host."}
{"_id": "q_9933", "text": "compute stats for stats file and NHX tree features"}
{"_id": "q_9934", "text": "random sampler for equal_splits func"}
{"_id": "q_9935", "text": "groups together several numba compiled funcs"}
{"_id": "q_9936", "text": "The workhorse function. Not numba."}
{"_id": "q_9937", "text": "used in bootstrap resampling without a map file"}
{"_id": "q_9938", "text": "get shape of new bootstrap resampled locus array"}
{"_id": "q_9939", "text": "fills the new bootstrap resampled array"}
{"_id": "q_9940", "text": "converts unicode to utf-8 when reading in json files"}
{"_id": "q_9941", "text": "parse sample names from the sequence file"}
{"_id": "q_9942", "text": "runs quartet max-cut on a quartets file"}
{"_id": "q_9943", "text": "Makes a reduced array that excludes quartets with no information and \n prints the quartets and weights to a file formatted for wQMC"}
{"_id": "q_9944", "text": "renames newick from numbers to sample names"}
{"_id": "q_9945", "text": "write final tree files"}
{"_id": "q_9946", "text": "save a JSON file representation of Tetrad Class for checkpoint"}
{"_id": "q_9947", "text": "inputs results from workers into hdf5 array"}
{"_id": "q_9948", "text": "Get the row index of samples that are included. If samples are in the\n 'excluded' they were already filtered out of 'samples' during _get_samples."}
{"_id": "q_9949", "text": "pads names for loci output"}
{"_id": "q_9950", "text": "Function from make_loci to apply to chunks. smask is sample mask."}
{"_id": "q_9951", "text": "enter funcs for SE or merged data"}
{"_id": "q_9952", "text": "Create database file for storing final filtered snps data as hdf5 array.\n Copies splits and duplicates info from clust_database to database."}
{"_id": "q_9953", "text": "Used to count the number of unique bases in a site for snpstring."}
{"_id": "q_9954", "text": "write the bisnp string"}
{"_id": "q_9955", "text": "Write STRUCTURE format for all SNPs and unlinked SNPs"}
{"_id": "q_9956", "text": "Sorts, concatenates, and gzips VCF chunks. Also cleans up chunks."}
{"_id": "q_9957", "text": "Returns the most common base at each site in order."}
{"_id": "q_9958", "text": "collapse outgroup in ete Tree for easier viewing"}
{"_id": "q_9959", "text": "plot the tree using toyplot.graph. \n\n Parameters:\n -----------\n show_tip_labels: bool\n Show tip names from tree.\n use_edge_lengths: bool\n Use edge lengths from newick tree.\n show_node_support: bool\n Show support values at nodes using a set of default \n options. \n\n ..."}
{"_id": "q_9960", "text": "iterate over clustS files to get data"}
{"_id": "q_9961", "text": "checks for too many internal indels in muscle aligned clusters"}
{"_id": "q_9962", "text": "sets up directories for step3 data"}
{"_id": "q_9963", "text": "build a directed acyclic graph describing jobs to be run in order."}
{"_id": "q_9964", "text": "makes plot to help visualize the DAG setup. For developers only."}
{"_id": "q_9965", "text": "Blocks and prints progress for just the func being requested from a list\n of submitted engine jobs. Returns whether any of the jobs failed.\n\n func = str\n results = dict of asyncs"}
{"_id": "q_9966", "text": "if multiple fastq files were appended into the list of fastqs for samples\n then we merge them here before proceeding."}
{"_id": "q_9967", "text": "Calls vsearch for clustering. cov varies by data type, values were chosen\n based on experience, but could be edited by users"}
{"_id": "q_9968", "text": "Running on remote Engine. Refmaps, then merges, then dereplicates,\n then denovo clusters reads."}
{"_id": "q_9969", "text": "loads assembly or creates a new one and set its params from \n parsedict. Does not launch ipcluster."}
{"_id": "q_9970", "text": "return probability of base call"}
{"_id": "q_9971", "text": "store phased allele data for diploids"}
{"_id": "q_9972", "text": "checks if the sample should be run and passes the args"}
{"_id": "q_9973", "text": "check whether mindepth has changed, and thus whether clusters_hidepth\n needs to be recalculated, and get new maxlen for new highdepth clusts.\n if mindepth not changed then nothing changes."}
{"_id": "q_9974", "text": "calls chunk_clusters and tracks progress."}
{"_id": "q_9975", "text": "reports host and engine info for an ipyclient"}
{"_id": "q_9976", "text": "set the debug dict"}
{"_id": "q_9977", "text": "gets the right version of vsearch, muscle, and smalt\n depending on linux vs osx"}
{"_id": "q_9978", "text": "Returns nsets unique random quartet sets sampled from\n n-choose-k without replacement combinations."}
{"_id": "q_9979", "text": "set mkl thread limit and return old value so we can reset\n when finished."}
{"_id": "q_9980", "text": "get total number of quartets possible for a split"}
{"_id": "q_9981", "text": "get total number of quartets sampled for a split"}
{"_id": "q_9982", "text": "Runs quartet max-cut QMC on the quartets qdump file."}
{"_id": "q_9983", "text": "Enters results arrays into the HDF5 database."}
{"_id": "q_9984", "text": "Creates a client to view ipcluster engines for a given profile and \n returns it with at least one engine spun up and ready to go. If no \n engines are found after nwait amount of time then an error is raised.\n If engines==MPI it waits a bit longer to find engines. If the number\n of engines is set then it waits even longer to try to find that number\n of engines."}
{"_id": "q_9985", "text": "Memoization decorator for a function taking one or more arguments."}
{"_id": "q_9986", "text": "Returns both resolutions of a cut site that has an ambiguous base in\n it, else the single cut site"}
{"_id": "q_9987", "text": "takes diploid consensus alleles with phase data stored as a mixture\n of upper and lower case characters and splits it into 2 alleles"}
{"_id": "q_9988", "text": "returns a seq with complement. Preserves little n's for splitters."}
{"_id": "q_9989", "text": "returns reverse complement of a string"}
{"_id": "q_9990", "text": "prints a progress bar"}
{"_id": "q_9991", "text": "gets optimum threaded view of ids given the host setup"}
{"_id": "q_9992", "text": "Detects the number of CPUs on a system. This is better than asking\n ipyparallel since ipp has to wait for Engines to spin up."}
{"_id": "q_9993", "text": "make the subprocess call to structure"}
{"_id": "q_9994", "text": "private function to clumpp results"}
{"_id": "q_9995", "text": "Calculates Evanno method K value scores for a series\n of permuted clumpp results."}
{"_id": "q_9996", "text": "Calculates the Evanno table from results files for tests with \n K-values in the input list kvalues. The values lnPK, lnPPK,\n and deltaK are calculated. The max_var_multiplier arg can be used\n to exclude results files based on variance of the likelihood as a \n proxy for convergence. \n\n Parameters:\n -----------\n kvalues : list\n The list of K-values for which structure was run for this object.\n e.g., kvalues = [3, 4, 5]\n\n max_var_multiple: int\n A multiplier value to use as a filter for convergence of runs. \n Default=0=no filtering. As an example, if 10 replicates \n were run then the variance of the run with the minimum variance is\n used as a benchmark. If other runs have a variance that is N times \n greater then that run will be excluded. Remember, if replicate runs \n sampled different distributions of SNPs then it is not unexpected that \n they will have very different variances. However, you may still want \n to exclude runs with very high variance since they likely have \n not converged. \n\n quiet: bool\n Suppresses printed messages about convergence.\n\n Returns:\n --------\n table : pandas.DataFrame\n A data frame with LPK, LNPPK, and delta K. The latter is typically\n used to find the best fitting value of K. But be wary of over\n interpreting a single best K value."}
{"_id": "q_9997", "text": "parse an _f structure output file"}
{"_id": "q_9998", "text": "call the command as sps"}
{"_id": "q_9999", "text": "Submits raxml job to run. If no ipyclient object is provided then \n the function will block until the raxml run is finished. If an ipyclient\n is provided then the job is sent to a remote engine and an asynchronous \n result object is returned which can be queried or awaited until it finishes.\n\n Parameters\n -----------\n ipyclient:\n Not yet supported... \n quiet: \n suppress print statements\n force:\n overwrite existing results files with this job name. \n block:\n will block progress in notebook until job finishes, even if job\n is running on a remote ipyclient."}
{"_id": "q_10000", "text": "find binaries available"}
{"_id": "q_10001", "text": "return array of bootstrap D-stats"}
{"_id": "q_10002", "text": "returns a fields argument formatted as a list of strings.\n and doesn't allow zero."}
{"_id": "q_10003", "text": "Get player information\n\n Parameters\n ----------\n \\*tags: str\n Valid player tags. Minimum length: 3\n Valid characters: 0289PYLQGRJCUV\n \\*\\*keys: Optional[list] = None\n Filter which keys should be included in the\n response\n \\*\\*exclude: Optional[list] = None\n Filter which keys should be excluded from the\n response\n \\*\\*timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10004", "text": "Get clan information\n\n Parameters\n ----------\n \\*tags: str\n Valid clan tags. Minimum length: 3\n Valid characters: 0289PYLQGRJCUV\n \\*\\*keys: Optional[list] = None\n Filter which keys should be included in the\n response\n \\*\\*exclude: Optional[list] = None\n Filter which keys should be excluded from the\n response\n \\*\\*timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10005", "text": "Search for a tournament\n\n Parameters\n ----------\n name: str\n The name of the tournament\n \\*\\*keys: Optional[list] = None\n Filter which keys should be included in the\n response\n \\*\\*exclude: Optional[list] = None\n Filter which keys should be excluded from the\n response\n \\*\\*max: Optional[int] = None\n Limit the number of items returned in the response\n \\*\\*page: Optional[int] = None\n Works with max, the zero-based page of the\n items\n \\*\\*timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10006", "text": "Get a list of top clans by war\n\n location_id: Optional[str] = ''\n A location ID or '' (global)\n See https://github.com/RoyaleAPI/cr-api-data/blob/master/json/regions.json\n for a list of acceptable location IDs\n \\*\\*keys: Optional[list] = None\n Filter which keys should be included in the\n response\n \\*\\*exclude: Optional[list] = None\n Filter which keys should be excluded from the\n response\n \\*\\*max: Optional[int] = None\n Limit the number of items returned in the response\n \\*\\*page: Optional[int] = None\n Works with max, the zero-based page of the\n items\n \\*\\*timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10007", "text": "Get a list of most queried players\n\n \\*\\*keys: Optional[list] = None\n Filter which keys should be included in the\n response\n \\*\\*exclude: Optional[list] = None\n Filter which keys should be excluded from the\n response\n \\*\\*max: Optional[int] = None\n Limit the number of items returned in the response\n \\*\\*page: Optional[int] = None\n Works with max, the zero-based page of the\n items\n \\*\\*timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10008", "text": "Get a list of most queried tournaments\n\n \\*\\*keys: Optional[list] = None\n Filter which keys should be included in the\n response\n \\*\\*exclude: Optional[list] = None\n Filter which keys should be excluded from the\n response\n \\*\\*max: Optional[int] = None\n Limit the number of items returned in the response\n \\*\\*page: Optional[int] = None\n Works with max, the zero-based page of the\n items\n \\*\\*timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10009", "text": "Get a list of queried tournaments\n\n \\*\\*keys: Optional[list] = None\n Filter which keys should be included in the\n response\n \\*\\*exclude: Optional[list] = None\n Filter which keys should be excluded from the\n response\n \\*\\*max: Optional[int] = None\n Limit the number of items returned in the response\n \\*\\*page: Optional[int] = None\n Works with max, the zero-based page of the\n items\n \\*\\*timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10010", "text": "Get information about a player\n\n Parameters\n ----------\n tag: str\n A valid tournament tag. Minimum length: 3\n Valid characters: 0289PYLQGRJCUV\n timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10011", "text": "Get information about a clan\n\n Parameters\n ----------\n tag: str\n A valid tournament tag. Minimum length: 3\n Valid characters: 0289PYLQGRJCUV\n timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10012", "text": "Search for a clan. At least one\n of the filters must be present\n\n Parameters\n ----------\n name: Optional[str]\n The name of a clan\n (has to be at least 3 characters long)\n locationId: Optional[int]\n A location ID\n minMembers: Optional[int]\n The minimum member count\n of a clan\n maxMembers: Optional[int]\n The maximum member count\n of a clan\n minScore: Optional[int]\n The minimum trophy score of\n a clan\n \\*\\*limit: Optional[int] = None\n Limit the number of items returned in the response\n \\*\\*timeout: Optional[int] = None\n Custom timeout that overwrites Client.timeout"}
{"_id": "q_10013", "text": "Get the arena image URL\n\n Parameters\n ---------\n obj: official_api.models.BaseAttrDict\n An object that has the arena ID in ``.arena.id``\n Can be ``Profile`` for example.\n\n Returns None or str"}
{"_id": "q_10014", "text": "Form a deck link\n\n Parameters\n ---------\n deck: official_api.models.BaseAttrDict\n An object is a deck. Can be retrieved from ``Player.current_deck``\n\n Returns str"}
{"_id": "q_10015", "text": "Export gene panels to .bed like format.\n \n Specify any number of panels on the command line"}
{"_id": "q_10016", "text": "Given a weekday and a date, will increment the date until its\n weekday matches that of the given weekday, then that date is returned."}
{"_id": "q_10017", "text": "Add 'num' to the day and count that day until we reach end_repeat, or\n until we're outside of the current month, counting the days\n as we go along."}
{"_id": "q_10018", "text": "Starts from 'start' day and counts backwards until 'end' day.\n 'start' should be >= 'end'. If it's equal to, does nothing.\n If a day falls outside of end_repeat, it won't be counted."}
{"_id": "q_10019", "text": "Export causative variants for a collaborator\n\n Args:\n adapter(MongoAdapter)\n collaborator(str)\n document_id(str): Search for a specific variant\n case_id(str): Search causative variants for a case\n\n Yields:\n variant_obj(scout.Models.Variant): Variants marked as causative ordered by position."}
{"_id": "q_10020", "text": "Export mitochondrial variants for a case to create a MT excel report\n\n Args:\n variants(list): all MT variants for a case, sorted by position\n sample_id(str) : the id of a sample within the case\n\n Returns:\n document_lines(list): list of lines to include in the document"}
{"_id": "q_10021", "text": "Update a user in the database"}
{"_id": "q_10022", "text": "Display a list of STR variants."}
{"_id": "q_10023", "text": "Display a specific STR variant."}
{"_id": "q_10024", "text": "Show cancer variants overview."}
{"_id": "q_10025", "text": "ACMG classification form."}
{"_id": "q_10026", "text": "Calculate an ACMG classification from submitted criteria."}
{"_id": "q_10027", "text": "Parse gene panel file and fill in HGNC symbols for filter."}
{"_id": "q_10028", "text": "Download all verified variants for user's cases"}
{"_id": "q_10029", "text": "Return a dictionary with hgnc symbols as keys\n\n Value of the dictionaries are information about the hgnc ids for a symbol.\n If the symbol is primary for a gene then 'true_id' will exist.\n A list of hgnc ids that the symbol points to is in ids.\n\n Args:\n hgnc_genes(dict): a dictionary with hgnc_id as key and gene info as value\n\n Returns:\n alias_genes(dict):\n {\n 'hgnc_symbol':{\n 'true_id': int,\n 'ids': list(int)\n }\n }"}
{"_id": "q_10030", "text": "Add information of incomplete penetrance"}
{"_id": "q_10031", "text": "Gather information from different sources and return a gene dict\n\n Extract information collected from a number of sources and combine them\n into a gene dict with HGNC symbols as keys.\n\n hgnc_id works as the primary symbol and it is from this source we gather\n as much information as possible (hgnc_complete_set.txt)\n\n Coordinates are gathered from ensembl and the entries are linked from hgnc\n to ensembl via ENSGID.\n\n From exac the gene intolerance scores are collected, genes are linked to hgnc\n via hgnc symbol. This is an unstable symbol since they often change.\n\n\n Args:\n ensembl_lines(iterable(str)): Strings with ensembl gene information\n hgnc_lines(iterable(str)): Strings with hgnc gene information\n exac_lines(iterable(str)): Strings with exac PLi score info\n mim2gene_lines(iterable(str))\n genemap_lines(iterable(str))\n hpo_lines(iterable(str)): Strings with hpo gene information\n\n Yields:\n gene(dict): A dictionary with gene information"}
{"_id": "q_10032", "text": "Get the cytoband coordinate for a position\n\n Args:\n chrom(str)\n pos(int)\n\n Returns:\n coordinate(str)"}
{"_id": "q_10033", "text": "Get the subcategory for a VCF variant\n\n The sub categories are:\n 'snv', 'indel', 'del', 'ins', 'dup', 'bnd', 'inv'\n\n Args:\n alt_len(int)\n ref_len(int)\n category(str)\n svtype(str)\n\n Returns:\n subcategory(str)"}
{"_id": "q_10034", "text": "Return the end coordinate for a variant\n\n Args:\n pos(int)\n alt(str)\n category(str)\n snvend(str)\n svend(int)\n svlen(int)\n\n Returns:\n end(int)"}
{"_id": "q_10035", "text": "Find out the coordinates for a variant\n\n Args:\n variant(cyvcf2.Variant)\n\n Returns:\n coordinates(dict): A dictionary on the form:\n {\n 'position':<int>,\n 'end':<int>,\n 'end_chrom':<str>,\n 'length':<int>,\n 'sub_category':<str>,\n 'mate_id':<str>,\n 'cytoband_start':<str>,\n 'cytoband_end':<str>,\n }"}
{"_id": "q_10036", "text": "Show all panels for a case."}
{"_id": "q_10037", "text": "Add delivery report to an existing case."}
{"_id": "q_10038", "text": "Retrieves a list of HPO terms from scout database\n\n Args:\n store (obj): an adapter to the scout database\n query (str): the term to search in the database\n limit (str): the number of desired results\n\n Returns:\n hpo_phenotypes (dict): the complete list of HPO objects stored in scout"}
{"_id": "q_10039", "text": "Show all objects in the whitelist collection"}
{"_id": "q_10040", "text": "Parse information about a gene."}
{"_id": "q_10041", "text": "Fetch matching genes and convert to JSON."}
{"_id": "q_10042", "text": "Show all transcripts in the database"}
{"_id": "q_10043", "text": "Returns the events that occur on the given day.\n Works by getting all occurrences for the month, then drilling\n down to only those occurring on the given day."}
{"_id": "q_10044", "text": "Pre-process list of SV variants."}
{"_id": "q_10045", "text": "Pre-process list of STR variants."}
{"_id": "q_10046", "text": "Returns a header for the CSV file with the filtered variants to be exported.\n\n Args:\n case_obj(scout.models.Case)\n\n Returns:\n header: includes the fields defined in scout.constants.variants_export EXPORT_HEADER\n + AD_reference, AD_alternate, GT_quality for each sample analysed for a case"}
{"_id": "q_10047", "text": "Get variant information"}
{"_id": "q_10048", "text": "Get sift predictions from genes."}
{"_id": "q_10049", "text": "Pre-process case for the variant view.\n\n Adds information about files from case obj to variant\n\n Args:\n store(scout.adapter.MongoAdapter)\n case_obj(scout.models.Case)\n variant_obj(scout.models.Variant)"}
{"_id": "q_10050", "text": "Query observations for a variant."}
{"_id": "q_10051", "text": "Parse variant genes."}
{"_id": "q_10052", "text": "Generate amino acid change as a string."}
{"_id": "q_10053", "text": "Calculate end position for a variant."}
{"_id": "q_10054", "text": "Returns a judgement on the overall frequency of the variant.\n\n Combines multiple metrics into a single call."}
{"_id": "q_10055", "text": "Compose link to COSMIC Database.\n\n Args:\n variant_obj(scout.models.Variant)\n\n Returns:\n url_template(str): Link to COSMIIC database if cosmic id is present"}
{"_id": "q_10056", "text": "Compose link to Beacon Network."}
{"_id": "q_10057", "text": "Compose link to UCSC."}
{"_id": "q_10058", "text": "Translate SPIDEX annotation to human readable string."}
{"_id": "q_10059", "text": "Fetch data related to cancer variants for a case."}
{"_id": "q_10060", "text": "Gather the required data for creating the clinvar submission form\n\n Args:\n store(scout.adapter.MongoAdapter)\n institute_id(str): Institute ID\n case_name(str): case ID\n variant_id(str): variant._id\n\n Returns:\n a dictionary with all the required data (case and variant level) to pre-fill in fields in the clinvar submission form"}
{"_id": "q_10061", "text": "Collects all variants from the clinvar submission collection with a specific submission_id\n\n Args:\n store(scout.adapter.MongoAdapter)\n institute_id(str): Institute ID\n case_name(str): case ID\n variant_id(str): variant._id\n submission_id(str): clinvar submission id, i.e. SUB76578\n\n Returns:\n A dictionary with all the data to display the clinvar_update.html template page"}
{"_id": "q_10062", "text": "Collect data relevant for rendering ACMG classification form."}
{"_id": "q_10063", "text": "Fetch and fill-in evaluation object."}
{"_id": "q_10064", "text": "Parse out HGNC symbols from a stream."}
{"_id": "q_10065", "text": "Get the clnsig information\n\n Args:\n acc(str): The clnsig accession number, raw from vcf\n sig(str): The clnsig significance score, raw from vcf\n revstat(str): The clnsig revstat, raw from vcf\n transcripts(iterable(dict))\n\n Returns:\n clnsig_accsessions(list): A list with clnsig accessions"}
{"_id": "q_10066", "text": "Get a list with compounds objects for this variant.\n\n Arguments:\n compound_info(str): A Variant dictionary\n case_id (str): unique family id\n variant_type(str): 'research' or 'clinical'\n\n Returns:\n compounds(list(dict)): A list of compounds"}
{"_id": "q_10067", "text": "Build an Individual object\n\n Args:\n ind (dict): A dictionary with individual information\n\n Returns:\n ind_obj (dict): An Individual object\n\n dict(\n individual_id = str, # required\n display_name = str,\n sex = str,\n phenotype = int,\n father = str, # Individual id of father\n mother = str, # Individual id of mother\n capture_kits = list, # List of names of capture kits\n bam_file = str, # Path to bam file\n vcf2cytosure = str, # Path to CGH file\n analysis_type = str, # choices=ANALYSIS_TYPES\n )"}
{"_id": "q_10068", "text": "Upload variants to a case\n\n Note that the files have to be linked with the case,\n if they are not, use 'scout update case'."}
{"_id": "q_10069", "text": "Return a variant."}
{"_id": "q_10070", "text": "Return an opened file"}
{"_id": "q_10071", "text": "Get the net of any 'next' and 'prev' querystrings."}
{"_id": "q_10072", "text": "Returns what the next and prev querystrings should be."}
{"_id": "q_10073", "text": "Make sure any event day we send back for weekday repeating\n events is not a weekend."}
{"_id": "q_10074", "text": "Parse all data necessary for loading a case into scout\n\n This can be done either by providing a VCF file and other information\n on the command line. Or all the information can be specified in a config file.\n Please see Scout documentation for further instructions.\n\n Args:\n config(dict): A yaml formatted config file\n ped(iterable(str)): A ped formatted family file\n owner(str): The institute that owns a case\n vcf_snv(str): Path to a vcf file\n vcf_str(str): Path to a VCF file\n vcf_sv(str): Path to a vcf file\n vcf_cancer(str): Path to a vcf file\n peddy_ped(str): Path to a peddy ped\n multiqc(str): Path to dir with multiqc information\n\n Returns:\n config_data(dict): Holds all the necessary information for loading\n Scout"}
{"_id": "q_10075", "text": "Add information from peddy outfiles to the individuals"}
{"_id": "q_10076", "text": "Parse the individual information\n\n Reformat sample information to proper individuals\n\n Args:\n samples(list(dict))\n\n Returns:\n individuals(list(dict))"}
{"_id": "q_10077", "text": "Parse out minimal family information from a PED file.\n\n Args:\n ped_stream(iterable(str))\n family_type(str): Format of the pedigree information\n\n Returns:\n family_id(str), samples(list[dict])"}
{"_id": "q_10078", "text": "Build an evaluation object ready to be inserted into the database\n\n Args:\n variant_specific(str): md5 string for the specific variant\n variant_id(str): md5 string for the common variant\n user_id(str)\n user_name(str)\n institute_id(str)\n case_id(str)\n classification(str): The ACMG classification\n criteria(list(dict)): A list of dictionaries with ACMG criteria\n\n Returns:\n evaluation_obj(dict): Correctly formatted evaluation object"}
{"_id": "q_10079", "text": "Export all mitochondrial variants for each sample of a case\n and write them to an excel file\n\n Args:\n adapter(MongoAdapter)\n case_id(str)\n test(bool): True if the function is called for testing purposes\n outpath(str): path to output file\n\n Returns:\n written_files(int): number of written or simulated files"}
{"_id": "q_10080", "text": "Check if criteria for Likely Benign are fulfilled\n\n The following are descriptions of Likely Benign classification from ACMG paper:\n\n Likely Benign\n (i) 1 Strong (BS1\u2013BS4) and 1 supporting (BP1\u2013 BP7) OR\n (ii) \u22652 Supporting (BP1\u2013BP7)\n\n Args:\n bs_terms(list(str)): Terms that indicate strong evidence for benign variant\n bp_terms(list(str)): Terms that indicate supporting evidence for benign variant\n\n Returns:\n bool: if classification indicates Benign level"}
{"_id": "q_10081", "text": "Add extra information about genes from gene panels\n\n Args:\n variant_obj(dict): A variant from the database\n gene_panels(list(dict)): List of panels from database"}
{"_id": "q_10082", "text": "Return all variants with sanger information\n\n Args:\n institute_id(str)\n case_id(str)\n\n Returns:\n res(pymongo.Cursor): A Cursor with all variants with sanger activity"}
{"_id": "q_10083", "text": "Returns the specified variant.\n\n Arguments:\n document_id : An md5 key that represents the variant or \"variant_id\"\n gene_panels(List[GenePanel])\n case_id (str): case id (will search with \"variant_id\")\n\n Returns:\n variant_object(Variant): An odm variant object"}
{"_id": "q_10084", "text": "Return all verified variants for a given institute\n\n Args:\n institute_id(str): institute id\n\n Returns:\n res(list): a list with validated variants"}
{"_id": "q_10085", "text": "Check if there are any variants that are previously marked causative\n\n Loop through all variants that are marked 'causative' for an\n institute and check if any of the variants are present in the\n current case.\n\n Args:\n case_obj (dict): A Case object\n institute_obj (dict): check across the whole institute\n\n Returns:\n causatives(iterable(Variant))"}
{"_id": "q_10086", "text": "Find the same variant in other cases marked causative.\n\n Args:\n case_obj(dict)\n variant_obj(dict)\n\n Yields:\n other_variant(dict)"}
{"_id": "q_10087", "text": "Delete variants of one type for a case\n\n This is used when a case is reanalyzed\n\n Args:\n case_id(str): The case id\n variant_type(str): 'research' or 'clinical'\n category(str): 'snv', 'sv' or 'cancer'"}
{"_id": "q_10088", "text": "Return overlapping variants.\n\n Look at the genes that a variant overlaps.\n Then return all variants that overlap these genes.\n\n If variant_obj is an sv it will return the overlapping snvs and vice versa.\n There is a problem when SVs are huge since there are too many overlapping variants.\n\n Args:\n variant_obj(dict)\n\n Returns:\n variants(iterable(dict))"}
{"_id": "q_10089", "text": "Returns variants that have been evaluated\n\n Return all variants, snvs/indels and svs from case case_id\n which have an entry for 'acmg_classification', 'manual_rank', 'dismiss_variant'\n or if they are commented.\n\n Args:\n case_id(str)\n\n Returns:\n variants(iterable(Variant))"}
{"_id": "q_10090", "text": "Produce a reduced vcf with variants from the specified coordinates\n This is used for the alignment viewer.\n\n Args:\n case_obj(dict): A case from the scout database\n variant_type(str): 'clinical' or 'research'. Default: 'clinical'\n category(str): 'snv' or 'sv'. Default: 'snv'\n rank_threshold(float): Only load variants above this score. Default: 5\n chrom(str): Load variants from a certain chromosome\n start(int): Specify the start position\n end(int): Specify the end position\n gene_obj(dict): A gene object from the database\n\n Returns:\n file_name(str): Path to the temporary file"}
{"_id": "q_10091", "text": "Given a list of variants get variant objects found in a specific patient\n\n Args:\n variants(list): a list of variant ids\n sample_name(str): a sample display name\n category(str): 'snv', 'sv' ..\n\n Returns:\n result(iterable(Variant))"}
{"_id": "q_10092", "text": "Determine which fields to include in csv header by checking a list of submission objects\n\n Args:\n submission_objs(list): a list of objects (variants or casedata) to include in a csv file\n csv_type(str) : 'variant_data' or 'case_data'\n\n Returns:\n custom_header(dict): A dictionary with the fields required in the csv header. Keys and values are specified in CLINVAR_HEADER and CASEDATA_HEADER"}
{"_id": "q_10093", "text": "Load all the transcripts\n\n Transcript information is from ensembl.\n\n Args:\n adapter(MongoAdapter)\n transcripts_lines(iterable): iterable with ensembl transcript lines\n build(str)\n ensembl_genes(dict): Map from ensembl_id -> HgncGene\n\n Returns:\n transcript_objs(list): A list with all transcript objects"}
{"_id": "q_10094", "text": "Add a gene panel to the database."}
{"_id": "q_10095", "text": "Delete a version of a gene panel or all versions of a gene panel"}
{"_id": "q_10096", "text": "Delete all indexes in the database"}
{"_id": "q_10097", "text": "Delete a user from the database"}
{"_id": "q_10098", "text": "Delete all exons in the database"}
{"_id": "q_10099", "text": "Delete a case and its variants from the database"}
{"_id": "q_10100", "text": "Show all individuals from all cases in the database"}
{"_id": "q_10101", "text": "Parse a list of matchmaker matches objects and returns\n a readable list of matches to display in matchmaker matches view.\n\n Args:\n patient_id(str): id of a mme patient\n match_objs(list): list of match objs returned by MME server for the patient\n # match_objs looks like this:\n [\n {\n 'node' : { id : node_id , label: node_label},\n 'patients' : [\n { 'patient': {patient1_data} },\n { 'patient': {patient2_data} },\n ..\n ]\n },\n ..\n ]\n\n Returns:\n parsed_matches(list): a list of parsed match objects"}
{"_id": "q_10102", "text": "Display cases from the database"}
{"_id": "q_10103", "text": "Returns the currently active user as an object."}
{"_id": "q_10104", "text": "Login a user if they have access."}
{"_id": "q_10105", "text": "Build an institute object\n\n Args:\n internal_id(str)\n display_name(str)\n sanger_recipients(list(str)): List with email addresses\n\n Returns:\n institute_obj(scout.models.Institute)"}
{"_id": "q_10106", "text": "Delete an event\n\n Arguments:\n event_id (str): The database key for the event"}
{"_id": "q_10107", "text": "Fetch events from the database.\n\n Args:\n institute (dict): An institute\n case (dict): A case\n variant_id (str, optional): global variant id\n level (str, optional): restrict comments to 'specific' or 'global'\n comments (bool, optional): restrict events to include only comments\n panel (str): A panel name\n\n Returns:\n pymongo.Cursor: Query result"}
{"_id": "q_10108", "text": "Fetch all events by a specific user."}
{"_id": "q_10109", "text": "Remove an existing phenotype from a case\n\n Args:\n institute (dict): An Institute object\n case (dict): Case object\n user (dict): A User object\n link (dict): The url to be used in the event\n phenotype_id (str): A phenotype id\n\n Returns:\n updated_case(dict)"}
{"_id": "q_10110", "text": "Add a comment to a variant or a case.\n\n This function will create an Event to log that a user has commented on\n a variant. If a variant id is given it will be a variant comment.\n A variant comment can be 'global' or specific. The global comments will\n be shown for this variation in all cases while the specific comments\n will only be shown for a specific case.\n\n Arguments:\n institute (dict): An Institute object\n case (dict): A Case object\n user (dict): A User object\n link (str): The url to be used in the event\n variant (dict): A variant object\n content (str): The content of the comment\n comment_level (str): Any one of 'specific' or 'global'.\n Default is 'specific'\n\n Return:\n comment(dict): The comment event that was inserted"}
{"_id": "q_10111", "text": "Check if the variant is in the interval given by the coordinates\n\n Args:\n chromosome(str): Variant chromosome\n pos(int): Variant position\n coordinates(dict): Dictionary with the region of interest"}
{"_id": "q_10112", "text": "Render search box and view for HPO phenotype terms"}
{"_id": "q_10113", "text": "Load exons into the scout database"}
{"_id": "q_10114", "text": "Load all variants in a region to an existing case"}
{"_id": "q_10115", "text": "Returns a queryset of events that will occur again after 'now'.\n Used to help generate a list of upcoming events."}
{"_id": "q_10116", "text": "Check if gene is already added to a panel."}
{"_id": "q_10117", "text": "Create a new gene panel.\n\n Args:\n store(scout.adapter.MongoAdapter)\n institute_id(str)\n panel_name(str)\n display_name(str)\n csv_lines(iterable(str)): Stream with genes\n\n Returns:\n panel_id: the ID of the new panel document created or None"}
{"_id": "q_10118", "text": "Get information about a case from archive."}
{"_id": "q_10119", "text": "Migrate case information from archive."}
{"_id": "q_10120", "text": "Update all information that was manually annotated from an old instance."}
{"_id": "q_10121", "text": "Upload research variants to cases\n\n If a case is specified, all variants found for that case will be\n uploaded.\n\n If no cases are specified then all cases that have 'research_requested'\n will have their research variants uploaded"}
{"_id": "q_10122", "text": "Load genes into the database\n \n link_genes will collect information from all the different sources and \n merge it into a dictionary with hgnc_id as key and gene information as values.\n\n Args:\n adapter(scout.adapter.MongoAdapter)\n genes(dict): If genes are already parsed\n ensembl_lines(iterable(str)): Lines formatted with ensembl gene information\n hgnc_lines(iterable(str)): Lines with gene information from genenames.org\n exac_lines(iterable(str)): Lines with pLi-score information from ExAC\n mim2gene(iterable(str)): Lines with map from omim id to gene symbol\n genemap_lines(iterable(str)): Lines with information of omim entries\n hpo_lines(iterable(str)): Lines with information about the map from hpo terms to genes\n build(str): What build to use. Defaults to '37'\n\n Returns:\n gene_objects(list): A list with all gene_objects that were loaded into the database"}
{"_id": "q_10123", "text": "Show all hpo terms in the database"}
{"_id": "q_10124", "text": "Register Flask blueprints."}
{"_id": "q_10125", "text": "Show all alias symbols and how they map to ids"}
{"_id": "q_10126", "text": "Build a gene_panel object\n\n Args:\n panel_info(dict): A dictionary with panel information\n adapter (scout.adapter.MongoAdapter)\n\n Returns:\n panel_obj(dict)\n\n gene_panel = dict(\n panel_id = str, # required\n institute = str, # institute_id, required\n version = float, # required\n date = datetime, # required\n display_name = str, # default is panel_name\n genes = list, # list of panel genes, sorted on panel_gene['symbol']\n )"}
{"_id": "q_10127", "text": "Export variants which have been verified for an institute\n and write them to an excel file.\n\n Args:\n collaborator(str): institute id\n test(bool): True if the function is called for testing purposes\n outpath(str): path to output file\n\n Returns:\n written_files(int): number of written or simulated files"}
{"_id": "q_10128", "text": "Export causatives for a collaborator in .vcf format"}
{"_id": "q_10129", "text": "Generate an md5-key from a list of arguments.\n\n Args:\n list_of_arguments: A list of strings\n\n Returns:\n An md5-key object generated from the list of strings."}
{"_id": "q_10130", "text": "Setup via Flask."}
{"_id": "q_10131", "text": "Setup connection to database."}
{"_id": "q_10132", "text": "Create indexes for the database"}
{"_id": "q_10133", "text": "Setup a scout database."}
{"_id": "q_10134", "text": "Setup a scout demo instance. This instance will be populated with a\n case, a gene panel and some variants."}
{"_id": "q_10135", "text": "Setup scout instances."}
{"_id": "q_10136", "text": "Show all institutes in the database"}
{"_id": "q_10137", "text": "Parse the genetic models entry of a vcf\n\n Args:\n models_info(str): The raw vcf information\n case_id(str)\n\n Returns:\n genetic_models(list)"}
{"_id": "q_10138", "text": "Add an institute to the database\n\n Args:\n institute_obj(Institute)"}
{"_id": "q_10139", "text": "Fetch a single institute from the backend\n\n Args:\n institute_id(str)\n\n Returns:\n Institute object"}
{"_id": "q_10140", "text": "Return a datetime object if there is a valid date\n\n Raise exception if date is not valid\n Return today's date if no date was added\n\n Args:\n date(str)\n date_format(str)\n\n Returns:\n date_obj(datetime.datetime)"}
{"_id": "q_10141", "text": "Export a list of genes based on hpo terms"}
{"_id": "q_10142", "text": "Check if a connection could be made to the mongo process specified\n\n Args:\n host(str)\n port(int)\n username(str)\n password(str)\n authdb (str): database to use for authentication\n max_delay(int): Number of milliseconds to wait for connection\n\n Returns:\n bool: If connection could be established"}
{"_id": "q_10143", "text": "Load a delivery report into a case in the database\n\n If the report already exists the function will exit.\n If the user wants to load a report that is already in the database\n 'update' has to be 'True'.\n\n Args:\n adapter (MongoAdapter): Connection to the database\n report_path (string): Path to delivery report\n case_id (string): Optional case identifier\n update (bool): If an existing report should be replaced\n \n Returns:\n updated_case(dict)"}
{"_id": "q_10144", "text": "Add a user object to the database\n\n Args:\n user_obj(scout.models.User): A dictionary with user information\n \n Returns:\n user_info(dict): a copy of what was inserted"}
{"_id": "q_10145", "text": "Visualize BAM alignments."}
{"_id": "q_10146", "text": "Load all the exons\n \n Transcript information is from ensembl.\n Check that the transcript that the exon belongs to exists in the database\n\n Args:\n adapter(MongoAdapter)\n exon_lines(iterable): iterable with ensembl exon lines\n build(str)\n ensembl_transcripts(dict): Existing ensembl transcripts"}
{"_id": "q_10147", "text": "Update all compounds for a case"}
{"_id": "q_10148", "text": "Update a gene object with links\n\n Args:\n gene_obj(dict)\n build(int)\n\n Returns:\n gene_obj(dict): gene_obj updated with many links"}
{"_id": "q_10149", "text": "Query the hgnc aliases"}
{"_id": "q_10150", "text": "Parse an hgnc formatted line\n\n Args:\n line(list): A list with hgnc gene info\n header(list): A list with the header info\n\n Returns:\n hgnc_info(dict): A dictionary with the relevant info"}
{"_id": "q_10151", "text": "Parse lines with hgnc formatted genes\n\n This is designed to take a dump with genes from HGNC.\n This is downloaded from:\n ftp://ftp.ebi.ac.uk/pub/databases/genenames/new/tsv/hgnc_complete_set.txt\n\n Args:\n lines(iterable(str)): An iterable with HGNC formatted genes\n Yields:\n hgnc_gene(dict): A dictionary with the relevant information"}
{"_id": "q_10152", "text": "Retrieve the database id of an open clinvar submission for a user and institute,\n if none is available then create a new submission and return it\n\n Args:\n user_id(str): a user ID\n institute_id(str): an institute ID\n\n Returns:\n submission(obj) : an open clinvar submission object"}
{"_id": "q_10153", "text": "Saves an official clinvar submission ID in a clinvar submission object\n\n Args:\n clinvar_id(str): a string with a format: SUB[0-9]. It is obtained from the clinvar portal when starting a new submission\n submission_id(str): id of the submission to be updated\n\n Returns:\n updated_submission(obj): a clinvar submission object, updated"}
{"_id": "q_10154", "text": "Returns the official Clinvar submission ID for a submission object\n\n Args:\n submission_id(str): id of the submission\n\n Returns:\n clinvar_subm_id(str): a string with a format: SUB[0-9]. It is obtained from the clinvar portal when starting a new submission"}
{"_id": "q_10155", "text": "Collect all open and closed clinvar submissions created by a user for an institute\n\n Args:\n user_id(str): a user ID\n institute_id(str): an institute ID\n\n Returns:\n submissions(list): a list of clinvar submission objects"}
{"_id": "q_10156", "text": "Remove a variant object from clinvar database and update the relative submission object\n\n Args:\n object_id(str) : the id of an object to remove from clinvar_collection database collection (a variant of a case)\n object_type(str) : either 'variant_data' or 'case_data'. It's a key in the clinvar_submission object.\n submission_id(str): the _id key of a clinvar submission\n\n Returns:\n updated_submission(obj): an updated clinvar submission"}
{"_id": "q_10157", "text": "Get all variants included in clinvar submissions for a case\n\n Args:\n case_id(str): a case _id\n\n Returns:\n submission_variants(dict): keys are variant ids and values are variant submission objects"}
{"_id": "q_10158", "text": "Render search box for genes."}
{"_id": "q_10159", "text": "Render information about a gene."}
{"_id": "q_10160", "text": "Return JSON data about genes."}
{"_id": "q_10161", "text": "Make sure that the gene panels exist in the database\n Also check if the default panels are defined in gene panels\n\n Args:\n adapter(MongoAdapter)\n panels(list(str)): A list with panel names\n\n Returns:\n panels_exists(bool)"}
{"_id": "q_10162", "text": "Load all variants in a region defined by a HGNC id\n\n Args:\n adapter (MongoAdapter)\n case_id (str): Case id\n hgnc_id (int): If all variants from a gene should be uploaded\n chrom (str): If variants from coordinates should be uploaded\n start (int): Start position for region\n end (int): Stop position for region"}
{"_id": "q_10163", "text": "Load a new case from a Scout config.\n\n Args:\n adapter(MongoAdapter)\n config(dict): loading info\n ped(Iterable(str)): Pedigree information\n update(bool): If existing case should be updated"}
{"_id": "q_10164", "text": "Fetch institute and case objects."}
{"_id": "q_10165", "text": "Preprocess institute objects."}
{"_id": "q_10166", "text": "Update a panel in the database"}
{"_id": "q_10167", "text": "Load the omim phenotypes into the database\n \n Parse the phenotypes from genemap2.txt and find the associated hpo terms\n from ALL_SOURCES_ALL_FREQUENCIES_diseases_to_genes_to_phenotypes.txt.\n\n Args:\n adapter(MongoAdapter)\n genemap_lines(iterable(str))\n genes(dict): Dictionary with all genes found in database\n hpo_disease_lines(iterable(str))"}
{"_id": "q_10168", "text": "Parse any frequency from the info dict\n\n Args:\n variant(cyvcf2.Variant)\n info_key(str)\n\n Returns:\n frequency(float): or None if frequency does not exist"}
{"_id": "q_10169", "text": "Show all users in the database"}
{"_id": "q_10170", "text": "Build a hgnc_gene object\n\n Args:\n gene_info(dict): Gene information\n\n Returns:\n gene_obj(dict)\n \n {\n '_id': ObjectId(),\n # This is the hgnc id, required:\n 'hgnc_id': int, \n # The primary symbol, required \n 'hgnc_symbol': str,\n 'ensembl_id': str, # required\n 'build': str, # '37' or '38', defaults to '37', required\n \n 'chromosome': str, # required\n 'start': int, # required\n 'end': int, # required\n \n 'description': str, # Gene description\n 'aliases': list(), # Gene symbol aliases, includes hgnc_symbol, str\n 'entrez_id': int,\n 'omim_id': int,\n 'pli_score': float,\n 'primary_transcripts': list(), # List of refseq transcripts (str)\n 'ucsc_id': str,\n 'uniprot_ids': list(), # List of str\n 'vega_id': str,\n 'transcripts': list(), # List of hgnc_transcript\n \n # Inheritance information\n 'inheritance_models': list(), # List of model names\n 'incomplete_penetrance': bool, # Acquired from HPO\n \n # Phenotype information\n 'phenotypes': list(), # List of dictionaries with phenotype information\n }"}
{"_id": "q_10171", "text": "Load a gene panel based on the info sent\n A panel object is built and integrity checks are made.\n The panel object is then loaded into the database.\n\n Args:\n path(str): Path to panel file\n institute(str): Name of institute that owns the panel\n panel_id(str): Panel id\n date(datetime.datetime): Date of creation\n version(float)\n full_name(str): Option to have a long name\n\n panel_info(dict): {\n 'file': <path to panel file>(str),\n 'institute': <institute>(str),\n 'type': <panel type>(str),\n 'date': date,\n 'version': version,\n 'panel_name': panel_id,\n 'full_name': name,\n }"}
{"_id": "q_10172", "text": "Create and load the OMIM-AUTO panel"}
{"_id": "q_10173", "text": "Add a gene panel to the database\n\n Args:\n panel_obj(dict)"}
{"_id": "q_10174", "text": "Fetch a gene panel by '_id'.\n\n Args:\n panel_id (str, ObjectId): str or ObjectId of document ObjectId\n\n Returns:\n dict: panel object or `None` if panel not found"}
{"_id": "q_10175", "text": "Fetch all gene panels and group them by gene\n\n Args:\n case_obj(scout.models.Case)\n Returns:\n gene_dict(dict): A dictionary with genes as keys and a set of\n panel names as values"}
{"_id": "q_10176", "text": "Replace an existing gene panel with a new one\n\n Keeps the object id\n\n Args:\n panel_obj(dict)\n version(float)\n date_obj(datetime.datetime)\n\n Returns:\n updated_panel(dict)"}
{"_id": "q_10177", "text": "Add a pending action to a gene panel\n\n Store the pending actions in panel.pending\n\n Args:\n panel_obj(dict): The panel that is about to be updated\n hgnc_gene(dict)\n action(str): choices=['add','delete','edit']\n info(dict): additional gene info (disease_associated_transcripts,\n reduced_penetrance, mosaicism, database_entry_version ,\n inheritance_models, comment)\n\n Returns:\n updated_panel(dict):"}
{"_id": "q_10178", "text": "Apply the pending changes to an existing gene panel or create a new version of the same panel.\n\n Args:\n panel_obj(dict): panel in database to update\n version(double): panel version to update\n\n Returns:\n inserted_id(str): id of updated panel or the new one"}
{"_id": "q_10179", "text": "Return all the clinical gene symbols for a case."}
{"_id": "q_10180", "text": "Interact with cases existing in the database."}
{"_id": "q_10181", "text": "Add the proper indexes to the scout instance.\n\n All indexes are specified in scout/constants/indexes.py\n\n If new indexes are defined, they should be added there before this method is used"}
{"_id": "q_10182", "text": "Update the indexes\n \n If there are any indexes that are not added to the database, add those."}
{"_id": "q_10183", "text": "Build a mongo query\n\n These are the different query options:\n {\n 'thousand_genomes_frequency': float,\n 'exac_frequency': float,\n 'clingen_ngi': int,\n 'cadd_score': float,\n 'cadd_inclusive': boolean,\n 'genetic_models': list(str),\n 'hgnc_symbols': list,\n 'region_annotations': list,\n 'functional_annotations': list,\n 'clinsig': list,\n 'clinsig_confident_always_returned': boolean,\n 'variant_type': str(('research', 'clinical')),\n 'chrom': str,\n 'start': int,\n 'end': int,\n 'svtype': list,\n 'size': int,\n 'size_shorter': boolean,\n 'gene_panels': list(str),\n 'mvl_tag': boolean,\n 'decipher': boolean,\n }\n\n Arguments:\n case_id(str)\n query(dict): a dictionary of query filters specified by the users\n variant_ids(list(str)): A list of md5 variant ids\n\n Returns:\n mongo_query : A dictionary in the mongo query format"}
{"_id": "q_10184", "text": "Add clinsig filter values to the mongo query object\n\n Args:\n query(dict): a dictionary of query filters specified by the users\n mongo_query(dict): the query that is going to be submitted to the database\n\n Returns:\n clinsig_query(dict): a dictionary with clinsig key-values"}
{"_id": "q_10185", "text": "Adds gene-related filters to the query object\n\n Args:\n query(dict): a dictionary of query filters specified by the users\n mongo_query(dict): the query that is going to be submitted to the database\n\n Returns:\n mongo_query(dict): returned object contains gene and panel-related filters"}
{"_id": "q_10186", "text": "Drop the mongo database given."}
{"_id": "q_10187", "text": "Parse user submitted panel."}
{"_id": "q_10188", "text": "Load a bulk of hgnc gene objects\n \n Raises IntegrityError if there are any write concerns\n\n Args:\n gene_objs(iterable(scout.models.hgnc_gene))\n\n Returns:\n result (pymongo.results.InsertManyResult)"}
{"_id": "q_10189", "text": "Load a bulk of transcript objects to the database\n\n Arguments:\n transcript_objs(iterable(scout.models.hgnc_transcript))"}
{"_id": "q_10190", "text": "Load a bulk of exon objects to the database\n\n Arguments:\n exon_objs(iterable(scout.models.hgnc_exon))"}
{"_id": "q_10191", "text": "Fetch a hgnc gene\n\n Args:\n hgnc_identifier(int)\n\n Returns:\n gene_obj(HgncGene)"}
{"_id": "q_10192", "text": "Query the genes with a hgnc symbol and return the hgnc id\n\n Args:\n hgnc_symbol(str)\n build(str)\n\n Returns:\n hgnc_id(int)"}
{"_id": "q_10193", "text": "Fetch all hgnc genes that match a hgnc symbol\n\n Check both hgnc_symbol and aliases\n\n Args:\n hgnc_symbol(str)\n build(str): The build in which to search\n search(bool): if partial searching should be used\n\n Returns:\n result()"}
{"_id": "q_10194", "text": "Fetch all hgnc genes\n\n Returns:\n result()"}
{"_id": "q_10195", "text": "Delete the genes collection"}
{"_id": "q_10196", "text": "Delete the transcripts collection"}
{"_id": "q_10197", "text": "Return a dictionary with hgnc symbols as keys and a list of hgnc ids\n as values.\n\n If a gene symbol is listed as primary, the list of ids will only consist\n of that entry. If not, the gene can not be determined, so the result is a list\n of hgnc_ids\n\n Args:\n build(str)\n genes(iterable(scout.models.HgncGene)):\n\n Returns:\n alias_genes(dict): {<hgnc_alias>: {'true': <hgnc_id>, 'ids': {<hgnc_id_1>, <hgnc_id_2>, ...}}}"}
{"_id": "q_10198", "text": "Return a dictionary with ensembl ids as keys and gene objects as value.\n\n Args:\n build(str)\n \n Returns:\n genes(dict): {<ensg_id>: gene_obj, ...}"}
{"_id": "q_10199", "text": "Add the correct hgnc id to a set of genes with hgnc symbols\n\n Args:\n genes(list(dict)): A set of genes with hgnc symbols only"}
{"_id": "q_10200", "text": "Return a dictionary with chromosomes as keys and interval trees as values\n\n Each interval represents a coding region of overlapping genes.\n\n Args:\n build(str): The genome build\n genes(iterable(scout.models.HgncGene)):\n\n Returns:\n intervals(dict): A dictionary with chromosomes as keys and overlapping genomic intervals as values"}
{"_id": "q_10201", "text": "Update the automatically generated omim gene panel in the database."}
{"_id": "q_10202", "text": "Show all MatchMaker matches for a given case"}
{"_id": "q_10203", "text": "Starts an internal match or a match against one or all MME external nodes"}
{"_id": "q_10204", "text": "Visualize case report"}
{"_id": "q_10205", "text": "Add or remove a diagnosis for a case."}
{"_id": "q_10206", "text": "Handle phenotypes."}
{"_id": "q_10207", "text": "Perform actions on multiple phenotypes."}
{"_id": "q_10208", "text": "Handle events."}
{"_id": "q_10209", "text": "Assign and unassign a user from a case."}
{"_id": "q_10210", "text": "Mark a variant as sanger validated."}
{"_id": "q_10211", "text": "Mark a variant as confirmed causative."}
{"_id": "q_10212", "text": "Display delivery report."}
{"_id": "q_10213", "text": "Share a case with a different institute."}
{"_id": "q_10214", "text": "Request a case to be rerun."}
{"_id": "q_10215", "text": "Open the research list for a case."}
{"_id": "q_10216", "text": "Download vcf2cytosure file for individual."}
{"_id": "q_10217", "text": "Load multiqc report for the case."}
{"_id": "q_10218", "text": "Preprocess case objects.\n\n Add the necessary information to display the 'cases' view\n\n Args:\n store(adapter.MongoAdapter)\n case_query(pymongo.Cursor)\n limit(int): Maximum number of cases to display\n\n Returns:\n data(dict): includes the cases, how many there are and the limit."}
{"_id": "q_10219", "text": "Gather contents to be visualized in a case report\n\n Args:\n store(adapter.MongoAdapter)\n institute_obj(models.Institute)\n case_obj(models.Case)\n\n Returns:\n data(dict)"}
{"_id": "q_10220", "text": "Get all Clinvar submissions for a user and an institute"}
{"_id": "q_10221", "text": "Collect MT variants and format the lines of an MT variant report\n to be exported in excel format\n\n Args:\n store(adapter.MongoAdapter)\n case_obj(models.Case)\n temp_excel_dir(os.Path): folder where the temp excel files are written to\n\n Returns:\n written_files(int): the number of files written to temp_excel_dir"}
{"_id": "q_10222", "text": "vcf2cytosure CGH file for individual."}
{"_id": "q_10223", "text": "Add a patient to MatchMaker server\n\n Args:\n store(adapter.MongoAdapter)\n user_obj(dict) a scout user object (to be added as matchmaker contact)\n case_obj(dict) a scout case object\n add_gender(bool) if True case gender will be included in matchmaker\n add_features(bool) if True HPO features will be included in matchmaker\n add_disorders(bool) if True OMIM diagnoses will be included in matchmaker\n genes_only(bool) if True only genes and not variants will be shared\n mme_base_url(str) base url of the MME server\n mme_accepts(str) request content accepted by MME server\n mme_token(str) auth token of the MME server\n\n Returns:\n submitted_info(dict) info submitted to MatchMaker and its responses"}
{"_id": "q_10224", "text": "Initiate a MatchMaker match against either other Scout patients or external nodes\n\n Args:\n case_obj(dict): a scout case object already submitted to MME\n match_type(str): 'internal' or 'external'\n mme_base_url(str): base url of the MME server\n mme_token(str): auth token of the MME server\n mme_accepts(str): request content accepted by MME server (only for internal matches)\n\n Returns:\n matches(list): a list of eventual matches"}
{"_id": "q_10225", "text": "Parse how the different variant callers have performed\n\n Args:\n variant (cyvcf2.Variant): A variant object\n\n Returns:\n callers (dict): A dictionary on the format\n {'gatk': <filter>,'freebayes': <filter>,'samtools': <filter>}"}
{"_id": "q_10226", "text": "Load an institute into the database\n\n Args:\n adapter(MongoAdapter)\n internal_id(str)\n display_name(str)\n sanger_recipients(list(email))"}
{"_id": "q_10227", "text": "Load a case into the database.\n\n A case can be loaded without specifying vcf files and/or bam files"}
{"_id": "q_10228", "text": "Update compounds for a variant.\n\n This will add all the necessary information about a variant to a compound object.\n\n Args:\n variant(scout.models.Variant)\n variant_objs(dict): A dictionary with _ids as keys and variant objs as values.\n\n Returns:\n compound_objs(list(dict)): A list with updated compound objects."}
{"_id": "q_10229", "text": "Update the compounds for a set of variants.\n\n Args:\n variants(dict): A dictionary with _ids as keys and variant objs as values"}
{"_id": "q_10230", "text": "Load a variant object\n\n Args:\n variant_obj(dict)\n\n Returns:\n inserted_id"}
{"_id": "q_10231", "text": "Load a variant object, if the object already exists update compounds.\n\n Args:\n variant_obj(dict)\n\n Returns:\n result"}
{"_id": "q_10232", "text": "Load a bulk of variants\n\n Args:\n variants(iterable(scout.models.Variant))\n\n Returns:\n object_ids"}
{"_id": "q_10233", "text": "Assign a user to a case.\n\n This function will create an Event to log that a person has been assigned\n to a case. Also the user will be added to case \"assignees\".\n\n Arguments:\n institute (dict): An institute\n case (dict): A case\n user (dict): A User object\n link (str): The url to be used in the event\n\n Returns:\n updated_case(dict)"}
{"_id": "q_10234", "text": "Share a case with a new institute.\n\n Arguments:\n institute (dict): An Institute object\n case (dict): Case object\n collaborator_id (str): An institute id\n user (dict): A User object\n link (str): The url to be used in the event\n\n Return:\n updated_case"}
{"_id": "q_10235", "text": "Diagnose a case using OMIM ids.\n\n Arguments:\n institute (dict): An Institute object\n case (dict): Case object\n user (dict): A User object\n link (str): The url to be used in the event\n level (str): choices=('phenotype','gene')\n\n Return:\n updated_case"}
{"_id": "q_10236", "text": "Create an event for a variant verification for a variant\n and an event for a variant verification for a case\n\n Arguments:\n institute (dict): An Institute object\n case (dict): Case object\n user (dict): A User object\n link (str): The url to be used in the event\n variant (dict): A variant object\n\n Returns:\n updated_variant(dict)"}
{"_id": "q_10237", "text": "Get all variants with validations ever ordered.\n\n Args:\n institute_id(str) : The id of an institute\n user_id(str) : The id of a user\n\n Returns:\n sanger_ordered(list) : a list of dictionaries, each with \"case_id\" as keys and list of variant ids as values"}
{"_id": "q_10238", "text": "Create an event for marking a variant causative.\n\n Arguments:\n institute (dict): An Institute object\n case (dict): Case object\n user (dict): A User object\n link (str): The url to be used in the event\n variant (variant): A variant object\n\n Returns:\n updated_case(dict)"}
{"_id": "q_10239", "text": "Create an event for updating the manual dismiss variant entry\n\n This function will create an event and update the dismiss variant\n field of the variant.\n\n Arguments:\n institute (dict): An Institute object\n case (dict): Case object\n user (dict): A User object\n link (str): The url to be used in the event\n variant (dict): A variant object\n dismiss_variant (list): The new dismiss variant list\n\n Return:\n updated_variant"}
{"_id": "q_10240", "text": "Create an event for updating the ACMG classification of a variant.\n\n Arguments:\n institute_obj (dict): An Institute object\n case_obj (dict): Case object\n user_obj (dict): A User object\n link (str): The url to be used in the event\n variant_obj (dict): A variant object\n acmg_str (str): The new ACMG classification string\n\n Returns:\n updated_variant"}
{"_id": "q_10241", "text": "Construct the necessary ids for a variant\n\n Args:\n chrom(str): Variant chromosome\n pos(int): Variant position\n ref(str): Variant reference\n alt(str): Variant alternative\n case_id(str): Unique case id\n variant_type(str): 'clinical' or 'research'\n\n Returns:\n ids(dict): Dictionary with the relevant ids"}
{"_id": "q_10242", "text": "Parse the simple id for a variant\n\n Simple id is used as a human readable reference for a position, it is\n in no way unique.\n\n Args:\n chrom(str)\n pos(str)\n ref(str)\n alt(str)\n\n Returns:\n simple_id(str): The simple human readable variant id"}
{"_id": "q_10243", "text": "Fetches a single case from the database\n\n Use either the _id or combination of institute_id and display_name\n\n Args:\n case_id(str): _id for a case\n institute_id(str):\n display_name(str)\n\n Yields:\n A single Case"}
{"_id": "q_10244", "text": "Add a case to the database\n If the case already exists an exception is raised\n\n Args:\n case_obj(Case)"}
{"_id": "q_10245", "text": "Replace an existing case with a new one\n\n Keeps the object id\n\n Args:\n case_obj(dict)\n\n Returns:\n updated_case(dict)"}
{"_id": "q_10246", "text": "Submit an evaluation to the database\n\n Get all the relevant information, build a evaluation_obj\n\n Args:\n variant_obj(dict)\n user_obj(dict)\n institute_obj(dict)\n case_obj(dict)\n link(str): variant url\n criteria(list(dict)):\n\n [\n {\n 'term': str,\n 'comment': str,\n 'links': list(str)\n },\n .\n .\n ]"}
{"_id": "q_10247", "text": "Parse and massage the transcript information\n\n There could be multiple lines with information about the same transcript.\n This is why it is necessary to parse the transcripts first and then return a dictionary\n where all information has been merged.\n\n Args:\n transcript_lines(): This could be an iterable with strings or a pandas.DataFrame\n\n Returns:\n parsed_transcripts(dict): Map from enstid -> transcript info"}
{"_id": "q_10248", "text": "Parse a dataframe with ensembl transcript information\n\n Args:\n res(pandas.DataFrame)\n\n Yields:\n transcript_info(dict)"}
{"_id": "q_10249", "text": "Parse an ensembl formatted line\n\n Args:\n line(list): A list with ensembl gene info\n header(list): A list with the header info\n\n Returns:\n ensembl_info(dict): A dictionary with the relevant info"}
{"_id": "q_10250", "text": "Parse lines with ensembl formatted genes\n\n This is designed to take a biomart dump with genes from ensembl.\n Mandatory columns are:\n 'Gene ID' 'Chromosome' 'Gene Start' 'Gene End' 'HGNC symbol'\n\n Args:\n lines(iterable(str)): An iterable with ensembl formatted genes\n Yields:\n ensembl_gene(dict): A dictionary with the relevant information"}
{"_id": "q_10251", "text": "Parse lines with ensembl formatted exons\n\n This is designed to take a biomart dump with exons from ensembl.\n Check the documentation for the download spec\n\n Args:\n lines(iterable(str)): An iterable with ensembl formatted exons\n Yields:\n ensembl_gene(dict): A dictionary with the relevant information"}
{"_id": "q_10252", "text": "Parse a dataframe with ensembl exon information\n\n Args:\n res(pandas.DataFrame)\n\n Yields:\n gene_info(dict)"}
{"_id": "q_10253", "text": "Initializes the log file in the proper format.\n\n Arguments:\n\n filename (str): Path to a file. Or None if logging is to\n be disabled.\n loglevel (str): Determines the level of the log output."}
{"_id": "q_10254", "text": "docstring for parse_omim_morbid"}
{"_id": "q_10255", "text": "Get a dictionary with phenotypes\n \n Use the mim numbers for phenotypes as keys and phenotype information as \n values.\n\n Args:\n genemap_lines(iterable(str))\n \n Returns:\n phenotypes_found(dict): A dictionary with mim_numbers as keys and \n dictionaries with phenotype information as values.\n \n {\n 'description': str, # Description of the phenotype\n 'hgnc_symbols': set(), # Associated hgnc symbols\n 'inheritance': set(), # Associated inheritance models\n 'mim_number': int, # mim number of phenotype\n }"}
{"_id": "q_10256", "text": "Parse the omim files"}
{"_id": "q_10257", "text": "Convert a string to a number\n If int, convert to int, otherwise float\n \n If not possible, return None"}
{"_id": "q_10258", "text": "Return a formatted month as a table."}
{"_id": "q_10259", "text": "Set some commonly used variables."}
{"_id": "q_10260", "text": "Change colspan to \"5\", add \"today\" button, and return a month\n name as a table row."}
{"_id": "q_10261", "text": "Populate variables used to build popovers."}
{"_id": "q_10262", "text": "Parse metadata for a gene panel\n\n For historical reasons it is possible to include all information about a gene panel in the\n header of a panel file. This function parses the header.\n\n Args:\n panel_lines(iterable(str))\n\n Returns:\n panel_info(dict): Dictionary with panel information"}
{"_id": "q_10263", "text": "Parse a gene line with information from a panel file\n\n Args:\n gene_info(dict): dictionary with gene info\n\n Returns:\n gene(dict): A dictionary with the gene information\n {\n 'hgnc_id': int,\n 'hgnc_symbol': str,\n 'disease_associated_transcripts': list(str),\n 'inheritance_models': list(str),\n 'mosaicism': bool,\n 'reduced_penetrance': bool,\n 'database_entry_version': str,\n }"}
{"_id": "q_10264", "text": "Parse the panel info and return a gene panel\n\n Args:\n path(str): Path to panel file\n institute(str): Name of institute that owns the panel\n panel_id(str): Panel id\n date(datetime.datetime): Date of creation\n version(float)\n full_name(str): Option to have a long name\n\n Returns:\n gene_panel(dict)"}
{"_id": "q_10265", "text": "Update the hpo terms in the database. Fetch the latest release and update terms."}
{"_id": "q_10266", "text": "Returns a JSON response, transforming 'context' to make the payload."}
{"_id": "q_10267", "text": "Get the year and month. First tries from kwargs, then from\n querystrings. If none, or if cal_ignore qs is specified,\n sets year and month to this year and this month."}
{"_id": "q_10268", "text": "Check if any events are cancelled on the given date 'd'."}
{"_id": "q_10269", "text": "Fetch a hpo term\n\n Args:\n hpo_id(str)\n\n Returns:\n hpo_obj(dict)"}
{"_id": "q_10270", "text": "Return all HPO terms\n\n If a query is sent hpo_terms will try to match with regex on term or\n description.\n\n Args:\n query(str): Part of a hpoterm or description\n hpo_term(str): Search for a specific hpo term\n limit(int): the number of desired results\n\n Returns:\n result(pymongo.Cursor): A cursor with hpo terms"}
{"_id": "q_10271", "text": "Return all disease terms that overlap a gene\n\n If no gene, return all disease terms\n\n Args:\n hgnc_id(int)\n\n Returns:\n iterable(dict): A list with all disease terms that match"}
{"_id": "q_10272", "text": "r'''Beat tracking evaluation\n\n Parameters\n ----------\n ref : jams.Annotation\n Reference annotation object\n est : jams.Annotation\n Estimated annotation object\n kwargs\n Additional keyword arguments\n\n Returns\n -------\n scores : dict\n Dictionary of scores, where the key is the metric name (str) and\n the value is the (float) score achieved.\n\n See Also\n --------\n mir_eval.beat.evaluate\n\n Examples\n --------\n >>> # Load in the JAMS objects\n >>> ref_jam = jams.load('reference.jams')\n >>> est_jam = jams.load('estimated.jams')\n >>> # Select the first relevant annotations\n >>> ref_ann = ref_jam.search(namespace='beat')[0]\n >>> est_ann = est_jam.search(namespace='beat')[0]\n >>> scores = jams.eval.beat(ref_ann, est_ann)"}
{"_id": "q_10273", "text": "r'''Chord evaluation\n\n Parameters\n ----------\n ref : jams.Annotation\n Reference annotation object\n est : jams.Annotation\n Estimated annotation object\n kwargs\n Additional keyword arguments\n\n Returns\n -------\n scores : dict\n Dictionary of scores, where the key is the metric name (str) and\n the value is the (float) score achieved.\n\n See Also\n --------\n mir_eval.chord.evaluate\n\n Examples\n --------\n >>> # Load in the JAMS objects\n >>> ref_jam = jams.load('reference.jams')\n >>> est_jam = jams.load('estimated.jams')\n >>> # Select the first relevant annotations\n >>> ref_ann = ref_jam.search(namespace='chord')[0]\n >>> est_ann = est_jam.search(namespace='chord')[0]\n >>> scores = jams.eval.chord(ref_ann, est_ann)"}
{"_id": "q_10274", "text": "Flatten a multi_segment annotation into mir_eval style.\n\n Parameters\n ----------\n annotation : jams.Annotation\n An annotation in the `multi_segment` namespace\n\n Returns\n -------\n hier_intervalss : list\n A list of lists of intervals, ordered by increasing specificity.\n\n hier_labels : list\n A list of lists of labels, ordered by increasing specificity."}
{"_id": "q_10275", "text": "r'''Multi-level segmentation evaluation\n\n Parameters\n ----------\n ref : jams.Annotation\n Reference annotation object\n est : jams.Annotation\n Estimated annotation object\n kwargs\n Additional keyword arguments\n\n Returns\n -------\n scores : dict\n Dictionary of scores, where the key is the metric name (str) and\n the value is the (float) score achieved.\n\n See Also\n --------\n mir_eval.hierarchy.evaluate\n\n Examples\n --------\n >>> # Load in the JAMS objects\n >>> ref_jam = jams.load('reference.jams')\n >>> est_jam = jams.load('estimated.jams')\n >>> # Select the first relevant annotations\n >>> ref_ann = ref_jam.search(namespace='multi_segment')[0]\n >>> est_ann = est_jam.search(namespace='multi_segment')[0]\n >>> scores = jams.eval.hierarchy(ref_ann, est_ann)"}
{"_id": "q_10276", "text": "r'''Melody extraction evaluation\n\n Parameters\n ----------\n ref : jams.Annotation\n Reference annotation object\n est : jams.Annotation\n Estimated annotation object\n kwargs\n Additional keyword arguments\n\n Returns\n -------\n scores : dict\n Dictionary of scores, where the key is the metric name (str) and\n the value is the (float) score achieved.\n\n See Also\n --------\n mir_eval.melody.evaluate\n\n Examples\n --------\n >>> # Load in the JAMS objects\n >>> ref_jam = jams.load('reference.jams')\n >>> est_jam = jams.load('estimated.jams')\n >>> # Select the first relevant annotations\n >>> ref_ann = ref_jam.search(namespace='pitch_contour')[0]\n >>> est_ann = est_jam.search(namespace='pitch_contour')[0]\n >>> scores = jams.eval.melody(ref_ann, est_ann)"}
{"_id": "q_10277", "text": "Add a namespace definition to our working set.\n\n Namespace files consist of partial JSON schemas defining the behavior\n of the `value` and `confidence` fields of an Annotation.\n\n Parameters\n ----------\n filename : str\n Path to json file defining the namespace object"}
{"_id": "q_10278", "text": "Return the allowed values for an enumerated namespace.\n\n Parameters\n ----------\n ns_key : str\n Namespace key identifier\n\n Returns\n -------\n values : list\n\n Raises\n ------\n NamespaceError\n If `ns_key` is not found, or does not have enumerated values\n\n Examples\n --------\n >>> jams.schema.values('tag_gtzan')\n ['blues', 'classical', 'country', 'disco', 'hip-hop', 'jazz',\n 'metal', 'pop', 'reggae', 'rock']"}
{"_id": "q_10279", "text": "Get the dtypes associated with the value and confidence fields\n for a given namespace.\n\n Parameters\n ----------\n ns_key : str\n The namespace key in question\n\n Returns\n -------\n value_dtype, confidence_dtype : numpy.dtype\n Type identifiers for value and confidence fields."}
{"_id": "q_10280", "text": "Print out a listing of available namespaces"}
{"_id": "q_10281", "text": "Load the schema file from the package."}
{"_id": "q_10282", "text": "Expand a list of relative paths to a given base directory.\n\n Parameters\n ----------\n base_dir : str\n The target base directory\n\n rel_paths : list (or list-like)\n Collection of relative path strings\n\n Returns\n -------\n expanded_paths : list\n `rel_paths` rooted at `base_dir`\n\n Examples\n --------\n >>> jams.util.expand_filepaths('/data', ['audio', 'beat', 'seglab'])\n ['/data/audio', '/data/beat', '/data/seglab']"}
{"_id": "q_10283", "text": "Safely make a full directory path if it doesn't exist.\n\n Parameters\n ----------\n dpath : str\n Path of directory/directories to create\n\n mode : int [default=0777]\n Permissions for the new directories\n\n See also\n --------\n os.makedirs"}
{"_id": "q_10284", "text": "Get the metadata from a jam and an annotation, combined as a string.\n\n Parameters\n ----------\n jam : JAMS\n The jams object\n\n ann : Annotation\n An annotation object\n\n Returns\n -------\n comments : str\n The jam.file_metadata and ann.annotation_metadata, combined and serialized"}
{"_id": "q_10285", "text": "Parse arguments from the command line"}
{"_id": "q_10286", "text": "A decorator to register namespace conversions.\n\n Usage\n -----\n >>> @conversion('tag_open', 'tag_.*')\n ... def tag_to_open(annotation):\n ... annotation.namespace = 'tag_open'\n ... return annotation"}
{"_id": "q_10287", "text": "Convert a given annotation to the target namespace.\n\n Parameters\n ----------\n annotation : jams.Annotation\n An annotation object\n\n target_namespace : str\n The target namespace\n\n Returns\n -------\n mapped_annotation : jams.Annotation\n if `annotation` already belongs to `target_namespace`, then\n it is returned directly.\n\n otherwise, `annotation` is copied and automatically converted\n to the target namespace.\n\n Raises\n ------\n SchemaError\n if the input annotation fails to validate\n\n NamespaceError\n if no conversion is possible\n\n Examples\n --------\n Convert frequency measurements in Hz to MIDI\n\n >>> ann_midi = jams.convert(ann_hz, 'note_midi')\n\n And back to Hz\n\n >>> ann_hz2 = jams.convert(ann_midi, 'note_hz')"}
{"_id": "q_10288", "text": "Test if an annotation can be mapped to a target namespace\n\n Parameters\n ----------\n annotation : jams.Annotation\n An annotation object\n\n target_namespace : str\n The target namespace\n\n Returns\n -------\n True\n if `annotation` can be automatically converted to\n `target_namespace`\n\n False\n otherwise"}
{"_id": "q_10289", "text": "Convert a pitch_hz annotation to a contour"}
{"_id": "q_10290", "text": "Convert a pitch_hz annotation to pitch_midi"}
{"_id": "q_10291", "text": "Convert scaper annotations to tag_open"}
{"_id": "q_10292", "text": "This is a decorator which can be used to mark functions\n as deprecated.\n\n It will result in a warning being emitted when the function is used."}
{"_id": "q_10293", "text": "Pop a prefix from a query string.\n\n\n Parameters\n ----------\n query : str\n The query string\n\n prefix : str\n The prefix string to pop, if it exists\n\n sep : str\n The string to separate fields\n\n Returns\n -------\n popped : str\n `query` with a `prefix` removed from the front (if found)\n or `query` if the prefix was not found\n\n Examples\n --------\n >>> query_pop('Annotation.namespace', 'Annotation')\n 'namespace'\n >>> query_pop('namespace', 'Annotation')\n 'namespace'"}
{"_id": "q_10294", "text": "Test if a string matches a query.\n\n Parameters\n ----------\n string : str\n The string to test\n\n query : string, callable, or object\n Either a regular expression, callable function, or object.\n\n Returns\n -------\n match : bool\n `True` if:\n - `query` is a callable and `query(string) == True`\n - `query` is a regular expression and `re.match(query, string)`\n - or `string == query` for any other query\n\n `False` otherwise"}
{"_id": "q_10295", "text": "Helper function to format repr strings for JObjects and friends.\n\n Parameters\n ----------\n obj\n The object to repr\n\n indent : int >= 0\n indent each new line by `indent` spaces\n\n Returns\n -------\n r : str\n If `obj` has a `__summary__` method, it is used.\n\n If `obj` is a `SortedKeyList`, then it returns a description\n of the length of the list.\n\n Otherwise, `repr(obj)`."}
{"_id": "q_10296", "text": "Update the attributes of a JObject.\n\n Parameters\n ----------\n kwargs\n Keyword arguments of the form `attribute=new_value`\n\n Examples\n --------\n >>> J = jams.JObject(foo=5)\n >>> J.dumps()\n '{\"foo\": 5}'\n >>> J.update(bar='baz')\n >>> J.dumps()\n '{\"foo\": 5, \"bar\": \"baz\"}'"}
{"_id": "q_10297", "text": "Validate a JObject against its schema\n\n Parameters\n ----------\n strict : bool\n Enforce strict schema validation\n\n Returns\n -------\n valid : bool\n True if the jam validates\n False if not, and `strict==False`\n\n Raises\n ------\n SchemaError\n If `strict==True` and `jam` fails validation"}
{"_id": "q_10298", "text": "Append an observation to the data field\n\n Parameters\n ----------\n time : float >= 0\n duration : float >= 0\n The time and duration of the new observation, in seconds\n value\n confidence\n The value and confidence of the new observations.\n\n Types and values should conform to the namespace of the\n Annotation object.\n\n Examples\n --------\n >>> ann = jams.Annotation(namespace='chord')\n >>> ann.append(time=3, duration=2, value='E#')"}
{"_id": "q_10299", "text": "Validate this annotation object against the JAMS schema,\n and its data against the namespace schema.\n\n Parameters\n ----------\n strict : bool\n If `True`, then schema violations will cause an Exception.\n If `False`, then schema violations will issue a warning.\n\n Returns\n -------\n valid : bool\n `True` if the object conforms to schema.\n `False` if the object fails to conform to schema,\n but `strict == False`.\n\n Raises\n ------\n SchemaError\n If `strict == True` and the object fails validation\n\n See Also\n --------\n JObject.validate"}
{"_id": "q_10300", "text": "Trim the annotation and return as a new `Annotation` object.\n\n Trimming will result in the new annotation only containing observations\n that occur in the intersection of the time range spanned by the\n annotation and the time range specified by the user. The new annotation\n will span the time range ``[trim_start, trim_end]`` where\n ``trim_start = max(self.time, start_time)`` and ``trim_end =\n min(self.time + self.duration, end_time)``.\n\n If ``strict=False`` (default) observations that start before\n ``trim_start`` and end after it will be trimmed such that they start at\n ``trim_start``, and similarly observations that start before\n ``trim_end`` and end after it will be trimmed to end at ``trim_end``.\n If ``strict=True`` such borderline observations will be discarded.\n\n The new duration of the annotation will be ``trim_end - trim_start``.\n\n Note that if the range defined by ``[start_time, end_time]``\n doesn't intersect with the original time range spanned by the\n annotation the resulting annotation will contain no observations, will\n have the same start time as the original annotation and have duration\n 0.\n\n This function also copies over all the annotation metadata from the\n original annotation and documents the trim operation by adding a list\n of tuples to the annotation's sandbox keyed by\n ``Annotation.sandbox.trim`` which documents each trim operation with a\n tuple ``(start_time, end_time, trim_start, trim_end)``.\n\n Parameters\n ----------\n start_time : float\n The desired start time for the trimmed annotation in seconds.\n end_time\n The desired end time for the trimmed annotation in seconds. Must be\n greater than ``start_time``.\n strict : bool\n When ``False`` (default) observations that lie at the boundaries of\n the trimming range (given by ``[trim_start, trim_end]`` as\n described above), i.e. observations that start before and end after\n either the trim start or end time, will have their time and/or\n duration adjusted such that only the part of the observation that\n lies within the trim range is kept. When ``True`` such observations\n are discarded and not included in the trimmed annotation.\n\n Returns\n -------\n ann_trimmed : Annotation\n The trimmed annotation, returned as a new jams.Annotation object.\n If the trim range specified by ``[start_time, end_time]`` does not\n intersect at all with the original time range of the annotation a\n warning will be issued and the returned annotation will be empty.\n\n Raises\n ------\n ParameterError\n If ``end_time`` is not greater than ``start_time``.\n\n Examples\n --------\n >>> ann = jams.Annotation(namespace='tag_open', time=2, duration=8)\n >>> ann.append(time=2, duration=2, value='one')\n >>> ann.append(time=4, duration=2, value='two')\n >>> ann.append(time=6, duration=2, value='three')\n >>> ann.append(time=7, duration=2, value='four')\n >>> ann.append(time=8, duration=2, value='five')\n >>> ann_trim = ann.trim(5, 8, strict=False)\n >>> print(ann_trim.time, ann_trim.duration)\n (5, 3)\n >>> ann_trim.to_dataframe()\n time duration value confidence\n 0 5 1 two None\n 1 6 2 three None\n 2 7 1 four None\n >>> ann_trim_strict = ann.trim(5, 8, strict=True)\n >>> print(ann_trim_strict.time, ann_trim_strict.duration)\n (5, 3)\n >>> ann_trim_strict.to_dataframe()\n time duration value confidence\n 0 6 2 three None"}
{"_id": "q_10301", "text": "Sample the annotation at specified times.\n\n Parameters\n ----------\n times : np.ndarray, non-negative, ndim=1\n The times (in seconds) to sample the annotation\n\n confidence : bool\n If `True`, return both values and confidences.\n If `False` (default) only return values.\n\n Returns\n -------\n values : list\n `values[i]` is a list of observation values for intervals\n that cover `times[i]`.\n\n confidence : list (optional)\n `confidence` values corresponding to `values`"}
{"_id": "q_10302", "text": "Render this annotation list in HTML\n\n Returns\n -------\n rendered : str\n An HTML table containing this annotation's data."}
{"_id": "q_10303", "text": "Trim every annotation contained in the annotation array using\n `Annotation.trim` and return as a new `AnnotationArray`.\n\n See `Annotation.trim` for details about trimming. This function does\n not modify the annotations in the original annotation array.\n\n\n Parameters\n ----------\n start_time : float\n The desired start time for the trimmed annotations in seconds.\n end_time\n The desired end time for trimmed annotations in seconds. Must be\n greater than ``start_time``.\n strict : bool\n When ``False`` (default) observations that lie at the boundaries of\n the trimming range (see `Annotation.trim` for details) will have\n their time and/or duration adjusted such that only the part of the\n observation that lies within the trim range is kept. When ``True``\n such observations are discarded and not included in the trimmed\n annotation.\n\n Returns\n -------\n trimmed_array : AnnotationArray\n An annotation array where every annotation has been trimmed."}
{"_id": "q_10304", "text": "Slice every annotation contained in the annotation array using\n `Annotation.slice`\n and return as a new AnnotationArray\n\n See `Annotation.slice` for details about slicing. This function does\n not modify the annotations in the original annotation array.\n\n Parameters\n ----------\n start_time : float\n The desired start time for slicing in seconds.\n end_time\n The desired end time for slicing in seconds. Must be greater than\n ``start_time``.\n strict : bool\n When ``False`` (default) observations that lie at the boundaries of\n the slicing range (see `Annotation.slice` for details) will have\n their time and/or duration adjusted such that only the part of the\n observation that lies within the trim range is kept. When ``True``\n such observations are discarded and not included in the sliced\n annotation.\n\n Returns\n -------\n sliced_array : AnnotationArray\n An annotation array where every annotation has been sliced."}
{"_id": "q_10305", "text": "Add the contents of another jam to this object.\n\n Note that, by default, this method fails if file_metadata is not\n identical and raises a ValueError; either resolve this manually\n (because conflicts should almost never happen), force an 'overwrite',\n or tell the method to 'ignore' the metadata of the object being added.\n\n Parameters\n ----------\n jam: JAMS object\n Object to add to this jam\n\n on_conflict: str, default='fail'\n Strategy for resolving metadata conflicts; one of\n ['fail', 'overwrite', or 'ignore'].\n\n Raises\n ------\n ParameterError\n if `on_conflict` is an unknown value\n\n JamsError\n If a conflict is detected and `on_conflict='fail'`"}
{"_id": "q_10306", "text": "Serialize annotation as a JSON formatted stream to file.\n\n Parameters\n ----------\n path_or_file : str or file-like\n Path to save the JAMS object on disk\n OR\n An open file descriptor to write into\n\n strict : bool\n Force strict schema validation\n\n fmt : str ['auto', 'jams', 'jamz']\n The output encoding format.\n\n If `auto`, it is inferred from the file name.\n\n If the input is an open file handle, `jams` encoding\n is used.\n\n\n Raises\n ------\n SchemaError\n If `strict == True` and the JAMS object fails schema\n or namespace validation.\n\n See also\n --------\n validate"}
{"_id": "q_10307", "text": "Plotting wrapper for pitch contours"}
{"_id": "q_10308", "text": "Plotting wrapper for events"}
{"_id": "q_10309", "text": "Plotting wrapper for beat-position data"}
{"_id": "q_10310", "text": "Plotting wrapper for piano rolls"}
{"_id": "q_10311", "text": "Visualize a jams annotation through mir_eval\n\n Parameters\n ----------\n annotation : jams.Annotation\n The annotation to display\n\n meta : bool\n If `True`, include annotation metadata in the figure\n\n kwargs\n Additional keyword arguments to mir_eval.display functions\n\n Returns\n -------\n ax\n Axis handles for the new display\n\n Raises\n ------\n NamespaceError\n If the annotation cannot be visualized"}
{"_id": "q_10312", "text": "Display multiple annotations with shared axes\n\n Parameters\n ----------\n annotations : jams.AnnotationArray\n A collection of annotations to display\n\n fig_kw : dict\n Keyword arguments to `plt.figure`\n\n meta : bool\n If `True`, display annotation metadata for each annotation\n\n kwargs\n Additional keyword arguments to the `mir_eval.display` routines\n\n Returns\n -------\n fig\n The created figure\n axs\n List of subplot axes corresponding to each displayed annotation"}
{"_id": "q_10313", "text": "Generate a click sample.\n\n This replicates functionality from mir_eval.sonify.clicks,\n but exposes the target frequency and duration."}
{"_id": "q_10314", "text": "Sonify events with clicks.\n\n This uses mir_eval.sonify.clicks, and is appropriate for instantaneous\n events such as beats or segment boundaries."}
{"_id": "q_10315", "text": "Sonify beats and downbeats together."}
{"_id": "q_10316", "text": "Sonify a piano-roll\n\n This uses mir_eval.sonify.time_frequency, and is appropriate\n for sparse transcription data, e.g., annotations in the `note_midi`\n namespace."}
{"_id": "q_10317", "text": "Setup time axis."}
{"_id": "q_10318", "text": "Blank DC bins in coarse channels.\n\n Note: currently only works if entire file is read"}
{"_id": "q_10319", "text": "Setup plotting edges."}
{"_id": "q_10320", "text": "Plot waterfall of data\n\n Args:\n f_start (float): start frequency, in MHz\n f_stop (float): stop frequency, in MHz\n logged (bool): Plot in linear (False) or dB units (True),\n cb (bool): for plotting the colorbar\n kwargs: keyword args to be passed to matplotlib imshow()"}
{"_id": "q_10321", "text": "Plot the time series.\n\n Args:\n f_start (float): start frequency, in MHz\n f_stop (float): stop frequency, in MHz\n logged (bool): Plot in linear (False) or dB units (True),\n kwargs: keyword args to be passed to matplotlib imshow()"}
{"_id": "q_10322", "text": "Converts a data array with length n_chans to an array of length n_coarse_chans\n by averaging over the coarse channels"}
{"_id": "q_10323", "text": "Returns calibrated Stokes parameters for an observation given an array\n of differential gains and phase differences."}
{"_id": "q_10324", "text": "Output fractional linear and circular polarizations for a\n rawspec cross polarization .fil file. NOT STANDARD USE"}
{"_id": "q_10325", "text": "Writes two new filterbank files containing fractional linear and\n circular polarization data"}
{"_id": "q_10326", "text": "Return the index of the closest in xarr to value val"}
{"_id": "q_10327", "text": "Rebin data by averaging bins together\n\n Args:\n d (np.array): data\n n_x (int): number of bins in x dir to rebin into one\n n_y (int): number of bins in y dir to rebin into one\n\n Returns:\n d: rebinned data with shape (n_x, n_y)"}
{"_id": "q_10328", "text": "Upgrade data from nbits to 8 bits\n\n Notes: Pretty sure this function is a little broken!"}
{"_id": "q_10329", "text": "Plots the uncalibrated full stokes spectrum of the noise diode.\n Use diff=False to plot both ON and OFF, or diff=True for ON-OFF"}
{"_id": "q_10330", "text": "Plots the calculated gain offsets of each coarse channel along with\n the time averaged power spectra of the X and Y feeds"}
{"_id": "q_10331", "text": "Open a HDF5 or filterbank file\n\n Returns instance of a Reader to read data from file.\n\n ================== ==================================================\n Filename extension File type\n ================== ==================================================\n h5, hdf5 HDF5 format\n fil fil format\n *other* Will raise NotImplementedError\n ================== =================================================="}
{"_id": "q_10332", "text": "Calculate size of data of interest."}
{"_id": "q_10333", "text": "Calculate shape of data of interest."}
{"_id": "q_10334", "text": "Setup channel borders"}
{"_id": "q_10335", "text": "Updating frequency borders from channel values"}
{"_id": "q_10336", "text": "Populate time axis.\n If update_header, then only return tstart"}
{"_id": "q_10337", "text": "Populate frequency axis"}
{"_id": "q_10338", "text": "Given the blob dimensions, calculate how many fit in the data selection."}
{"_id": "q_10339", "text": "Check if the current selection is too large."}
{"_id": "q_10340", "text": "Read data."}
{"_id": "q_10341", "text": "Read a block of data. The number of samples per row is set in self.channels\n If reverse=True the x axis is flipped."}
{"_id": "q_10342", "text": "Write data to .fil file.\n It checks the file size and then decides how to write the file.\n\n Args:\n filename_out (str): Name of output file"}
{"_id": "q_10343", "text": "Write data to HDF5 file.\n It checks the file size and then decides how to write the file.\n\n Args:\n filename_out (str): Name of output file"}
{"_id": "q_10344", "text": "Sets the blob dimensions, trying to read around 1024 MiB at a time.\n This assumes a chunk is about 1 MiB."}
{"_id": "q_10345", "text": "Extract a portion of data by frequency range.\n\n Args:\n f_start (float): start frequency in MHz\n f_stop (float): stop frequency in MHz\n if_id (int): IF input identification (req. when multiple IFs in file)\n\n Returns:\n (freqs, data) (np.arrays): frequency axis in MHz and data subset"}
{"_id": "q_10346", "text": "Command line tool for plotting and viewing info on guppi raw files"}
{"_id": "q_10347", "text": "Read first header in file\n\n Returns:\n header (dict): keyword:value pairs of header metadata"}
{"_id": "q_10348", "text": "Compute some basic stats on the next block of data"}
{"_id": "q_10349", "text": "Plot a histogram of data values"}
{"_id": "q_10350", "text": "Generate a blimpy header dictionary"}
{"_id": "q_10351", "text": "Command line tool to make a md5sum comparison of two .fil files."}
{"_id": "q_10352", "text": "Command line tool for converting guppi raw into HDF5 versions of guppi raw"}
{"_id": "q_10353", "text": "Returns time-averaged spectra of the ON and OFF measurements in a\n calibrator measurement with flickering noise diode\n\n Parameters\n ----------\n data : 2D Array object (float)\n 2D dynamic spectrum for data (any Stokes parameter) with flickering noise diode.\n tsamp : float\n Sampling time of data in seconds\n diode_p : float\n Period of the flickering noise diode in seconds\n numsamps : int\n Number of samples over which to average noise diode ON and OFF\n switch : boolean\n Use switch=True if the noise diode \"skips\" turning from OFF to ON once or vice versa\n inds : boolean\n Use inds=True to also return the indexes of the time series where the ND is ON and OFF"}
{"_id": "q_10354", "text": "Given properties of the calibrator source, calculate fluxes of the source\n in a particular frequency range\n\n Parameters\n ----------\n calflux : float\n Known flux of calibrator source at a particular frequency\n calfreq : float\n Frequency where calibrator source has flux calflux (see above)\n spec_in : float\n Known power-law spectral index of calibrator source. Use convention flux(frequency) = constant * frequency^(spec_in)\n centerfreqs : 1D Array (float)\n Central frequency values of each coarse channel\n oneflux : boolean\n Use oneflux to choose between calculating the flux for each coarse channel (False)\n or using one value for the entire frequency range (True)"}
{"_id": "q_10355", "text": "Returns central frequency of each coarse channel\n\n Parameters\n ----------\n freqs : 1D Array (float)\n Frequency values for each bin of the spectrum\n chan_per_coarse: int\n Number of frequency bins per coarse channel"}
{"_id": "q_10356", "text": "Returns frequency dependent system temperature given observations on and off a calibrator source\n\n Parameters\n ----------\n (See diode_spec())"}
{"_id": "q_10357", "text": "Produce calibrated Stokes I for an observation given a noise diode\n measurement on the source and a diode spectrum with the same number of\n coarse channels\n\n Parameters\n ----------\n main_obs_name : str\n Path to filterbank file containing final data to be calibrated\n dio_name : str\n Path to filterbank file for observation on the target source with flickering noise diode\n dspec : 1D Array (float) or float\n Coarse channel spectrum (or average) of the noise diode in Jy (obtained from diode_spec())\n Tsys : 1D Array (float) or float\n Coarse channel spectrum (or average) of the system temperature in Jy\n fullstokes: boolean\n Use fullstokes=True if data is in IQUV format or just Stokes I, use fullstokes=False if\n it is in cross_pols format"}
{"_id": "q_10358", "text": "Return the length of the blimpy header, in bytes\n\n Args:\n filename (str): name of file to open\n\n Returns:\n idx_end (int): length of header, in bytes"}
{"_id": "q_10359", "text": "Open file and confirm if it is a filterbank file or not."}
{"_id": "q_10360", "text": "Generate a serialized sigproc header which can be written to disk.\n\n Args:\n f (Filterbank object): Filterbank object for which to generate header\n\n Returns:\n header_str (str): Serialized string corresponding to header"}
{"_id": "q_10361", "text": "Calculate number of integrations in a given file"}
{"_id": "q_10362", "text": "Check authorization id and other properties returned by the\n authentication mechanism.\n\n [receiving entity only]\n\n Allow only no authzid or authzid equal to current username@domain\n\n FIXME: other rules in s2s\n\n :Parameters:\n - `properties`: data obtained during authentication\n :Types:\n - `properties`: mapping\n\n :return: `True` if user is authorized to use a provided authzid\n :returntype: `bool`"}
{"_id": "q_10363", "text": "Start SASL authentication process.\n\n [initiating entity only]\n\n :Parameters:\n - `username`: user name.\n - `authzid`: authorization ID.\n - `mechanism`: SASL mechanism to use."}
{"_id": "q_10364", "text": "Make a RosterItem from an XML element.\n\n :Parameters:\n - `element`: the XML element\n :Types:\n - `element`: :etree:`ElementTree.Element`\n\n :return: a freshly created roster item\n :returntype: `cls`"}
{"_id": "q_10365", "text": "Make an XML element from self.\n\n :Parameters:\n - `parent`: Parent element\n :Types:\n - `parent`: :etree:`ElementTree.Element`"}
{"_id": "q_10366", "text": "Check if `self` is a valid roster push item.\n\n A valid item must have a proper `subscription` value and a valid value\n for 'ask'.\n\n :Parameters:\n - `fix`: if `True` then replace invalid 'subscription' and 'ask'\n values with the defaults\n :Types:\n - `fix`: `bool`\n\n :Raise: `ValueError` if the item is invalid."}
{"_id": "q_10367", "text": "Check if `self` is a valid roster set item.\n\n For use on a server to validate incoming roster sets.\n\n A valid item must have a proper `subscription` value and a valid value\n for 'ask'. The lengths of name and group names must fit the configured\n limits.\n\n :Parameters:\n - `fix`: if `True` then replace invalid 'subscription' and 'ask'\n values with the right defaults\n - `settings`: settings object providing the name limits\n :Types:\n - `fix`: `bool`\n - `settings`: `XMPPSettings`\n\n :Raise: `BadRequestProtocolError` if the item is invalid."}
{"_id": "q_10368", "text": "Set of groups defined in the roster.\n\n :Return: the groups\n :ReturnType: `set` of `unicode`"}
{"_id": "q_10369", "text": "Return a list of items within a given group.\n\n :Parameters:\n - `name`: name to look-up\n - `case_sensitive`: if `False` the matching will be case\n insensitive.\n :Types:\n - `name`: `unicode`\n - `case_sensitive`: `bool`\n\n :Returntype: `list` of `RosterItem`"}
{"_id": "q_10370", "text": "Add an item to the roster.\n\n This will not automatically update the roster on the server.\n\n :Parameters:\n - `item`: the item to add\n - `replace`: if `True` then existing item will be replaced,\n otherwise a `ValueError` will be raised on conflict\n :Types:\n - `item`: `RosterItem`\n - `replace`: `bool`"}
{"_id": "q_10371", "text": "Remove item from the roster.\n\n :Parameters:\n - `jid`: JID of the item to remove\n :Types:\n - `jid`: `JID`"}
{"_id": "q_10372", "text": "Request roster upon login."}
{"_id": "q_10373", "text": "Request roster from server.\n\n :Parameters:\n - `version`: if not `None` versioned roster will be requested\n for given local version. Use \"\" to request full roster.\n :Types:\n - `version`: `unicode`"}
{"_id": "q_10374", "text": "Handle failure of the roster request."}
{"_id": "q_10375", "text": "Handle a roster push received from server."}
{"_id": "q_10376", "text": "Add a contact to the roster.\n\n :Parameters:\n - `jid`: contact's jid\n - `name`: name for the contact\n - `groups`: sequence of group names the contact should belong to\n - `callback`: function to call when the request succeeds. It should\n accept a single argument - a `RosterItem` describing the\n requested change\n - `error_callback`: function to call when the request fails. It\n should accept a single argument - an error stanza received\n (`None` in case of timeout)\n :Types:\n - `jid`: `JID`\n - `name`: `unicode`\n - `groups`: sequence of `unicode`"}
{"_id": "q_10377", "text": "Modify a contact in the roster.\n\n :Parameters:\n - `jid`: contact's jid\n - `name`: a new name for the contact\n - `groups`: a sequence of group names the contact should belong to\n - `callback`: function to call when the request succeeds. It should\n accept a single argument - a `RosterItem` describing the\n requested change\n - `error_callback`: function to call when the request fails. It\n should accept a single argument - an error stanza received\n (`None` in case of timeout)\n :Types:\n - `jid`: `JID`\n - `name`: `unicode`\n - `groups`: sequence of `unicode`"}
{"_id": "q_10378", "text": "Unlink and free the XML node owned by `self`."}
{"_id": "q_10379", "text": "Set history parameters.\n\n Types:\n - `parameters`: `HistoryParameters`"}
{"_id": "q_10380", "text": "Return history parameters carried by the stanza.\n\n :returntype: `HistoryParameters`"}
{"_id": "q_10381", "text": "Set password for the MUC request.\n\n :Parameters:\n - `password`: password\n :Types:\n - `password`: `unicode`"}
{"_id": "q_10382", "text": "Get password from the MUC request.\n\n :returntype: `unicode`"}
{"_id": "q_10383", "text": "Initialize a `MucItem` object from an XML node.\n\n :Parameters:\n - `xmlnode`: the XML node.\n :Types:\n - `xmlnode`: `libxml2.xmlNode`"}
{"_id": "q_10384", "text": "Get a list of objects describing the content of `self`.\n\n :return: the list of objects.\n :returntype: `list` of `MucItemBase` (`MucItem` and/or `MucStatus`)"}
{"_id": "q_10385", "text": "Add an item to `self`.\n\n :Parameters:\n - `item`: the item to add.\n :Types:\n - `item`: `MucItemBase`"}
{"_id": "q_10386", "text": "Get the MUC specific payload element.\n\n :return: the object describing the stanza payload in MUC namespace.\n :returntype: `MucX` or `MucUserX` or `MucAdminQuery` or `MucOwnerX`"}
{"_id": "q_10387", "text": "Make the presence stanza a MUC room join request.\n\n :Parameters:\n - `password`: password to the room.\n - `history_maxchars`: limit of the total number of characters in\n history.\n - `history_maxstanzas`: limit of the total number of messages in\n history.\n - `history_seconds`: send only messages received in the last\n `seconds` seconds.\n - `history_since`: Send only the messages received since the\n dateTime specified (UTC).\n :Types:\n - `password`: `unicode`\n - `history_maxchars`: `int`\n - `history_maxstanzas`: `int`\n - `history_seconds`: `int`\n - `history_since`: `datetime.datetime`"}
{"_id": "q_10388", "text": "Make the iq stanza a MUC room participant kick request.\n\n :Parameters:\n - `nick`: nickname of user to kick.\n - `reason`: reason of the kick.\n :Types:\n - `nick`: `unicode`\n - `reason`: `unicode`\n\n :return: object describing the kick request details.\n :returntype: `MucItem`"}
{"_id": "q_10389", "text": "Modify stringprep cache size.\n\n :Parameters:\n - `size`: new cache size"}
{"_id": "q_10390", "text": "Mapping part of string preparation."}
{"_id": "q_10391", "text": "Checks for prohibited characters."}
{"_id": "q_10392", "text": "Checks for unassigned character codes."}
{"_id": "q_10393", "text": "Decorator for glib callback methods of GLibMainLoop used to store the\n exception raised."}
{"_id": "q_10394", "text": "Timeout callback called to try prepare an IOHandler again."}
{"_id": "q_10395", "text": "Call the timeout handler due."}
{"_id": "q_10396", "text": "Stops the loop after the time specified in the `loop` call."}
{"_id": "q_10397", "text": "Creates Iq, Message or Presence object for XML stanza `element`\n\n :Parameters:\n - `element`: the stanza XML element\n - `return_path`: object through which responses to this stanza should\n be sent (will be weakly referenced by the stanza object).\n - `language`: default language for the stanza\n :Types:\n - `element`: :etree:`ElementTree.Element`\n - `return_path`: `StanzaRoute`\n - `language`: `unicode`"}
{"_id": "q_10398", "text": "Examines the response returned by a stanza handler and sends all\n stanzas provided.\n\n :Parameters:\n - `response`: the response to process. `None` or `False` means 'not\n handled'. `True` means 'handled'. Stanza or stanza list means\n handled with the stanzas to send back\n :Types:\n - `response`: `bool` or `Stanza` or iterable of `Stanza`\n\n :Returns:\n - `True`: if `response` is `Stanza`, iterable or `True` (meaning\n the stanza was processed).\n - `False`: when `response` is `False` or `None`\n\n :returntype: `bool`"}
{"_id": "q_10399", "text": "Process IQ stanza of type 'response' or 'error'.\n\n :Parameters:\n - `stanza`: the stanza received\n :Types:\n - `stanza`: `Iq`\n\n If a matching handler is available pass the stanza to it. Otherwise\n ignore it if it is \"error\" or \"result\" stanza or return\n \"feature-not-implemented\" error if it is \"get\" or \"set\"."}
{"_id": "q_10400", "text": "Process IQ stanza received.\n\n :Parameters:\n - `stanza`: the stanza received\n :Types:\n - `stanza`: `Iq`\n\n If a matching handler is available pass the stanza to it. Otherwise\n ignore it if it is \"error\" or \"result\" stanza or return\n \"feature-not-implemented\" error if it is \"get\" or \"set\"."}
{"_id": "q_10401", "text": "Process a stanza not addressed to us.\n\n Return a \"recipient-unavailable\" error if it is not an\n \"error\" or \"result\" stanza.\n\n This method should be overridden in derived classes if they\n are supposed to handle stanzas not addressed directly to the local\n stream endpoint.\n\n :Parameters:\n - `stanza`: presence stanza to be processed"}
{"_id": "q_10402", "text": "Process a stanza received from the stream.\n\n First \"fix\" the stanza with `self.fix_in_stanza()`,\n then pass it to `self.route_stanza()` if it is not directed\n to `self.me` and `self.process_all_stanzas` is not True. Otherwise\n the stanza is passed to `self.process_iq()`, `self.process_message()`\n or `self.process_presence()` appropriately.\n\n :Parameters:\n - `stanza`: the stanza received.\n\n :returns: `True` when stanza was handled"}
{"_id": "q_10403", "text": "Set response handler for an IQ \"get\" or \"set\" stanza.\n\n This should be called before the stanza is sent.\n\n :Parameters:\n - `stanza`: an IQ stanza\n - `res_handler`: result handler for the stanza. Will be called\n when matching <iq type=\"result\"/> is received. Its only\n argument will be the stanza received. The handler may return\n a stanza or list of stanzas which should be sent in response.\n - `err_handler`: error handler for the stanza. Will be called\n when matching <iq type=\"error\"/> is received. Its only\n argument will be the stanza received. The handler may return\n a stanza or list of stanzas which should be sent in response\n but this feature should rather not be used (it is better not to\n respond to 'error' stanzas).\n - `timeout_handler`: timeout handler for the stanza. Will be called\n (with no arguments) when no matching <iq type=\"result\"/> or <iq\n type=\"error\"/> is received in the next `timeout` seconds.\n - `timeout`: timeout value for the stanza. After that time if no\n matching <iq type=\"result\"/> nor <iq type=\"error\"/> stanza is\n received, then timeout_handler (if given) will be called."}
{"_id": "q_10404", "text": "Same as `set_response_handlers` but assume `self.lock` is\n acquired."}
{"_id": "q_10405", "text": "Send a stanza somewhere.\n\n The default implementation sends it via the `uplink` if it is defined\n or raises the `NoRouteError`.\n\n :Parameters:\n - `stanza`: the stanza to send.\n :Types:\n - `stanza`: `pyxmpp.stanza.Stanza`"}
{"_id": "q_10406", "text": "Call the event dispatcher.\n\n Quit the main loop when the `QUIT` event is reached.\n\n :Return: `True` if `QUIT` was reached."}
{"_id": "q_10407", "text": "Create error response for any non-error message stanza.\n\n :Parameters:\n - `cond`: error condition name, as defined in XMPP specification.\n\n :return: new message stanza with the same \"id\" as self, \"from\" and\n \"to\" attributes swapped, type=\"error\" and containing <error />\n element plus payload of `self`.\n :returntype: `Message`"}
{"_id": "q_10408", "text": "Find a SessionHandler instance in the list and move it to the beginning."}
{"_id": "q_10409", "text": "Schedule a new XMPP c2s connection."}
{"_id": "q_10410", "text": "Gracefully disconnect from the server."}
{"_id": "q_10411", "text": "Same as `close_stream` but with the `lock` acquired."}
{"_id": "q_10412", "text": "Handle the `AuthenticatedEvent`."}
{"_id": "q_10413", "text": "Handle the `AuthorizedEvent`."}
{"_id": "q_10414", "text": "Default base client handlers factory.\n\n Subclasses can provide different behaviour by overriding this.\n\n :Return: list of handlers"}
{"_id": "q_10415", "text": "Return a payload class for given element name."}
{"_id": "q_10416", "text": "Unquote quoted value from DIGEST-MD5 challenge or response.\n\n If `data` doesn't start or doesn't end with '\"' then return it unchanged,\n remove the quotes and unescape backslashes otherwise.\n\n :Parameters:\n - `data`: a quoted string.\n :Types:\n - `data`: `bytes`\n\n :return: the unquoted string.\n :returntype: `bytes`"}
{"_id": "q_10417", "text": "Prepare a string for quoting for DIGEST-MD5 challenge or response.\n\n Don't add the quotes, only escape '\"' and \"\\\\\" with backslashes.\n\n :Parameters:\n - `data`: a raw string.\n :Types:\n - `data`: `bytes`\n\n :return: `data` with '\"' and \"\\\\\" escaped using \"\\\\\".\n :returntype: `bytes`"}
{"_id": "q_10418", "text": "Initialize a `VCardAdr` object from an XML element.\n\n :Parameters:\n - `value`: field value as an XML node\n :Types:\n - `value`: `libxml2.xmlNode`"}
{"_id": "q_10419", "text": "Initialize the mandatory `self.fn` from `self.n`.\n\n This is a workaround for buggy clients which set only one of them."}
{"_id": "q_10420", "text": "Initialize a VCard object from XML node.\n\n :Parameters:\n - `data`: vcard to parse.\n :Types:\n - `data`: `libxml2.xmlNode`"}
{"_id": "q_10421", "text": "Get the RFC2426 representation of `self`.\n\n :return: the UTF-8 encoded RFC2426 representation.\n :returntype: `str`"}
{"_id": "q_10422", "text": "Update current status of the item and compute time of the next\n state change.\n\n :return: the new state.\n :returntype: :std:`datetime`"}
{"_id": "q_10423", "text": "Remove the fetcher from cache and mark it not active."}
{"_id": "q_10424", "text": "Handle a successful retrieval and call the appropriate handler.\n\n Should be called when retrieval succeeds.\n\n Do nothing when the fetcher is not active any more (after\n one of the handlers was already called).\n\n :Parameters:\n - `value`: fetched object.\n - `state`: initial state of the object.\n :Types:\n - `value`: any\n - `state`: `str`"}
{"_id": "q_10425", "text": "Handle a retrieval error and call the appropriate handler.\n\n Should be called when retrieval fails.\n\n Do nothing when the fetcher is not active any more (after\n one of the handlers was already called).\n\n :Parameters:\n - `error_data`: additional information about the error (e.g. `StanzaError` instance).\n :Types:\n - `error_data`: fetcher-dependent"}
{"_id": "q_10426", "text": "Handle fetcher timeout and call the appropriate handler.\n\n Is called by the cache object and should _not_ be called by the fetcher or\n the application.\n\n Do nothing when the fetcher is not active any more (after\n one of the handlers was already called)."}
{"_id": "q_10427", "text": "Check if a backup item is available in cache and call\n the item handler if it is.\n\n :return: `True` if backup item was found.\n :returntype: `bool`"}
{"_id": "q_10428", "text": "Get an item from the cache.\n\n :Parameters:\n - `address`: its address.\n - `state`: the worst state that is acceptable.\n :Types:\n - `address`: any hashable\n - `state`: `str`\n\n :return: the item or `None` if it was not found.\n :returntype: `CacheItem`"}
{"_id": "q_10429", "text": "Update state of an item in the cache.\n\n Update item's state and remove the item from the cache\n if its new state is 'purged'\n\n :Parameters:\n - `item`: item to update.\n :Types:\n - `item`: `CacheItem`\n\n :return: new state of the item.\n :returntype: `str`"}
{"_id": "q_10430", "text": "Remove purged and overlimit items from the cache.\n\n TODO: optimize somehow.\n\n Leave no more than 75% of `self.max_items` items in the cache."}
{"_id": "q_10431", "text": "Remove a running fetcher from the list of active fetchers.\n\n :Parameters:\n - `fetcher`: fetcher instance.\n :Types:\n - `fetcher`: `CacheFetcher`"}
{"_id": "q_10432", "text": "Set the fetcher class.\n\n :Parameters:\n - `fetcher_class`: the fetcher class.\n :Types:\n - `fetcher_class`: `CacheFetcher` based class"}
{"_id": "q_10433", "text": "Register a fetcher class for an object class.\n\n :Parameters:\n - `object_class`: class to be retrieved by the fetcher.\n - `fetcher_class`: the fetcher class.\n :Types:\n - `object_class`: `classobj`\n - `fetcher_class`: `CacheFetcher` based class"}
{"_id": "q_10434", "text": "Unregister a fetcher class for an object class.\n\n :Parameters:\n - `object_class`: class retrieved by the fetcher.\n :Types:\n - `object_class`: `classobj`"}
{"_id": "q_10435", "text": "Add a server authenticator class to `SERVER_MECHANISMS_D`,\n `SERVER_MECHANISMS` and, optionally, to `SECURE_SERVER_MECHANISMS`"}
{"_id": "q_10436", "text": "Class decorator generator for `ClientAuthenticator` or\n `ServerAuthenticator` subclasses. Adds the class to the pyxmpp.sasl\n mechanism registry.\n\n :Parameters:\n - `name`: SASL mechanism name\n - `secure`: if the mechanism can be considered secure - `True`\n if it can be used over a plain-text channel\n - `preference`: mechanism preference level (the higher the better)\n :Types:\n - `name`: `unicode`\n - `secure`: `bool`\n - `preference`: `int`"}
{"_id": "q_10437", "text": "Send a session establishment request if the feature was advertised by\n the server."}
{"_id": "q_10438", "text": "Convert ASN.1 string to a Unicode string."}
{"_id": "q_10439", "text": "Get human-readable subject name derived from the SubjectName\n or SubjectAltName field."}
{"_id": "q_10440", "text": "Verify certificate for a server.\n\n :Parameters:\n - `server_name`: name of the server presenting the certificate\n - `srv_type`: service type requested, as used in the SRV record\n :Types:\n - `server_name`: `unicode` or `JID`\n - `srv_type`: `unicode`\n\n :Return: `True` if the certificate is valid for given name, `False`\n otherwise."}
{"_id": "q_10441", "text": "Return `True` if jid is listed in the certificate commonName.\n\n :Parameters:\n - `jid`: JID requested (domain part only)\n :Types:\n - `jid`: `JID`\n\n :Returntype: `bool`"}
{"_id": "q_10442", "text": "Check if the certificate is valid for a given domain-only JID\n and a service type.\n\n :Parameters:\n - `jid`: JID requested (domain part only)\n - `srv_type`: service type, e.g. 'xmpp-client'\n :Types:\n - `jid`: `JID`\n - `srv_type`: `unicode`\n :Returntype: `bool`"}
{"_id": "q_10443", "text": "Verify certificate for a client.\n\n Please note that `client_jid` is only a hint to choose from the names,\n another JID may be returned if `client_jid` is not included in the\n certificate.\n\n :Parameters:\n - `client_jid`: client name requested. May be `None` to allow\n any name in one of the `domains`.\n - `domains`: list of domains we can handle.\n :Types:\n - `client_jid`: `JID`\n - `domains`: `list` of `unicode`\n\n :Return: one of the jids in the certificate or `None` if no authorized\n name is found."}
{"_id": "q_10444", "text": "Load certificate data from an SSL socket."}
{"_id": "q_10445", "text": "Get certificate data from an SSL socket."}
{"_id": "q_10446", "text": "Decode DER-encoded certificate.\n\n :Parameters:\n - `data`: the encoded certificate\n :Types:\n - `data`: `bytes`\n\n :Return: decoded certificate data\n :Returntype: ASN1CertificateData"}
{"_id": "q_10447", "text": "Load data from an ASN.1 subject."}
{"_id": "q_10448", "text": "Load data from an ASN.1 validity value."}
{"_id": "q_10449", "text": "Load SubjectAltName from an ASN.1 GeneralNames value.\n\n :Parameters:\n - `alt_names`: the SubjectAltName extension value\n :Types:\n - `alt_names`: `GeneralNames`"}
{"_id": "q_10450", "text": "Request client connection and start the main loop."}
{"_id": "q_10451", "text": "Add a handler object.\n\n :Parameters:\n - `handler`: the object providing event handler methods\n :Types:\n - `handler`: `EventHandler`"}
{"_id": "q_10452", "text": "Remove a handler object.\n\n :Parameters:\n - `handler`: the object to remove"}
{"_id": "q_10453", "text": "Read all events currently in the queue and dispatch them to the\n handlers unless `dispatch` is `False`.\n\n Note: If the queue contains `QUIT` the events after it won't be\n removed.\n\n :Parameters:\n - `dispatch`: if the events should be handled (`True`) or ignored\n (`False`)\n\n :Return: `QUIT` if the `QUIT` event was reached."}
{"_id": "q_10454", "text": "Make a response for the first challenge from the server.\n\n :return: the response or a failure indicator.\n :returntype: `sasl.Response` or `sasl.Failure`"}
{"_id": "q_10455", "text": "Process the second challenge from the server and return the\n response.\n\n :Parameters:\n - `challenge`: the challenge from server.\n :Types:\n - `challenge`: `bytes`\n\n :return: the response or a failure indicator.\n :returntype: `sasl.Response` or `sasl.Failure`"}
{"_id": "q_10456", "text": "Decorator attaching a feature URI (for Service Discovery or Capabilities)\n to an XMPPFeatureHandler class."}
{"_id": "q_10457", "text": "Class decorator generator for decorating\n `StanzaPayload` subclasses.\n\n :Parameters:\n - `element_name`: XML element qname handled by the class\n :Types:\n - `element_name`: `unicode`"}
{"_id": "q_10458", "text": "Method decorator generator for decorating stream element\n handler methods in `StreamFeatureHandler` subclasses.\n\n :Parameters:\n - `element_name`: stream element QName\n - `usage_restriction`: optional usage restriction: \"initiator\" or\n \"receiver\"\n :Types:\n - `element_name`: `unicode`\n - `usage_restriction`: `unicode`"}
{"_id": "q_10459", "text": "Create a new `Option` object from an XML element.\n\n :Parameters:\n - `xmlnode`: the XML element.\n :Types:\n - `xmlnode`: `libxml2.xmlNode`\n\n :return: the object created.\n :returntype: `Option`"}
{"_id": "q_10460", "text": "Add an option for the field.\n\n :Parameters:\n - `value`: option values.\n - `label`: option label (human-readable description).\n :Types:\n - `value`: `list` of `unicode`\n - `label`: `unicode`"}
{"_id": "q_10461", "text": "Create a new `Field` object from an XML element.\n\n :Parameters:\n - `xmlnode`: the XML element.\n :Types:\n - `xmlnode`: `libxml2.xmlNode`\n\n :return: the object created.\n :returntype: `Field`"}
{"_id": "q_10462", "text": "Add a field to the item.\n\n :Parameters:\n - `name`: field name.\n - `values`: raw field values. Not to be used together with `value`.\n - `field_type`: field type.\n - `label`: field label.\n - `options`: optional values for the field.\n - `required`: `True` if the field is required.\n - `desc`: natural-language description of the field.\n - `value`: field value or values in a field_type-specific type. May be used only\n if `values` parameter is not provided.\n :Types:\n - `name`: `unicode`\n - `values`: `list` of `unicode`\n - `field_type`: `str`\n - `label`: `unicode`\n - `options`: `list` of `Option`\n - `required`: `bool`\n - `desc`: `unicode`\n - `value`: `bool` for \"boolean\" field, `JID` for \"jid-single\", `list` of `JID`\n for \"jid-multi\", `list` of `unicode` for \"list-multi\" and \"text-multi\"\n and `unicode` for other field types.\n\n :return: the field added.\n :returntype: `Field`"}
{"_id": "q_10463", "text": "Create a new `Item` object from an XML element.\n\n :Parameters:\n - `xmlnode`: the XML element.\n :Types:\n - `xmlnode`: `libxml2.xmlNode`\n\n :return: the object created.\n :returntype: `Item`"}
{"_id": "q_10464", "text": "Make a \"submit\" form using data in `self`.\n\n Remove unneeded information from the form. The information removed\n includes: title, instructions, field labels, fixed fields etc.\n\n :raise ValueError: when any required field has no value.\n\n :Parameters:\n - `keep_types`: when `True` field type information will be included\n in the result form. That is usually not needed.\n :Types:\n - `keep_types`: `bool`\n\n :return: the form created.\n :returntype: `Form`"}
{"_id": "q_10465", "text": "Initialize a `Form` object from an XML element.\n\n :Parameters:\n - `xmlnode`: the XML element.\n :Types:\n - `xmlnode`: `libxml2.xmlNode`"}
{"_id": "q_10466", "text": "Register Service Discovery cache fetchers into given\n cache suite and using the stream provided.\n\n :Parameters:\n - `cache_suite`: the cache suite where the fetchers are to be\n registered.\n - `stream`: the stream to be used by the fetchers.\n :Types:\n - `cache_suite`: `cache.CacheSuite`\n - `stream`: `pyxmpp.stream.Stream`"}
{"_id": "q_10467", "text": "Get the category of the item.\n\n :return: the category of the item.\n :returntype: `unicode`"}
{"_id": "q_10468", "text": "Set the category of the item.\n\n :Parameters:\n - `category`: the new category.\n :Types:\n - `category`: `unicode`"}
{"_id": "q_10469", "text": "Set the type of the item.\n\n :Parameters:\n - `item_type`: the new type.\n :Types:\n - `item_type`: `unicode`"}
{"_id": "q_10470", "text": "Check if `self` contains an item.\n\n :Parameters:\n - `jid`: JID of the item.\n - `node`: node name of the item.\n :Types:\n - `jid`: `JID`\n - `node`: `libxml2.xmlNode`\n\n :return: `True` if the item is found in `self`.\n :returntype: `bool`"}
{"_id": "q_10471", "text": "Get the features contained in `self`.\n\n :return: the list of features.\n :returntype: `list` of `unicode`"}
{"_id": "q_10472", "text": "Check if `self` contains the named feature.\n\n :Parameters:\n - `var`: the feature name.\n :Types:\n - `var`: `unicode`\n\n :return: `True` if the feature is found in `self`.\n :returntype: `bool`"}
{"_id": "q_10473", "text": "Remove a feature from `self`.\n\n :Parameters:\n - `var`: the feature name.\n :Types:\n - `var`: `unicode`"}
{"_id": "q_10474", "text": "List the identity objects contained in `self`.\n\n :return: the list of identities.\n :returntype: `list` of `DiscoIdentity`"}
{"_id": "q_10475", "text": "Check if the item described by `self` belongs to the given category\n and type.\n\n :Parameters:\n - `item_category`: the category name.\n - `item_type`: the type name. If `None` then only the category is\n checked.\n :Types:\n - `item_category`: `unicode`\n - `item_type`: `unicode`\n\n :return: `True` if `self` contains at least one <identity/> object with\n given type and category.\n :returntype: `bool`"}
{"_id": "q_10476", "text": "Add an identity to the `DiscoInfo` object.\n\n :Parameters:\n - `item_name`: name of the item.\n - `item_category`: category of the item.\n - `item_type`: type of the item.\n :Types:\n - `item_name`: `unicode`\n - `item_category`: `unicode`\n - `item_type`: `unicode`\n\n :returns: the identity created.\n :returntype: `DiscoIdentity`"}
{"_id": "q_10477", "text": "Create error response for a \"get\" or \"set\" iq stanza.\n\n :Parameters:\n - `cond`: error condition name, as defined in XMPP specification.\n\n :return: new `Iq` object with the same \"id\" as self, \"from\" and \"to\"\n attributes swapped, type=\"error\" and containing <error /> element\n plus payload of `self`.\n :returntype: `Iq`"}
{"_id": "q_10478", "text": "Create result response for a \"get\" or \"set\" iq stanza.\n\n :return: new `Iq` object with the same \"id\" as self, \"from\" and \"to\"\n attributes replaced and type=\"result\".\n :returntype: `Iq`"}
{"_id": "q_10479", "text": "Request a TLS-encrypted connection.\n\n [initiating entity only]"}
{"_id": "q_10480", "text": "Verify the peer certificate on the `TLSConnectedEvent`."}
{"_id": "q_10481", "text": "Default certificate verification callback for TLS connections.\n\n :Parameters:\n - `cert`: certificate information\n :Types:\n - `cert`: `CertificateData`\n\n :return: computed verification result."}
{"_id": "q_10482", "text": "Parse the command-line arguments and run the tool."}
{"_id": "q_10483", "text": "Do the expiration of dictionary items.\n\n Remove items that expired by now from the dictionary.\n\n :Return: time, in seconds, when the next item expires or `None`\n :returntype: `float`"}
{"_id": "q_10484", "text": "Do the expiration of a dictionary item.\n\n Remove the item if it has expired by now.\n\n :Parameters:\n - `key`: key to the object.\n :Types:\n - `key`: any hashable value"}
{"_id": "q_10485", "text": "Decode error element of the stanza."}
{"_id": "q_10486", "text": "Set stanza payload to a single item.\n\n All current stanza content will be dropped.\n Marks the stanza dirty.\n\n :Parameters:\n - `payload`: XML element or stanza payload object to use\n :Types:\n - `payload`: :etree:`ElementTree.Element` or `StanzaPayload`"}
{"_id": "q_10487", "text": "Add a new stanza payload.\n\n Marks the stanza dirty.\n\n :Parameters:\n - `payload`: XML element or stanza payload object to add\n :Types:\n - `payload`: :etree:`ElementTree.Element` or `StanzaPayload`"}
{"_id": "q_10488", "text": "Return list of stanza payload objects.\n\n :Parameters:\n - `specialize`: If `True`, then return objects of specialized\n `StanzaPayload` classes whenever possible, otherwise the\n representation already available will be used (often\n `XMLPayload`)\n\n :Returntype: `list` of `StanzaPayload`"}
{"_id": "q_10489", "text": "Serialize an XML element into a unicode string.\n\n This should work the same on Python2 and Python3 and with all\n :etree:`ElementTree` implementations.\n\n :Parameters:\n - `element`: the XML element to serialize\n :Types:\n - `element`: :etree:`ElementTree.Element`"}
{"_id": "q_10490", "text": "Bind to a resource.\n\n [initiating entity only]\n\n :Parameters:\n - `resource`: the resource name to bind to.\n :Types:\n - `resource`: `unicode`\n\n XMPP stream is authenticated for bare JID only. To use\n the full JID it must be bound to a resource."}
{"_id": "q_10491", "text": "Serialize an XMPP element.\n\n Utility function for debugging or logging.\n\n :Parameters:\n - `element`: the element to serialize\n :Types:\n - `element`: :etree:`ElementTree.Element`\n\n :Return: serialized element\n :Returntype: `unicode`"}
{"_id": "q_10492", "text": "Add a new namespace prefix.\n\n If the root element has not yet been emitted the prefix will\n be declared there, otherwise the prefix will be declared on the\n top-most element using this namespace in every stanza.\n\n :Parameters:\n - `namespace`: the namespace URI\n - `prefix`: the prefix string\n :Types:\n - `namespace`: `unicode`\n - `prefix`: `unicode`"}
{"_id": "q_10493", "text": "Return the opening tag of the stream root element.\n\n :Parameters:\n - `stream_from`: the 'from' attribute of the stream. May be `None`.\n - `stream_to`: the 'to' attribute of the stream. May be `None`.\n - `version`: the 'version' of the stream.\n - `language`: the 'xml:lang' of the stream\n :Types:\n - `stream_from`: `unicode`\n - `stream_to`: `unicode`\n - `version`: `unicode`\n - `language`: `unicode`"}
{"_id": "q_10494", "text": "Split an element or attribute qname into namespace and local\n name.\n\n :Parameters:\n - `name`: element or attribute QName\n - `is_element`: `True` for an element, `False` for an attribute\n :Types:\n - `name`: `unicode`\n - `is_element`: `bool`\n\n :Return: namespace URI, local name\n :returntype: `unicode`, `unicode`"}
{"_id": "q_10495", "text": "Make up a new namespace prefix, which won't conflict\n with `_prefixes` and prefixes declared in the current scope.\n\n :Parameters:\n - `declared_prefixes`: namespace to prefix mapping for the current\n scope\n :Types:\n - `declared_prefixes`: `unicode` to `unicode` dictionary\n\n :Returns: a new prefix\n :Returntype: `unicode`"}
{"_id": "q_10496", "text": "Return namespace-prefixed tag or attribute name.\n\n Add appropriate declaration to `declarations` when necessary.\n\n If no prefix for an element namespace is defined, make the element's\n namespace the default (no prefix). For attributes, make up a prefix in\n such a case.\n\n :Parameters:\n - `name`: QName ('{namespace-uri}local-name')\n to convert\n - `is_element`: `True` for element, `False` for an attribute\n - `declared_prefixes`: mapping of prefixes already declared\n at this scope\n - `declarations`: XMLNS declarations on the current element.\n :Types:\n - `name`: `unicode`\n - `is_element`: `bool`\n - `declared_prefixes`: `unicode` to `unicode` dictionary\n - `declarations`: `unicode` to `unicode` dictionary\n\n :Returntype: `unicode`"}
{"_id": "q_10497", "text": "Build namespace declarations and remove obsoleted mappings\n from `declared_prefixes`.\n\n :Parameters:\n - `declarations`: namespace to prefix mapping of the new\n declarations\n - `declared_prefixes`: namespace to prefix mapping of already\n declared prefixes.\n :Types:\n - `declarations`: `unicode` to `unicode` dictionary\n - `declared_prefixes`: `unicode` to `unicode` dictionary\n\n :Return: string of namespace declarations to be used in a start tag\n :Returntype: `unicode`"}
{"_id": "q_10498", "text": "Recursive XML element serializer.\n\n :Parameters:\n - `element`: the element to serialize\n - `level`: nest level (0 - root element, 1 - stanzas, etc.)\n - `declared_prefixes`: namespace to prefix mapping of already\n declared prefixes.\n :Types:\n - `element`: :etree:`ElementTree.Element`\n - `level`: `int`\n - `declared_prefixes`: `unicode` to `unicode` dictionary\n\n :Return: serialized element\n :Returntype: `unicode`"}
{"_id": "q_10499", "text": "Serialize a stanza.\n\n Must be called after `emit_head`.\n\n :Parameters:\n - `element`: the element to serialize\n :Types:\n - `element`: :etree:`ElementTree.Element`\n\n :Return: serialized element\n :Returntype: `unicode`"}
{"_id": "q_10500", "text": "Update user information.\n\n :Parameters:\n - `presence`: a presence stanza with user information update.\n :Types:\n - `presence`: `MucPresence`"}
{"_id": "q_10501", "text": "Get a room user with given nick or JID.\n\n :Parameters:\n - `nick_or_jid`: the nickname or room JID of the user requested.\n - `create`: if `True` and `nick_or_jid` is a JID, then a new\n user object will be created if there is no such user in the room.\n :Types:\n - `nick_or_jid`: `unicode` or `JID`\n - `create`: `bool`\n\n :return: the named user or `None`\n :returntype: `MucRoomUser`"}
{"_id": "q_10502", "text": "Called when current stream changes.\n\n Mark the room not joined and inform `self.handler` that it was left.\n\n :Parameters:\n - `stream`: the new stream.\n :Types:\n - `stream`: `pyxmpp.stream.Stream`"}
{"_id": "q_10503", "text": "Send a join request for the room.\n\n :Parameters:\n - `password`: password to the room.\n - `history_maxchars`: limit of the total number of characters in\n history.\n - `history_maxstanzas`: limit of the total number of messages in\n history.\n - `history_seconds`: send only messages received in the last\n `history_seconds` seconds.\n - `history_since`: Send only the messages received since the\n dateTime specified (UTC).\n :Types:\n - `password`: `unicode`\n - `history_maxchars`: `int`\n - `history_maxstanzas`: `int`\n - `history_seconds`: `int`\n - `history_since`: `datetime.datetime`"}
{"_id": "q_10504", "text": "Send a subject change request to the room.\n\n :Parameters:\n - `subject`: the new subject.\n :Types:\n - `subject`: `unicode`"}
{"_id": "q_10505", "text": "Send a nick change request to the room.\n\n :Parameters:\n - `new_nick`: the new nickname requested.\n :Types:\n - `new_nick`: `unicode`"}
{"_id": "q_10506", "text": "Get own room JID or a room JID for given `nick`.\n\n :Parameters:\n - `nick`: a nick for which the room JID is requested.\n :Types:\n - `nick`: `unicode`\n\n :return: the room JID.\n :returntype: `JID`"}
{"_id": "q_10507", "text": "Process successful result of a room configuration form request.\n\n :Parameters:\n - `stanza`: the stanza received.\n :Types:\n - `stanza`: `Presence`"}
{"_id": "q_10508", "text": "Request a configuration form for the room.\n\n When the form is received `self.handler.configuration_form_received` will be called.\n When an error response is received then `self.handler.error` will be called.\n\n :return: id of the request stanza.\n :returntype: `unicode`"}
{"_id": "q_10509", "text": "Process success response for a room configuration request.\n\n :Parameters:\n - `stanza`: the stanza received.\n :Types:\n - `stanza`: `Presence`"}
{"_id": "q_10510", "text": "Change the stream assigned to `self`.\n\n :Parameters:\n - `stream`: the new stream to be assigned to `self`.\n :Types:\n - `stream`: `pyxmpp.stream.Stream`"}
{"_id": "q_10511", "text": "Create and return a new room state object and request joining\n to a MUC room.\n\n :Parameters:\n - `room`: the name of a room to be joined\n - `nick`: the nickname to be used in the room\n - `handler`: is an object to handle room events.\n - `password`: password for the room, if any\n - `history_maxchars`: limit of the total number of characters in\n history.\n - `history_maxstanzas`: limit of the total number of messages in\n history.\n - `history_seconds`: send only messages received in the last\n `history_seconds` seconds.\n - `history_since`: Send only the messages received since the\n dateTime specified (UTC).\n\n :Types:\n - `room`: `JID`\n - `nick`: `unicode`\n - `handler`: `MucRoomHandler`\n - `password`: `unicode`\n - `history_maxchars`: `int`\n - `history_maxstanzas`: `int`\n - `history_seconds`: `int`\n - `history_since`: `datetime.datetime`\n\n :return: the room state object created.\n :returntype: `MucRoomState`"}
{"_id": "q_10512", "text": "Process a groupchat message from a MUC room.\n\n :Parameters:\n - `stanza`: the stanza received.\n :Types:\n - `stanza`: `Message`\n\n :return: `True` if the message was properly recognized as directed to\n one of the managed rooms, `False` otherwise.\n :returntype: `bool`"}
{"_id": "q_10513", "text": "Process an error message from a MUC room.\n\n :Parameters:\n - `stanza`: the stanza received.\n :Types:\n - `stanza`: `Message`\n\n :return: `True` if the message was properly recognized as directed to\n one of the managed rooms, `False` otherwise.\n :returntype: `bool`"}
{"_id": "q_10514", "text": "Process a presence error from a MUC room.\n\n :Parameters:\n - `stanza`: the stanza received.\n :Types:\n - `stanza`: `Presence`\n\n :return: `True` if the stanza was properly recognized as generated by\n one of the managed rooms, `False` otherwise.\n :returntype: `bool`"}
{"_id": "q_10515", "text": "Process an available presence from a MUC room.\n\n :Parameters:\n - `stanza`: the stanza received.\n :Types:\n - `stanza`: `Presence`\n\n :return: `True` if the stanza was properly recognized as generated by\n one of the managed rooms, `False` otherwise.\n :returntype: `bool`"}
{"_id": "q_10516", "text": "Get a parameter value.\n\n If parameter is not set, return `local_default` if it is not `None`\n or the PyXMPP global default otherwise.\n\n :Raise `KeyError`: if parameter has no value and no global default\n\n :Return: parameter value"}
{"_id": "q_10517", "text": "Validator for string lists to be used with `add_setting`."}
{"_id": "q_10518", "text": "List known settings.\n\n :Parameters:\n - `basic`: When `True` then limit output to the basic settings,\n when `False` list only the extra settings."}
{"_id": "q_10519", "text": "Make a command-line option parser.\n\n The returned parser may be used as a parent parser for the application\n argument parser.\n\n :Parameters:\n - `settings`: list of PyXMPP2 settings to consider. By default\n all 'basic' settings are provided.\n - `option_prefix`: custom prefix for PyXMPP2 options. E.g.\n ``'--xmpp'`` to differentiate them from non-XMPP-related\n application options.\n - `add_help`: when `True` a '--help' option will be included\n (probably already added in the application parser object)\n :Types:\n - `settings`: list of `unicode`\n - `option_prefix`: `str`\n - `add_help`: `bool`\n\n :return: an argument parser object.\n :returntype: :std:`argparse.ArgumentParser`"}
{"_id": "q_10520", "text": "Compare two International Domain Names.\n\n :Parameters:\n - `domain1`: domain name to compare\n - `domain2`: domain name to compare\n :Types:\n - `domain1`: `unicode`\n - `domain2`: `unicode`\n\n :return: `True` if `domain1` and `domain2` are equal as domain names."}
{"_id": "q_10521", "text": "Prepare localpart of the JID\n\n :Parameters:\n - `data`: localpart of the JID\n :Types:\n - `data`: `unicode`\n\n :raise JIDError: if the local name is too long.\n :raise pyxmpp.xmppstringprep.StringprepError: if the\n local name fails Nodeprep preparation."}
{"_id": "q_10522", "text": "Prepare domainpart of the JID.\n\n :Parameters:\n - `data`: Domain part of the JID\n :Types:\n - `data`: `unicode`\n\n :raise JIDError: if the domain name is too long."}
{"_id": "q_10523", "text": "Unicode string JID representation.\n\n :return: JID as Unicode string."}
{"_id": "q_10524", "text": "Check if IPv6 is available.\n\n :Return: `True` when an IPv6 socket can be created."}
{"_id": "q_10525", "text": "Check if IPv4 is available.\n\n :Return: `True` when an IPv4 socket can be created."}
{"_id": "q_10526", "text": "Reorder SRV records using their priorities and weights.\n\n :Parameters:\n - `records`: SRV records to shuffle.\n :Types:\n - `records`: `list` of :dns:`dns.rdtypes.IN.SRV`\n\n :return: reordered records.\n :returntype: `list` of :dns:`dns.rdtypes.IN.SRV`"}
{"_id": "q_10527", "text": "Stop the resolver threads."}
{"_id": "q_10528", "text": "Start looking up an A or AAAA record.\n\n `callback` will be called with a list of (family, address) tuples\n on success. Family is :std:`socket.AF_INET` or :std:`socket.AF_INET6`,\n the address is IPv4 or IPv6 literal. The list will be empty on error.\n\n :Parameters:\n - `hostname`: the host name to look up\n - `callback`: a function to be called with a list of received\n addresses\n - `allow_cname`: `True` if CNAMEs should be followed\n :Types:\n - `hostname`: `unicode`\n - `callback`: function accepting a single argument\n - `allow_cname`: `bool`"}
{"_id": "q_10529", "text": "Start an XMPP session and send a message, then exit.\n\n :Parameters:\n - `source_jid`: sender JID\n - `password`: sender password\n - `target_jid`: recipient JID\n - `body`: message body\n - `subject`: message subject\n - `message_type`: message type\n - `message_thread`: message thread id\n - `settings`: other settings\n :Types:\n - `source_jid`: `pyxmpp2.jid.JID` or `basestring`\n - `password`: `basestring`\n - `target_jid`: `pyxmpp.jid.JID` or `basestring`\n - `body`: `basestring`\n - `subject`: `basestring`\n - `message_type`: `basestring`\n - `settings`: `pyxmpp2.settings.XMPPSettings`"}
{"_id": "q_10530", "text": "Compute the authentication handshake value.\n\n :return: the computed hash value.\n :returntype: `str`"}
{"_id": "q_10531", "text": "Authenticate on the server.\n\n [component only]"}
{"_id": "q_10532", "text": "Set `_state` and notify any threads waiting for the change."}
{"_id": "q_10533", "text": "Start establishing TCP connection with given address.\n\n One of: `port` or `service` must be provided and `addr` must be\n a domain name and not an IP address if `port` is not given.\n\n When `service` is given try an SRV lookup for that service\n at domain `addr`. If `service` is not given or `addr` is an IP address,\n or the SRV lookup fails, connect to `port` at host `addr` directly.\n\n [initiating entity only]\n\n :Parameters:\n - `addr`: peer name or IP address\n - `port`: port number to connect to\n - `service`: service name (to be resolved using SRV DNS records)"}
{"_id": "q_10534", "text": "Same as `connect`, but assumes `lock` acquired."}
{"_id": "q_10535", "text": "Start resolving the SRV record."}
{"_id": "q_10536", "text": "Handle SRV lookup result.\n\n :Parameters:\n - `addrs`: properly sorted list of (hostname, port) tuples"}
{"_id": "q_10537", "text": "Start hostname resolution for the next name to try.\n\n [called with `lock` acquired]"}
{"_id": "q_10538", "text": "Handle DNS address record lookup result.\n\n :Parameters:\n - `name`: the name requested\n - `port`: port number to connect to\n - `addrs`: list of (family, address) tuples"}
{"_id": "q_10539", "text": "Start connecting to the next address on the `_dst_addrs` list.\n\n [ called with `lock` acquired ]"}
{"_id": "q_10540", "text": "Handle connection success."}
{"_id": "q_10541", "text": "Continue connecting.\n\n [called with `lock` acquired]\n\n :Return: `True` when just connected"}
{"_id": "q_10542", "text": "Write raw data to the socket.\n\n :Parameters:\n - `data`: data to send\n :Types:\n - `data`: `bytes`"}
{"_id": "q_10543", "text": "Make the `stream` the target for this transport instance.\n\n The 'stream_start', 'stream_end' and 'stream_element' methods\n of the target will be called when appropriate content is received.\n\n :Parameters:\n - `stream`: the stream handler to receive stream content\n from the transport\n :Types:\n - `stream`: `StreamBase`"}
{"_id": "q_10544", "text": "Send stream head via the transport.\n\n :Parameters:\n - `stanza_namespace`: namespace of stream stanzas (e.g.\n 'jabber:client')\n - `stream_from`: the 'from' attribute of the stream. May be `None`.\n - `stream_to`: the 'to' attribute of the stream. May be `None`.\n - `version`: the 'version' of the stream.\n - `language`: the 'xml:lang' of the stream\n :Types:\n - `stanza_namespace`: `unicode`\n - `stream_from`: `unicode`\n - `stream_to`: `unicode`\n - `version`: `unicode`\n - `language`: `unicode`"}
{"_id": "q_10545", "text": "Send an element via the transport."}
{"_id": "q_10546", "text": "Stop current thread until the channel is readable.\n\n :Return: `False` if it won't be readable (e.g. is closed)"}
{"_id": "q_10547", "text": "Stop current thread until the channel is writable.\n\n :Return: `False` if it won't be writable (e.g. is closed)"}
{"_id": "q_10548", "text": "Request a TLS handshake on the socket and switch\n to encrypted output.\n The handshake will start after any currently buffered data is sent.\n\n :Parameters:\n - `kwargs`: arguments for :std:`ssl.wrap_socket`"}
{"_id": "q_10549", "text": "Return the peer certificate.\n\n :ReturnType: `pyxmpp2.cert.Certificate`"}
{"_id": "q_10550", "text": "Initiate starttls handshake over the socket."}
{"_id": "q_10551", "text": "Handle the 'channel hungup' state. The handler should not be writable\n after this."}
{"_id": "q_10552", "text": "Handle an error reported."}
{"_id": "q_10553", "text": "Disconnect the stream gracefully."}
{"_id": "q_10554", "text": "Same as `_close` but expects `lock` acquired."}
{"_id": "q_10555", "text": "Feed the stream reader with data received.\n\n [ called with `lock` acquired ]\n\n If `data` is None or empty, then stream end (peer disconnected) is\n assumed and the stream is closed.\n\n `lock` is acquired during the operation.\n\n :Parameters:\n - `data`: data received from the stream socket.\n :Types:\n - `data`: `unicode`"}
{"_id": "q_10556", "text": "Pass an event to the target stream or just log it."}
{"_id": "q_10557", "text": "Handle an end tag.\n\n Call the handler's 'stream_end' method with\n the root element (built by the `start` method).\n\n On the first level below root, send the built element tree\n to the handler via the 'stanza methods'.\n\n Any tag below will be just added to the tree builder."}
{"_id": "q_10558", "text": "Start threads for an IOHandler."}
{"_id": "q_10559", "text": "Add a TimeoutHandler to the pool."}
{"_id": "q_10560", "text": "Start threads for a TimeoutHandler."}
{"_id": "q_10561", "text": "Remove a TimeoutHandler from the pool."}
{"_id": "q_10562", "text": "Start the threads."}
{"_id": "q_10563", "text": "Wait up to `timeout` seconds, raise any exception from the\n threads."}
{"_id": "q_10564", "text": "Initialize authentication when the connection is established\n and we are the initiator."}
{"_id": "q_10565", "text": "Unregister legacy authentication handlers after successful\n authentication."}
{"_id": "q_10566", "text": "Try to authenticate using the first one of allowed authentication\n methods left.\n\n [client only]"}
{"_id": "q_10567", "text": "Handle legacy authentication timeout.\n\n [client only]"}
{"_id": "q_10568", "text": "Handle legacy authentication error.\n\n [client only]"}
{"_id": "q_10569", "text": "Handle success of the legacy authentication."}
{"_id": "q_10570", "text": "Handle in-band registration error.\n\n [client only]\n\n :Parameters:\n - `stanza`: the error stanza received or `None` on timeout.\n :Types:\n - `stanza`: `pyxmpp.stanza.Stanza`"}
{"_id": "q_10571", "text": "Submit a registration form.\n\n [client only]\n\n :Parameters:\n - `form`: the filled-in form. When form is `None` or its type is\n \"cancel\" the registration is to be canceled.\n\n :Types:\n - `form`: `pyxmpp.jabber.dataforms.Form`"}
{"_id": "q_10572", "text": "Handle registration success.\n\n [client only]\n\n Clean up registration stuff, change state to \"registered\" and initialize\n authentication.\n\n :Parameters:\n - `stanza`: the stanza received.\n :Types:\n - `stanza`: `pyxmpp.iq.Iq`"}
{"_id": "q_10573", "text": "Request software version information from a remote entity.\n\n When a valid response is received the `callback` will be called\n with a `VersionPayload` instance as its only argument. The object will\n provide the requested information.\n\n In case an error stanza or invalid response is received, the `error_callback`\n (if provided) will be called with the offending stanza (which can\n be ``<iq type='error'/>`` or ``<iq type='result'>``) as its argument.\n\n The same function will be called on timeout, with the argument set to\n `None`.\n\n :Parameters:\n - `stanza_processor`: an object used to send the query and handle\n the response. E.g. a `pyxmpp2.client.Client` instance\n - `target_jid`: the JID of the entity to query\n - `callback`: function to be called with a valid response\n - `error_callback`: function to be called on error\n :Types:\n - `stanza_processor`: `StanzaProcessor`\n - `target_jid`: `JID`"}
{"_id": "q_10574", "text": "Set up stream element handlers.\n\n Scans the `handlers` list for `StreamFeatureHandler`\n instances and updates `_element_handlers` mapping with their\n methods decorated with @`stream_element_handler`"}
{"_id": "q_10575", "text": "Handle a stream event.\n\n Called when connection state is changed.\n\n Should not be called with self.lock acquired!"}
{"_id": "q_10576", "text": "Send stream start tag."}
{"_id": "q_10577", "text": "Same as `send` but assume `lock` is acquired."}
{"_id": "q_10578", "text": "Handle stanza received from the stream."}
{"_id": "q_10579", "text": "Process stream error element received.\n\n :Parameters:\n - `error`: error received\n\n :Types:\n - `error`: `StreamErrorElement`"}
{"_id": "q_10580", "text": "Mark stream authenticated as `me`.\n\n :Parameters:\n - `me`: local JID just authenticated\n - `restart_stream`: `True` when stream should be restarted (needed\n after SASL authentication)\n :Types:\n - `me`: `JID`\n - `restart_stream`: `bool`"}
{"_id": "q_10581", "text": "Authentication properties of the stream.\n\n Derived from the transport with 'local-jid' and 'service-type' added."}
{"_id": "q_10582", "text": "Fix an incoming stanza.\n\n On a server, replace the sender address with the authorized client JID."}
{"_id": "q_10583", "text": "Initialize `Register` from an XML node.\n\n :Parameters:\n - `xmlnode`: the jabber:x:register XML element.\n :Types:\n - `xmlnode`: `libxml2.xmlNode`"}
{"_id": "q_10584", "text": "Return Data Form for the `Register` object.\n\n Convert legacy fields to a data form if `self.form` is `None`, return `self.form` otherwise.\n\n :Parameters:\n - `form_type`: If \"form\", then a form to fill-in should be\n returned. If \"submit\", then a form with submitted data.\n :Types:\n - `form_type`: `unicode`\n\n :return: `self.form` or a form created from the legacy fields\n :returntype: `pyxmpp.jabber.dataforms.Form`"}
{"_id": "q_10585", "text": "Make `Register` object for submitting the registration form.\n\n Convert form data to legacy fields if `self.form` is `None`.\n\n :Parameters:\n - `form`: The form to submit. Its type doesn't have to be \"submit\"\n (a \"submit\" form will be created here), so it could be the form\n obtained from `get_form` just with the data entered.\n\n :return: new registration element\n :returntype: `Register`"}
{"_id": "q_10586", "text": "Initialize Delay object from an XML node.\n\n :Parameters:\n - `xmlnode`: the jabber:x:delay XML element.\n :Types:\n - `xmlnode`: `libxml2.xmlNode`"}
{"_id": "q_10587", "text": "Accept any incoming connections."}
{"_id": "q_10588", "text": "Activate a plan in a CREATED state."}
{"_id": "q_10589", "text": "Execute the PreparedBillingAgreement by creating and executing a\n\t\tmatching BillingAgreement."}
{"_id": "q_10590", "text": "Create, validate and process a WebhookEventTrigger given a Django\n\t\trequest object.\n\n\t\tThe webhook_id parameter expects the ID of the Webhook that was\n\t\ttriggered (defaults to settings.PAYPAL_WEBHOOK_ID). This is required\n\t\tfor Webhook verification.\n\n\t\tThe process is three-fold:\n\t\t1. Create a WebhookEventTrigger object from a Django request.\n\t\t2. Verify the WebhookEventTrigger as a Paypal webhook using the SDK.\n\t\t3. If valid, process it into a WebhookEvent object (and child resource)."}
{"_id": "q_10591", "text": "Make an ArgumentParser that accepts DNS RRs"}
{"_id": "q_10592", "text": "Go through each line of the text and ensure that \n a name is defined. Use '@' if there is none."}
{"_id": "q_10593", "text": "Parse a zonefile into a dict.\n @text must be flattened--each record must be on one line.\n Also, all comments must be removed."}
{"_id": "q_10594", "text": "Quote a field in a list of DNS records.\n Return the new data records."}
{"_id": "q_10595", "text": "This function can be used to build a python package representation of pyschema classes.\n One module is created per namespace in a package matching the namespace hierarchy.\n\n Args:\n classes: A collection of classes to build the package from\n target_folder: Root folder of the package\n parent_package: Prepended on all import statements in order to support absolute imports.\n parent_package is not used when building the package file structure\n indent: Indent level. Defaults to 4 spaces"}
{"_id": "q_10596", "text": "Generate Python source code for one specific class\n\n Doesn't include or take into account any dependencies between record types"}
{"_id": "q_10597", "text": "Temporarily disable automatic registration of records in the auto_store\n\n Decorator factory. This is _NOT_ thread safe\n\n >>> @no_auto_store()\n ... class BarRecord(Record):\n ... pass\n >>> BarRecord in auto_store\n False"}
{"_id": "q_10598", "text": "Create a Record instance from a json-compatible dictionary\n\n The dictionary values should have types that are json compatible,\n as if just loaded from a json serialized record string.\n\n :param dct:\n Python dictionary with key/value pairs for the record\n\n :param record_store:\n Record store to use for schema lookups (when $schema field is present)\n\n :param schema:\n PySchema Record class for the record to load.\n This will override any $schema fields specified in `dct`"}
{"_id": "q_10599", "text": "Create a Record instance from a json serialized dictionary\n\n :param s:\n String with a json-serialized dictionary\n\n :param record_store:\n Record store to use for schema lookups (when $schema field is present)\n\n :param loader:\n Function called to fetch attributes from json. Typically shouldn't be used by end users\n\n :param schema:\n PySchema Record class for the record to load.\n This will override any $schema fields specified in `s`\n\n :param record_class:\n DEPRECATED option, old name for the `schema` parameter"}
{"_id": "q_10600", "text": "Add record class to record store for retrieval at record load time.\n\n Can be used as a class decorator"}
{"_id": "q_10601", "text": "Return a dictionary of the field definition\n\n Should contain all fields that are required for the definition of this field in a pyschema class"}
{"_id": "q_10602", "text": "Decorator for mixing in additional functionality into field type\n\n Example:\n\n >>> @Integer.mixin\n ... class IntegerPostgresExtensions:\n ... postgres_type = 'INT'\n ...\n ... def postgres_dump(self, obj):\n ... self.dump(obj) + \"::integer\"\n\n Is roughly equivalent to:\n\n >>> Integer.postgres_type = 'INT'\n ...\n ... def postgres_dump(self, obj):\n ... self.dump(obj) + \"::integer\"\n ...\n ... Integer.postgres_dump = postgres_dump"}
{"_id": "q_10603", "text": "Create proper PySchema class from cls\n\n Any methods and attributes will be transferred to the\n new object"}
{"_id": "q_10604", "text": "Return a python dict representing the jsonschema of a record\n\n Any references to sub-schemas will be URI fragments that won't be\n resolvable without a root schema, available from get_root_schema_dict."}
{"_id": "q_10605", "text": "Converts a file object with json serialised pyschema records\n to a stream of pyschema objects\n\n Can be used as job.reader in luigi.hadoop.JobTask"}
{"_id": "q_10606", "text": "Set a value at the front of an OrderedDict\n\n The original dict isn't modified, instead a copy is returned"}
{"_id": "q_10607", "text": "Send a message upstream to a de-multiplexed application.\n\n If stream_name is included, will send just to that upstream stream; if not included, will send to all upstream\n streams."}
{"_id": "q_10608", "text": "Handle a downstream message coming from an upstream stream.\n\n If there is no handling method set for this message type it will propagate the message further downstream.\n\n This is called as part of the co-routine of an upstream stream, not the same loop as used for upstream messages\n in the de-multiplexer."}
{"_id": "q_10609", "text": "Route the message down the correct stream."}
{"_id": "q_10610", "text": "Handle the disconnect message.\n\n This is propagated to all upstream applications."}
{"_id": "q_10611", "text": "Capture downstream websocket.send messages from the upstream applications."}
{"_id": "q_10612", "text": "Handle downstream `websocket.close` message.\n\n Will disconnect this upstream application from receiving any new frames.\n\n If there are no more upstream applications accepting messages it will then call `close`."}
{"_id": "q_10613", "text": "creator method to build FxCurve\n\n :param float fx_spot: fx spot rate\n :param RateCurve domestic_curve: domestic discount curve\n :param RateCurve foreign_curve: foreign discount curve\n :return:"}
{"_id": "q_10614", "text": "Decorator to retry a function 'max_retries' amount of times\n\n :param tuple exceptions: Exceptions to be caught for retries\n :param int interval: Interval between retries in seconds\n :param int max_retries: Maximum number of retries to have, if\n set to -1 the decorator will loop forever\n :param function success: Function to indicate success criteria\n :param int timeout: Timeout interval in seconds, if -1 will retry forever\n :raises MaximumRetriesExceeded: Maximum number of retries hit without\n reaching the success criteria\n :raises TypeError: Both exceptions and success were left None causing the\n decorator to have no valid exit criteria.\n\n Example:\n Use it to decorate a function!\n\n .. sourcecode:: python\n\n from retry import retry\n\n @retry(exceptions=(ArithmeticError,), success=lambda x: x > 0)\n def foo(bar):\n if bar < 0:\n raise ArithmeticError('testing this')\n return bar\n foo(5)\n # Should return 5\n foo(-1)\n # Should raise ArithmeticError\n foo(0)\n # Should raise MaximumRetriesExceeded"}
{"_id": "q_10615", "text": "Button action event"}
{"_id": "q_10616", "text": "Byte-for-byte comparison on an arbitrary number of files in parallel.\n\n This operates by opening all files in parallel and comparing\n chunk-by-chunk. This has the following implications:\n\n - Reads the same total amount of data as hash comparison.\n - Performs a *lot* of disk seeks. (Best suited for SSDs)\n - Vulnerable to file handle exhaustion if used on its own.\n\n :param paths: List of potentially identical files.\n :type paths: iterable\n\n :returns: A dict mapping one path to a list of all paths (self included)\n with the same contents.\n\n .. todo:: Start examining the ``while handles:`` block to figure out how to\n minimize thrashing in situations where read-ahead caching is active.\n Compare savings by read-ahead to savings due to eliminating false\n positives as quickly as possible. This is a 2-variable min/max problem.\n\n .. todo:: Look into possible solutions for pathological cases of thousands\n of files with the same size and same pre-filter results. (File handle\n exhaustion)"}
{"_id": "q_10617", "text": "Group a list of file handles based on equality of the next chunk of\n data read from them.\n\n :param handles: A list of open handles for file-like objects with\n potentially-identical contents.\n :param chunk_size: The amount of data to read from each handle every time\n this function is called.\n\n :returns: Two lists of lists:\n\n * Lists to be fed back into this function individually\n * Finished groups of duplicate paths. (including unique files as\n single-file lists)\n\n :rtype: ``(list, list)``\n\n .. attention:: File handles will be closed when no longer needed\n .. todo:: Discard chunk contents immediately once they're no longer needed"}
{"_id": "q_10618", "text": "High-level code to walk a set of paths and find duplicate groups.\n\n :param exact: Whether to compare file contents by hash or by reading\n chunks in parallel.\n :type exact: :class:`~__builtins__.bool`\n\n :param paths: See :meth:`~fastdupes.getPaths`\n :param ignores: See :meth:`~fastdupes.getPaths`\n :param min_size: See :meth:`~fastdupes.sizeClassifier`\n\n :returns: A list of groups of files with identical contents\n :rtype: ``[[path, ...], [path, ...]]``"}
{"_id": "q_10619", "text": "Specify query string to use with the collection.\n\n Returns: :py:class:`SearchResult`"}
{"_id": "q_10620", "text": "Return the entity in the correct collection.\n\n If the \"href\" value in the result doesn't match the current collection,\n try to find the collection that the \"href\" refers to."}
{"_id": "q_10621", "text": "When you pass a quote character, returns another one if possible"}
{"_id": "q_10622", "text": "Tries to escape the values that are passed to filter as correctly as possible.\n\n No standard way is followed, but at least it is simple."}
{"_id": "q_10623", "text": "import summarizers on-demand"}
{"_id": "q_10624", "text": "Construct an elementary rotation matrix describing a rotation around the x, y, or z-axis.\n\n Parameters\n ----------\n\n axis - Axis around which to rotate (\"x\", \"y\", or \"z\")\n rotationAngle - the rotation angle in radians\n\n Returns\n -------\n\n The rotation matrix\n\n Example usage\n -------------\n\n rotmat = elementaryRotationMatrix(\"y\", pi/6.0)"}
{"_id": "q_10625", "text": "Take the astrometric parameter standard uncertainties and the uncertainty correlations as quoted in\n the Gaia catalogue and construct the covariance matrix.\n\n Parameters\n ----------\n\n cvec : array_like\n Array of shape (15,) (1 source) or (n,15) (n sources) for the astrometric parameter standard\n uncertainties and their correlations, as listed in the Gaia catalogue [ra_error, dec_error,\n parallax_error, pmra_error, pmdec_error, ra_dec_corr, ra_parallax_corr, ra_pmra_corr,\n ra_pmdec_corr, dec_parallax_corr, dec_pmra_corr, dec_pmdec_corr, parallax_pmra_corr,\n parallax_pmdec_corr, pmra_pmdec_corr]. Units are (mas^2, mas^2/yr, mas^2/yr^2).\n \n parallax : array_like (n elements)\n Source parallax (mas).\n \n radial_velocity : array_like (n elements)\n Source radial velocity (km/s, does not have to be from Gaia RVS!). If the radial velocity is not\n known it can be set to zero.\n\n radial_velocity_error : array_like (n elements)\n Source radial velocity uncertainty (km/s). If the radial velocity is not known this can be set to\n the radial velocity dispersion for the population the source was drawn from.\n\n Returns\n -------\n\n Covariance matrix as a 6x6 array."}
{"_id": "q_10626", "text": "Calculate radial velocity error from V and the spectral type. The value of the error is an average over\n the sky.\n\n Parameters\n ----------\n\n vmag - Value of V-band magnitude.\n spt - String representing the spectral type of the star.\n\n Returns\n -------\n\n The radial velocity error in km/s."}
{"_id": "q_10627", "text": "Calculate the angular distance between pairs of sky coordinates.\n\n Parameters\n ----------\n\n phi1 : float\n Longitude of first coordinate (radians).\n theta1 : float\n Latitude of first coordinate (radians).\n phi2 : float\n Longitude of second coordinate (radians).\n theta2 : float\n Latitude of second coordinate (radians).\n\n Returns\n -------\n\n Angular distance in radians."}
{"_id": "q_10628", "text": "Rotates Cartesian coordinates from one reference system to another using the rotation matrix with\n which the class was initialized. The inputs can be scalars or 1-dimensional numpy arrays.\n\n Parameters\n ----------\n\n x - Value of X-coordinate in original reference system\n y - Value of Y-coordinate in original reference system\n z - Value of Z-coordinate in original reference system\n\n Returns\n -------\n\n xrot - Value of X-coordinate after rotation\n yrot - Value of Y-coordinate after rotation\n zrot - Value of Z-coordinate after rotation"}
{"_id": "q_10629", "text": "Converts sky coordinates from one reference system to another, making use of the rotation matrix with\n which the class was initialized. Inputs can be scalars or 1-dimensional numpy arrays.\n\n Parameters\n ----------\n\n phi - Value of the azimuthal angle (right ascension, longitude) in radians.\n theta - Value of the elevation angle (declination, latitude) in radians.\n\n Returns\n -------\n\n phirot - Value of the transformed azimuthal angle in radians.\n thetarot - Value of the transformed elevation angle in radians."}
{"_id": "q_10630", "text": "Transform the astrometric covariance matrix to its representation in the new coordinate system.\n\n Parameters\n ----------\n\n phi - The longitude-like angle of the position of the source (radians).\n theta - The latitude-like angle of the position of the source (radians).\n covmat - Covariance matrix (5x5) of the astrometric parameters.\n\n Returns\n -------\n\n covmat_rot - Covariance matrix in its representation in the new coordinate system."}
{"_id": "q_10631", "text": "Make the plot with radial velocity performance predictions.\n\n :argument args: command line arguments"}
{"_id": "q_10632", "text": "A utility function for selecting the first non-null query.\n\n Parameters:\n\n funcs: One or more functions\n\n Returns:\n\n A function that, when called with a :class:`Node`, will\n pass the input to each `func`, and return the first non-Falsey\n result.\n\n Examples:\n\n >>> s = Soupy(\"<p>hi</p>\")\n >>> s.apply(either(Q.find('a'), Q.find('p').text))\n Scalar('hi')"}
{"_id": "q_10633", "text": "Convert to unicode, and add quotes if initially a string"}
{"_id": "q_10634", "text": "Call `func` on each element in the collection.\n\n If multiple functions are provided, each item\n in the output will be a tuple of each\n func(item) in self.\n\n Returns a new Collection.\n\n Example:\n\n >>> col = Collection([Scalar(1), Scalar(2)])\n >>> col.each(Q * 10)\n Collection([Scalar(10), Scalar(20)])\n >>> col.each(Q * 10, Q - 1)\n Collection([Scalar((10, 0)), Scalar((20, 1))])"}
{"_id": "q_10635", "text": "Return a new Collection excluding some items\n\n Parameters:\n\n func : function(Node) -> Scalar\n\n A function that, when called on each item\n in the collection, returns a boolean-like\n value. If no function is provided, then\n truthy items will be removed.\n\n Returns:\n\n A new Collection consisting of the items\n where bool(func(item)) == False"}
{"_id": "q_10636", "text": "Return a new Collection with some items removed.\n\n Parameters:\n\n func : function(Node) -> Scalar\n\n A function that, when called on each item\n in the collection, returns a boolean-like\n value. If no function is provided, then\n false-y items will be removed.\n\n Returns:\n\n A new Collection consisting of the items\n where bool(func(item)) == True\n\n Examples:\n\n node.find_all('a').filter(Q['href'].startswith('http'))"}
{"_id": "q_10637", "text": "Return a new Collection with the first few items removed.\n\n Parameters:\n\n func : function(Node) -> Node\n\n Returns:\n\n A new Collection, discarding all items\n before the first item where bool(func(item)) == True"}
{"_id": "q_10638", "text": "Zip the items of this collection with one or more\n other sequences, and wrap the result.\n\n Unlike Python's zip, all sequences must be the same length.\n\n Parameters:\n\n others: One or more iterables or Collections\n\n Returns:\n\n A new collection.\n\n Examples:\n\n >>> c1 = Collection([Scalar(1), Scalar(2)])\n >>> c2 = Collection([Scalar(3), Scalar(4)])\n >>> c1.zip(c2).val()\n [(1, 3), (2, 4)]"}
{"_id": "q_10639", "text": "Return potential locations of IACA installation."}
{"_id": "q_10640", "text": "Yield all groups of a simple regex-like expression.\n\n The only special character is a dash (-), which takes the preceding and the following chars to\n compute a range. If the range is non-sensical (e.g., b-a) it will be empty\n\n Example:\n >>> list(group_iterator('a-f'))\n ['a', 'b', 'c', 'd', 'e', 'f']\n >>> list(group_iterator('148'))\n ['1', '4', '8']\n >>> list(group_iterator('7-9ab'))\n ['7', '8', '9', 'a', 'b']\n >>> list(group_iterator('0B-A1'))\n ['0', '1']"}
{"_id": "q_10641", "text": "Very reduced regular expressions for describing a group of registers.\n\n Only groups in square brackets and unions with pipes (|) are supported.\n\n Examples:\n >>> list(register_options('PMC[0-3]'))\n ['PMC0', 'PMC1', 'PMC2', 'PMC3']\n >>> list(register_options('MBOX0C[01]'))\n ['MBOX0C0', 'MBOX0C1']\n >>> list(register_options('CBOX2C1'))\n ['CBOX2C1']\n >>> list(register_options('CBOX[0-3]C[01]'))\n ['CBOX0C0', 'CBOX0C1', 'CBOX1C0', 'CBOX1C1', 'CBOX2C0', 'CBOX2C1', 'CBOX3C0', 'CBOX3C1']\n >>> list(register_options('PMC[0-1]|PMC[23]'))\n ['PMC0', 'PMC1', 'PMC2', 'PMC3']"}
{"_id": "q_10642", "text": "Return a LIKWID event string from an event tuple or keyword arguments.\n\n *event_tuple* may have two or three arguments: (event, register) or\n (event, register, parameters)\n\n Keyword arguments will be overwritten by *event_tuple*.\n\n >>> eventstr(('L1D_REPLACEMENT', 'PMC0', None))\n 'L1D_REPLACEMENT:PMC0'\n >>> eventstr(('L1D_REPLACEMENT', 'PMC0'))\n 'L1D_REPLACEMENT:PMC0'\n >>> eventstr(('MEM_UOPS_RETIRED_LOADS', 'PMC3', {'EDGEDETECT': None, 'THRESHOLD': 2342}))\n 'MEM_UOPS_RETIRED_LOADS:PMC3:EDGEDETECT:THRESHOLD=0x926'\n >>> eventstr(event='DTLB_LOAD_MISSES_WALK_DURATION', register='PMC3')\n 'DTLB_LOAD_MISSES_WALK_DURATION:PMC3'"}
{"_id": "q_10643", "text": "Compile list of minimal runs for given events."}
{"_id": "q_10644", "text": "Report analysis outcome in human readable form."}
{"_id": "q_10645", "text": "Round float to next multiple of base."}
{"_id": "q_10646", "text": "Split list of integers into blocks of block_size and return block indices.\n\n First block element will be located at initial_boundary (default 0).\n\n >>> blocking([0, -1, -2, -3, -4, -5, -6, -7, -8, -9], 8)\n [0,-1]\n >>> blocking([0], 8)\n [0]\n >>> blocking([0], 8, initial_boundary=32)\n [-4]"}
{"_id": "q_10647", "text": "Run complete analysis and return results."}
{"_id": "q_10648", "text": "Strip whitespace and comments from asm lines."}
{"_id": "q_10649", "text": "Let user interactively select byte increment."}
{"_id": "q_10650", "text": "Insert IACA marker into list of ASM instructions at given indices."}
{"_id": "q_10651", "text": "Add IACA markers to an assembly file.\n\n If instrumentation fails because the loop increment could not be determined automatically, a ValueError\n is raised.\n\n :param input_file: file-like object to read from\n :param output_file: file-like object to write to\n :param block_selection: index of the assembly block to instrument, or 'auto' for automatically\n using the block with the\n most vector instructions, or 'manual' to prompt the user for the index\n :param pointer_increment: number of bytes the pointer is incremented after the loop or\n - 'auto': automatic detection, otherwise RuntimeError is raised\n - 'auto_with_manual_fallback': like auto with fallback to manual input\n - 'manual': prompt user\n :param debug: output additional internal analysis information. Only works with manual selection.\n :return: the instrumented assembly block"}
{"_id": "q_10652", "text": "Execute command line interface."}
{"_id": "q_10653", "text": "Set up and execute model with given blocking length."}
{"_id": "q_10654", "text": "Check arguments passed by user that are not checked by argparse itself."}
{"_id": "q_10655", "text": "Command line interface of picklemerge."}
{"_id": "q_10656", "text": "Create a sympy.Symbol with positive and integer assumptions."}
{"_id": "q_10657", "text": "Check that information about the kernel makes sense and is valid."}
{"_id": "q_10658", "text": "Set constant of name to value.\n\n :param name: may be a str or a sympy.Symbol\n :param value: must be an int"}
{"_id": "q_10659", "text": "Return the offset from the iteration center in number of elements.\n\n The order of indices used in access is preserved."}
{"_id": "q_10660", "text": "Yield loop stack dictionaries in order from outer to inner."}
{"_id": "q_10661", "text": "Return the order of indices as they appear in array references.\n\n Use *source* and *destination* to filter output"}
{"_id": "q_10662", "text": "Return a dictionary of lists of sympy accesses, for each variable.\n\n Use *source* and *destination* to filter output"}
{"_id": "q_10663", "text": "Return sympy expressions translating global_iterator to loop indices.\n\n If global_iterator is given, an integer is returned"}
{"_id": "q_10664", "text": "Return global iterator sympy expression"}
{"_id": "q_10665", "text": "Transform a dictionary of indices to a global iterator integer.\n\n Inverse of global_iterator_to_indices()."}
{"_id": "q_10666", "text": "Print kernel information in human readable format."}
{"_id": "q_10667", "text": "Print variables information in human readable format."}
{"_id": "q_10668", "text": "Print constants information in human readable format."}
{"_id": "q_10669", "text": "Print source code of kernel."}
{"_id": "q_10670", "text": "Convert mathematical expressions to a sympy representation.\n\n May only contain parentheses, addition, subtraction and multiplication from AST."}
{"_id": "q_10671", "text": "Return a tuple of offsets of an ArrayRef object in all dimensions.\n\n The index order is right to left (c-code order).\n e.g. c[i+1][j-2] -> (-2, +1)\n\n If aref is actually a c_ast.ID, None will be returned."}
{"_id": "q_10672", "text": "Return base name of ArrayRef object.\n\n e.g. c[i+1][j-2] -> 'c'"}
{"_id": "q_10673", "text": "Generate constants declarations\n\n :return: list of declarations"}
{"_id": "q_10674", "text": "Return array declarations."}
{"_id": "q_10675", "text": "Return kernel loop nest including any preceding pragmas and following swaps."}
{"_id": "q_10676", "text": "Generate initialization statements for arrays.\n\n :param array_dimensions: dictionary of array dimensions\n\n :return: list of nodes"}
{"_id": "q_10677", "text": "Generate false if branch with dummy calls\n\n Requires kerncraft.h to be included, which defines dummy(...) and var_false.\n\n :return: dummy statement"}
{"_id": "q_10678", "text": "Build and return kernel function declaration"}
{"_id": "q_10679", "text": "Build and return scalar variable declarations"}
{"_id": "q_10680", "text": "Generate and return compilable source code with kernel function from AST.\n\n :param openmp: if true, OpenMP code will be generated\n :param as_filename: if true, will save to file and return filename\n :param name: name of kernel function"}
{"_id": "q_10681", "text": "Generate and return kernel call ast."}
{"_id": "q_10682", "text": "Generate and return compilable source code from AST."}
{"_id": "q_10683", "text": "Run an IACA analysis and return its outcome.\n\n *asm_block* controls how the to-be-marked block is chosen. \"auto\" (default) selects\n the largest block, \"manual\" prompts interactively, and a number selects the corresponding block.\n\n *pointer_increment* is the number of bytes the pointer is incremented after the loop or\n - 'auto': automatic detection, RuntimeError is raised in case of failure\n - 'auto_with_manual_fallback': automatic detection, fallback to manual input\n - 'manual': prompt user"}
{"_id": "q_10684", "text": "Compile source to executable with likwid capabilities and return the executable name."}
{"_id": "q_10685", "text": "Convert any string to a sympy object or None."}
{"_id": "q_10686", "text": "Return identifier which is either the machine file name or sha256 checksum of data."}
{"_id": "q_10687", "text": "Return a cachesim.CacheSimulator object based on the machine description.\n\n :param cores: core count (default: 1)"}
{"_id": "q_10688", "text": "Return best fitting bandwidth according to number of threads, read and write streams.\n\n :param cache_level: integer of cache (0 is L1, 1 is L2 ...)\n :param read_streams: number of read streams expected\n :param write_streams: number of write streams expected\n :param threads_per_core: number of threads that are run on each core\n :param cores: if not given, will choose maximum bandwidth for single NUMA domain"}
{"_id": "q_10689", "text": "Return tuple of compiler and compiler flags.\n\n Selects compiler and flags from machine description file, command line arguments or call\n arguments."}
{"_id": "q_10690", "text": "Parse events in machine description to tuple representation used in Benchmark module.\n\n Examples:\n >>> parse_perfctr_event('PERF_EVENT:REG[0-3]')\n ('PERF_EVENT', 'REG[0-3]')\n >>> parse_perfctr_event('PERF_EVENT:REG[0-3]:STAY:FOO=23:BAR=0x23')\n ('PERF_EVENT', 'REG[0-3]', {'STAY': None, 'FOO': 23, 'BAR': 35})"}
{"_id": "q_10691", "text": "Enforce that no ranges overlap in internal storage."}
{"_id": "q_10692", "text": "Align iteration with cacheline boundary."}
{"_id": "q_10693", "text": "Return a list with number of loaded cache lines per memory hierarchy level."}
{"_id": "q_10694", "text": "Return a list with number of hit cache lines per memory hierarchy level."}
{"_id": "q_10695", "text": "Return a list with number of missed cache lines per memory hierarchy level."}
{"_id": "q_10696", "text": "Return a list with number of stored cache lines per memory hierarchy level."}
{"_id": "q_10697", "text": "Return a list with number of evicted cache lines per memory hierarchy level."}
{"_id": "q_10698", "text": "Return verbose information about the predictor."}
{"_id": "q_10699", "text": "Report gathered analysis data in human readable form."}
{"_id": "q_10700", "text": "Return True iff this function should be considered public."}
{"_id": "q_10701", "text": "Return True iff this method should be considered public."}
{"_id": "q_10702", "text": "Parse the given file-like object and return its Module object."}
{"_id": "q_10703", "text": "Consume one token and verify it is of the expected kind."}
{"_id": "q_10704", "text": "Skip tokens in the stream until a certain token kind is reached.\n\n If `value` is specified, tokens whose values are different will also\n be skipped."}
{"_id": "q_10705", "text": "Parse a single docstring and return its value."}
{"_id": "q_10706", "text": "Parse a 'from x import y' statement.\n\n The purpose is to find __future__ statements."}
{"_id": "q_10707", "text": "Parse the 'y' part in a 'from x import y' statement."}
{"_id": "q_10708", "text": "Use docutils to check docstrings are valid RST."}
{"_id": "q_10709", "text": "Load the source for the specified file."}
{"_id": "q_10710", "text": "Trusted commands cannot be run remotely\n\n :param f:\n :return:"}
{"_id": "q_10711", "text": "Escape newlines in any responses"}
{"_id": "q_10712", "text": "Attempt to restart the bot."}
{"_id": "q_10713", "text": "Resume playback if bot is paused"}
{"_id": "q_10714", "text": "List bot variables and values"}
{"_id": "q_10715", "text": "Make the current window fullscreen"}
{"_id": "q_10716", "text": "Set a variable."}
{"_id": "q_10717", "text": "Allow commands to have a last parameter of 'cookie=somevalue'\n\n TODO somevalue will be prepended onto any output lines so\n that editors can distinguish output from certain kinds\n of events they have sent.\n\n :param line:\n :return:"}
{"_id": "q_10718", "text": "Draw a daisy at x, y"}
{"_id": "q_10719", "text": "load context-free grammar"}
{"_id": "q_10720", "text": "Replaces HTML special characters by readable characters.\n\n As taken from Leif K-Brooks algorithm on:\n http://groups-beta.google.com/group/comp.lang.python"}
{"_id": "q_10721", "text": "Returns True when the url generates a \"404 Not Found\" error."}
{"_id": "q_10722", "text": "Determine the MIME-type of the document behind the url.\n\n MIME is more reliable than simply checking the document extension.\n Returns True when the MIME-type starts with anything in the list of types."}
{"_id": "q_10723", "text": "Draws a image form path, in x,y and resize it to width, height dimensions."}
{"_id": "q_10724", "text": "Draw a rectangle from x, y of width, height.\n\n :param startx: top left x-coordinate\n :param starty: top left y-coordinate\n\n :param width: width of rectangle.\n :param height: height of rectangle.\n :param roundness: Corner roundness defaults to 0.0 (a right-angle).\n :param draw: If True draws immediately.\n :param fill: Optionally pass a fill color.\n\n :return: path representing the rectangle."}
{"_id": "q_10725", "text": "Set the current rectmode.\n\n :param mode: CORNER, CENTER, CORNERS\n :return: rectmode if mode is None or valid."}
{"_id": "q_10726", "text": "Set the current ellipse drawing mode.\n\n :param mode: CORNER, CENTER, CORNERS\n :return: ellipsemode if mode is None or valid."}
{"_id": "q_10727", "text": "Draw an arrow.\n\n Arrows can be two types: NORMAL or FORTYFIVE.\n\n :param x: top left x-coordinate\n :param y: top left y-coordinate\n :param width: width of arrow\n :param type: NORMAL or FORTYFIVE\n :draw: If True draws arrow immediately\n\n :return: Path object representing the arrow."}
{"_id": "q_10728", "text": "Move relatively to the last point."}
{"_id": "q_10729", "text": "Draw a line using relative coordinates."}
{"_id": "q_10730", "text": "Set the current transform mode.\n\n :param mode: CENTER or CORNER"}
{"_id": "q_10731", "text": "Set a scale at which to draw objects.\n\n 1.0 draws objects at their natural size\n\n :param x: Scale on the horizontal plane\n :param y: Scale on the vertical plane"}
{"_id": "q_10732", "text": "Stop applying strokes to new paths.\n\n :return: stroke color before nostroke was called."}
{"_id": "q_10733", "text": "Set the stroke width.\n\n :param w: Stroke width.\n :return: If no width was specified then current width is returned."}
{"_id": "q_10734", "text": "Set the font to be used with new text instances.\n\n :param fontpath: path to truetype or opentype font.\n :param fontsize: size of font\n\n :return: current current fontpath (if fontpath param not set)\n Accepts TrueType and OpenType files. Depends on FreeType being\n installed."}
{"_id": "q_10735", "text": "Returns the height of a string of text according to the current\n font settings.\n\n :param txt: string to measure\n :param width: width of a line of text in a block"}
{"_id": "q_10736", "text": "Graph background color."}
{"_id": "q_10737", "text": "Visualization of a node's id."}
{"_id": "q_10738", "text": "Visualization of a single edge between two nodes."}
{"_id": "q_10739", "text": "Visualization of the label accompanying an edge."}
{"_id": "q_10740", "text": "Creates a new style which inherits from the default style,\n or from any other style whose name is supplied via the optional template parameter."}
{"_id": "q_10741", "text": "Returns a copy of all styles and a copy of the styleguide."}
{"_id": "q_10742", "text": "Check the rules for each node in the graph and apply the style."}
{"_id": "q_10743", "text": "Returns a copy of the styleguide for the given graph."}
{"_id": "q_10744", "text": "Loads all possible TUIO profiles and returns a dictionary with the\n profile addresses as keys and an instance of a profile as the value"}
{"_id": "q_10745", "text": "Tells the connection manager to receive the next 1024 bytes of messages\n to analyze."}
{"_id": "q_10746", "text": "copytree that works even if folder already exists"}
{"_id": "q_10747", "text": "Returns a Google news query formatted as a GoogleSearch list object."}
{"_id": "q_10748", "text": "Returns a Google blogs query formatted as a GoogleSearch list object."}
{"_id": "q_10749", "text": "Creates a unique filename in the cache for the id."}
{"_id": "q_10750", "text": "Returns the age of the cache entry, in days."}
{"_id": "q_10751", "text": "Returns a list of points where the circle and the line intersect.\n Returns an empty list when the circle and the line do not intersect."}
{"_id": "q_10752", "text": "Returns a BezierPath object with the transformation applied."}
{"_id": "q_10753", "text": "Returns bounds that encompass the intersection of the two.\n If there is no overlap between the two, None is returned."}
{"_id": "q_10754", "text": "Prints an error message, the help message and quits"}
{"_id": "q_10755", "text": "Draws an outlined path of the input text"}
{"_id": "q_10756", "text": "Returns a Yahoo web query formatted as a YahooSearch list object."}
{"_id": "q_10757", "text": "Returns a Yahoo images query formatted as a YahooSearch list object."}
{"_id": "q_10758", "text": "Returns a Yahoo news query formatted as a YahooSearch list object."}
{"_id": "q_10759", "text": "Flattens the given layers on the canvas.\n \n Merges the given layers with the indices in the list\n on the bottom layer in the list.\n The other layers are discarded."}
{"_id": "q_10760", "text": "Exports the flattened canvas.\n \n Flattens the canvas.\n PNG retains the alpha channel information.\n Other possibilities are JPEG and GIF."}
{"_id": "q_10761", "text": "Moves the layer up in the stacking order."}
{"_id": "q_10762", "text": "Moves the layer down in the stacking order."}
{"_id": "q_10763", "text": "Creates a copy of the current layer.\n \n This copy becomes the top layer on the canvas."}
{"_id": "q_10764", "text": "Increases or decreases the contrast in the layer.\n \n The given value is a percentage to increase\n or decrease the image contrast,\n for example 1.2 means contrast at 120%."}
{"_id": "q_10765", "text": "Inverts the layer."}
{"_id": "q_10766", "text": "Positions the layer at the given coordinates.\n \n The x and y parameters define where to position \n the top left corner of the layer,\n measured from the top left of the canvas."}
{"_id": "q_10767", "text": "Resizes the layer to the given width and height.\n \n When width w or height h is a floating-point number,\n scales percentual, \n otherwise scales to the given size in pixels."}
{"_id": "q_10768", "text": "Rotates the layer.\n \n Rotates the layer by given angle.\n Positive numbers rotate counter-clockwise,\n negative numbers rotate clockwise.\n \n Rotate commands are executed instantly,\n so many subsequent rotates will distort the image."}
{"_id": "q_10769", "text": "Flips the layer, either HORIZONTAL or VERTICAL."}
{"_id": "q_10770", "text": "Increases or decreases the sharpness in the layer.\n \n The given value is a percentage to increase\n or decrease the image sharpness,\n for example 0.8 means sharpness at 80%."}
{"_id": "q_10771", "text": "Returns a histogram for each RGBA channel.\n \n Returns a 4-tuple of lists, r, g, b, and a.\n Each list has 255 items, a count for each pixel value."}
{"_id": "q_10772", "text": "Initialise bot namespace with info in shoebot.data\n\n :param filename: Will be set to __file__ in the namespace"}
{"_id": "q_10773", "text": "Return False if bot should quit"}
{"_id": "q_10774", "text": "Sets a new accessible variable.\n\n :param v: Variable."}
{"_id": "q_10775", "text": "Creates a new table.\n \n Creates a table with the given name,\n containing the list of given fields.\n Since SQLite uses manifest typing, no data type need be supplied.\n The primary key is \"id\" by default,\n an integer that can be set or otherwise autoincrements."}
{"_id": "q_10776", "text": "Creates a table index.\n \n Creates an index on the given table,\n on the given field with unique values enforced or not,\n in ascending or descending order."}
{"_id": "q_10777", "text": "Commits any pending transactions and closes the database."}
{"_id": "q_10778", "text": "Edits the row with given id."}
{"_id": "q_10779", "text": "Deletes the row with given id."}
{"_id": "q_10780", "text": "Get the next available event or None\n\n :param block:\n :param timeout:\n :return: None or (event, data)"}
{"_id": "q_10781", "text": "Publish an event to any subscribers.\n\n :param event_t: event type\n :param data: event data\n :param extra_channels:\n :param wait:\n :return:"}
{"_id": "q_10782", "text": "Sets call_transform_mode to point to the\n center_transform or corner_transform"}
{"_id": "q_10783", "text": "Doesn't store exactly the same items as Nodebox for ease of implementation,\n it has enough to get the Nodebox Dentrite example working."}
{"_id": "q_10784", "text": "Load changed code into the execution environment.\n\n Until the code is executed correctly, it will be\n in the 'tenuous' state."}
{"_id": "q_10785", "text": "Attempt to run known good or tenuous source."}
{"_id": "q_10786", "text": "Context in which the user can run the source in a custom manner.\n\n If no exceptions occur then the source will move from 'tenuous'\n to 'known good'.\n\n >>> with run_context() as (known_good, source, ns):\n >>> ... exec source in ns\n >>> ... ns['draw']()"}
{"_id": "q_10787", "text": "Boids keep a small distance from other boids.\n \n Ensures that boids don't collide into each other,\n in a smoothly accelerated motion."}
{"_id": "q_10788", "text": "Returns the angle towards which the boid is steering."}
{"_id": "q_10789", "text": "Tendency towards a particular place."}
{"_id": "q_10790", "text": "Returns a copy of the layout for the given graph."}
{"_id": "q_10791", "text": "Remove nodes and edges and reset the layout."}
{"_id": "q_10792", "text": "Remove edges between nodes with given id's."}
{"_id": "q_10793", "text": "Returns the edge between the nodes with given id1 and id2."}
{"_id": "q_10794", "text": "Iterates the graph layout and updates node positions."}
{"_id": "q_10795", "text": "Removes all nodes with less or equal links than depth."}
{"_id": "q_10796", "text": "Calculates betweenness centrality and returns a node id -> weight dictionary.\n Node betweenness weights are updated in the process."}
{"_id": "q_10797", "text": "Calculates eigenvector centrality and returns a node id -> weight dictionary.\n Node eigenvalue weights are updated in the process."}
{"_id": "q_10798", "text": "Returns nodes sorted by betweenness centrality.\n Nodes with a lot of passing traffic will be at the front of the list."}
{"_id": "q_10799", "text": "Returns nodes with the given category attribute."}
{"_id": "q_10800", "text": "Returns a list of leaves, nodes connected to leaves, etc."}
{"_id": "q_10801", "text": "The number of edges in relation to the total number of possible edges."}
{"_id": "q_10802", "text": "Compute a cubic Bezier approximation of an elliptical arc.\n\n (x1, y1) and (x2, y2) are the corners of the enclosing rectangle.\n The coordinate system has coordinates that increase to the right and down.\n Angles, measured in degrees, start with 0 to the right (the positive X axis) \n and increase counter-clockwise.\n The arc extends from start_angle to start_angle+extent.\n I.e. start_angle=0 and extent=180 yields an open-side-down semi-circle.\n\n The resulting coordinates are of the form (x1,y1, x2,y2, x3,y3, x4,y4)\n such that the curve goes from (x1, y1) to (x4, y4) with (x2, y2) and\n (x3, y3) as their respective Bezier control points."}
{"_id": "q_10803", "text": "The angle in degrees between two vectors."}
{"_id": "q_10804", "text": "If size is not set, set it to DEFAULT_SIZE\n and return it.\n\n This means only the first call to size() is valid."}
{"_id": "q_10805", "text": "Passes the drawqueue to the sink for rendering"}
{"_id": "q_10806", "text": "Reflects the point x, y through origin x0, y0."}
{"_id": "q_10807", "text": "Returns true when x, y is on the path stroke outline."}
{"_id": "q_10808", "text": "Exports the path as SVG.\n \n Uses the filename given when creating this object.\n The file is automatically updated to reflect\n changes to the path."}
{"_id": "q_10809", "text": "Handle right mouse button clicks"}
{"_id": "q_10810", "text": "Hide the variables window"}
{"_id": "q_10811", "text": "Toggle fullscreen from outside the GUI,\n causes the GUI to updated and run all its actions."}
{"_id": "q_10812", "text": "Widget Action to set Windowed Mode."}
{"_id": "q_10813", "text": "Widget Action to Close the window, triggering the quit event."}
{"_id": "q_10814", "text": "Widget Action to Toggle fullscreen from the GUI"}
{"_id": "q_10815", "text": "Widget Action to toggle showing the variables window."}
{"_id": "q_10816", "text": "Called from main loop, if your sink needs to handle GUI events\n do it here.\n\n Check any GUI flags then call Gtk.main_iteration to update things."}
{"_id": "q_10817", "text": "GUI callback for key pressed"}
{"_id": "q_10818", "text": "Create an object, if fill, stroke or strokewidth\n is not specified, get them from the _canvas\n\n :param clazz:\n :param args:\n :param kwargs:\n :return:"}
{"_id": "q_10819", "text": "Returns an Image object of the current surface. Used for displaying\n output in Jupyter notebooks. Adapted from the cairo-jupyter project."}
{"_id": "q_10820", "text": "Set the canvas size\n\n Only the first call will actually be effective.\n\n :param w: Width\n :param h: height"}
{"_id": "q_10821", "text": "Set animation framerate.\n\n :param framerate: Frames per second to run bot.\n :return: Current framerate of animation."}
{"_id": "q_10822", "text": "Set callbacks for input events"}
{"_id": "q_10823", "text": "Returns the color and its complement in a list."}
{"_id": "q_10824", "text": "Returns a list of complementary colors.\n\n The complement is the color 180 degrees across\n the artistic RYB color wheel.\n The list contains darker and softer contrasting\n and complementing colors."}
{"_id": "q_10825", "text": "Returns a list with the split complement of the color.\n\n The split complement are the two colors to the left and right\n of the color's complement."}
{"_id": "q_10826", "text": "Returns the left half of the split complement.\n\n A list is returned with the same darker and softer colors\n as in the complementary list, but using the hue of the\n left split complement instead of the complement itself."}
{"_id": "q_10827", "text": "Returns the right half of the split complement."}
{"_id": "q_10828", "text": "Returns a triad of colors.\n\n The triad is made up of this color and two other colors\n that together make up an equilateral triangle on\n the artistic color wheel."}
{"_id": "q_10829", "text": "Returns a tetrad of colors.\n\n The tetrad is made up of this color and three other colors\n that together make up a cross on the artistic color wheel."}
{"_id": "q_10830", "text": "Guesses the shade and hue name of a color.\n\n If the given color is named in the named_colors list, return that name.\n Otherwise guess its nearest hue and shade range."}
{"_id": "q_10831", "text": "A dictionary of all aggregated words.\n\n The keys in the dictionary correspond to subfolders in the aggregated cache.\n Each key has a list of words. Each of these words is the name of an XML-file\n in the subfolder. The XML-file contains color information harvested from the web\n (or handmade)."}
{"_id": "q_10832", "text": "Returns a list of colors drawn from a morgueFile image.\n\n With the Web library installed,\n downloads a thumbnail from morgueFile and retrieves pixel colors."}
{"_id": "q_10833", "text": "Returns the name of the nearest named hue.\n\n For example,\n if you supply an indigo color (a color between blue and violet),\n the return value is \"violet\". If primary is set to True,\n the return value is \"purple\".\n\n Primary colors leave out the fuzzy lime, teal,\n cyan, azure and violet hues."}
{"_id": "q_10834", "text": "Returns a mix of two colors."}
{"_id": "q_10835", "text": "Rectangle swatch for this color."}
{"_id": "q_10836", "text": "Returns a list of colors based on pixel values in the image.\n\n The Core Image library must be present to determine pixel colors.\n F. Albers: http://nodebox.net/code/index.php/shared_2007-06-11-11-37-05"}
{"_id": "q_10837", "text": "Sorts the list by cmp1, then cuts it into n pieces which are sorted by cmp2.\n\n If you want to cluster by hue, use n=12 (since there are 12 primary/secondary hues).\n The resulting list will not contain n even slices:\n n is used rather to slice up the cmp1 property of the colors,\n e.g. cmp1=brightness and n=3 will cluster colors by brightness >= 0.66, 0.33, 0.0"}
{"_id": "q_10838", "text": "Returns a list that is a repetition of the given list.\n\n When oscillate is True,\n moves from the end back to the beginning,\n and then from the beginning to the end, and so on."}
{"_id": "q_10839", "text": "Rectangle swatches for all the colors in the list."}
{"_id": "q_10840", "text": "Fancy random ovals for all the colors in the list."}
{"_id": "q_10841", "text": "Returns intermediary colors for given list of colors."}
{"_id": "q_10842", "text": "Populates the list with a number of gradient colors.\n\n The list has Gradient.steps colors that interpolate between\n the fixed base Gradient.colors.\n\n The spread parameter controls the midpoint of the gradient,\n you can shift it right and left. A separate gradient is\n calculated for each half and then glued together."}
{"_id": "q_10843", "text": "Returns a copy of the range.\n\n Optionally, supply a color to get a range copy\n limited to the hue of that color."}
{"_id": "q_10844", "text": "Returns a color with random values in the defined h, s b, a ranges.\n\n If a color is given, use that color's hue and alpha,\n and generate its saturation and brightness from the shade.\n The hue is varied with the given d.\n\n In this way you could have a \"warm\" color range\n that returns all kinds of warm colors.\n When a red color is given as parameter it would generate\n all kinds of warm red colors."}
{"_id": "q_10845", "text": "Returns True if the given color is part of this color range.\n\n Check whether each h, s, b, a component of the color\n falls within the defined range for that component.\n\n If the given color is grayscale,\n checks against the definitions for black and white."}
{"_id": "q_10846", "text": "Returns the color information as XML.\n\n The XML has the following structure:\n <colors query=\"\">\n <color name=\"\" weight=\"\">\n <rgb r=\"\" g=\"\" b=\"\" />\n <shade name=\"\" weight=\"\" />\n </color>\n </colors>\n\n Notice that ranges are stored by name and retrieved in the _load()\n method with the shade() command - and are thus expected to be\n shades (e.g. intense, warm, ...) unless the shade() command would\n return any custom ranges as well. This can be done by appending custom\n ranges to the shades list."}
{"_id": "q_10847", "text": "Saves the color information in the cache as XML."}
{"_id": "q_10848", "text": "Returns a random color within the theme.\n\n Fetches a random range (the weight is taken into account,\n so ranges with a bigger weight have a higher chance of propagating)\n and hues it with the associated color."}
{"_id": "q_10849", "text": "Returns a number of random colors from the theme."}
{"_id": "q_10850", "text": "fseq messages associate a unique frame id with a set of set\n and alive messages"}
{"_id": "q_10851", "text": "Returns a generator list of tracked objects which are recognized with\n this profile and are in the current session."}
{"_id": "q_10852", "text": "Returns coordinates for point at t on the spline.\n Calculates the coordinates of x and y for a point at t on the cubic bezier spline,\n and its control points, based on the de Casteljau interpolation algorithm.\n The t parameter is a number between 0.0 and 1.0,\n x0 and y0 define the starting point of the spline,\n x1 and y1 its control point,\n x3 and y3 the ending point of the spline,\n x2 and y2 its control point.\n If the handles parameter is set, returns not only the point at t,\n but the modified control points of p0 and p3 should this point split the path as well."}
{"_id": "q_10853", "text": "Returns the length of the path.\n Calculates the length of each spline in the path, using n as a number of points to measure.\n When segmented is True, returns a list containing the individual length of each spline\n as values between 0.0 and 1.0, defining the relative length of each spline\n in relation to the total path length."}
{"_id": "q_10854", "text": "Yields all elements as PathElements"}
{"_id": "q_10855", "text": "Iterate through a gtk container, `parent`,\n and return the widget with the name `name`."}
{"_id": "q_10856", "text": "Find shoebot executable"}
{"_id": "q_10857", "text": "Returns the meta description in the page."}
{"_id": "q_10858", "text": "Returns the meta keywords in the page."}
{"_id": "q_10859", "text": "Returns a sorted copy of the list."}
{"_id": "q_10860", "text": "Returns a copy of the list without duplicates."}
{"_id": "q_10861", "text": "Tries to interpret the next 8 bytes of the data\n as a 64-bit signed integer."}
{"_id": "q_10862", "text": "Converts a typetagged OSC message to a Python list."}
{"_id": "q_10863", "text": "Given OSC data, tries to call the callback with the\n right address."}
{"_id": "q_10864", "text": "Sends decoded OSC data to an appropriate callback"}
{"_id": "q_10865", "text": "Adds a callback to our set of callbacks,\n or removes the callback with name if callback\n is None."}
{"_id": "q_10866", "text": "Check whether there is no more content to expect."}
{"_id": "q_10867", "text": "Send new source code to the bot\n\n :param source:\n :param good_cb: callback called if code was good\n :param bad_cb: callback called if code was bad (will get contents of exception)\n :return:"}
{"_id": "q_10868", "text": "Close outputs of process."}
{"_id": "q_10869", "text": "Get responses to commands sent"}
{"_id": "q_10870", "text": "If python-gi-cairo is not installed, using PangoCairo.create_context\n dies with an unhelpful KeyError; check for that and output something\n useful."}
{"_id": "q_10871", "text": "Determines if an item in a paragraph is a list.\n\n If all of the lines in the markup start with a \"*\" or \"1.\" \n this indicates a list as parsed by parse_paragraphs().\n It can be drawn with draw_list()."}
{"_id": "q_10872", "text": "Uses mimetex to generate a GIF-image from the LaTeX equation."}
{"_id": "q_10873", "text": "Draws list markup with indentation in NodeBox.\n\n Draw list markup at x, y coordinates\n using indented bullets or numbers.\n The callback is a command that takes a str and an int."}
{"_id": "q_10874", "text": "This is a very poor algorithm to draw Wikipedia tables in NodeBox."}
{"_id": "q_10875", "text": "Returns a list of images found in the markup.\n \n An image has a pathname, a description in plain text\n and a list of properties Wikipedia uses to size and place images.\n\n # A Wikipedia image looks like:\n # [[Image:Columbia Supercomputer - NASA Advanced Supercomputing Facility.jpg|right|thumb|\n # The [[NASA]] [[Columbia (supercomputer)|Columbia Supercomputer]].]]\n # Parts are separated by \"|\".\n # The first part is the image file, the last part can be a description.\n # In between are display properties, like \"right\" or \"thumb\"."}
{"_id": "q_10876", "text": "Creates a link from the table to paragraph and vice versa.\n \n Finds the first heading above the table in the markup.\n This is the title of the paragraph the table belongs to."}
{"_id": "q_10877", "text": "Returns a list of tables in the markup.\n\n A Wikipedia table looks like:\n {| border=\"1\"\n |-\n |Cell 1 (no modifier - not aligned)\n |-\n |align=\"right\" |Cell 2 (right aligned)\n |-\n |}"}
{"_id": "q_10878", "text": "Returns a list of words that appear in bold in the article.\n \n Things like table titles are not added to the list,\n these are probably bold because it makes the layout nice,\n not necessarily because they are important."}
{"_id": "q_10879", "text": "Given a Variable and a value, cleans it out"}
{"_id": "q_10880", "text": "Convenience method that works with all 2.x versions of Python\n to determine whether or not something is listlike."}
{"_id": "q_10881", "text": "Convenience method that works with all 2.x versions of Python\n to determine whether or not something is stringlike."}
{"_id": "q_10882", "text": "Sets up the initial relations between this element and\n other elements."}
{"_id": "q_10883", "text": "Finds the last element beneath this object to be parsed."}
{"_id": "q_10884", "text": "Returns the first item that matches the given criteria and\n appears after this Tag in the document."}
{"_id": "q_10885", "text": "Returns the siblings of this Tag that match the given\n criteria and appear after this Tag in the document."}
{"_id": "q_10886", "text": "Returns the closest sibling to this Tag that matches the\n given criteria and appears before this Tag in the document."}
{"_id": "q_10887", "text": "Returns the siblings of this Tag that match the given\n criteria and appear before this Tag in the document."}
{"_id": "q_10888", "text": "Returns the parents of this Tag that match the given\n criteria."}
{"_id": "q_10889", "text": "Encodes an object to a string in some encoding, or to Unicode."}
{"_id": "q_10890", "text": "Cheap function to invert a hash."}
{"_id": "q_10891", "text": "Used in a call to re.sub to replace HTML, XML, and numeric\n entities with the appropriate Unicode characters. If HTML\n entities are being converted, any unrecognized entities are\n escaped."}
{"_id": "q_10892", "text": "Recursively destroys the contents of this tree."}
{"_id": "q_10893", "text": "Renders the contents of this tag as a string in the given\n encoding. If encoding is None, returns a Unicode string."}
{"_id": "q_10894", "text": "Return only the first child of this Tag matching the given\n criteria."}
{"_id": "q_10895", "text": "Returns true iff the given string is the name of a\n self-closing tag according to this parser."}
{"_id": "q_10896", "text": "Adds a certain piece of text to the tree as a NavigableString\n subclass."}
{"_id": "q_10897", "text": "Handle a processing instruction as a ProcessingInstruction\n object, possibly one with a %SOUP-ENCODING% slot into which an\n encoding will be plugged later."}
{"_id": "q_10898", "text": "Handle character references as data."}
{"_id": "q_10899", "text": "Treat a bogus SGML declaration as raw data. Treat a CDATA\n declaration as a CData object."}
{"_id": "q_10900", "text": "Beautiful Soup can detect a charset included in a META tag,\n try to convert the document to that charset, and re-parse the\n document from the beginning."}
{"_id": "q_10901", "text": "Changes a MS smart quote character to an XML or HTML\n entity."}
{"_id": "q_10902", "text": "Given a string and its encoding, decodes the string into Unicode.\n %encoding is a string recognized by encodings.aliases"}
{"_id": "q_10903", "text": "Given a document, tries to detect its XML encoding."}
{"_id": "q_10904", "text": "Scale context based on difference between bot size and widget"}
{"_id": "q_10905", "text": "Draw just the exposed part of the backing store, scaled to fit"}
{"_id": "q_10906", "text": "Creates a recording surface for the bot to draw on\n\n :param size: The width and height of bot"}
{"_id": "q_10907", "text": "Return a JSON string representation of a Python data structure.\n\n >>> JSONEncoder().encode({\"foo\": [\"bar\", \"baz\"]})\n '{\"foo\": [\"bar\", \"baz\"]}'"}
{"_id": "q_10908", "text": "Called when CairoCanvas needs a cairo context to draw on"}
{"_id": "q_10909", "text": "Function to output to a cairo surface\n\n target is a cairo Context or filename\n if file_number is set, then files will be numbered\n (this is usually set to the current frame number)"}
{"_id": "q_10910", "text": "Create canvas and sink for attachment to a bot\n\n canvas is what draws images, 'sink' is the final consumer of the images\n\n :param src: Defaults for title or outputfile if not specified.\n\n :param format: CairoImageSink image format, if using buff instead of outputfile\n :param buff: CairoImageSink buffer object to send output to\n\n :param outputfile: CairoImageSink output filename e.g. \"hello.svg\"\n :param multifile: CairoImageSink if True,\n\n :param title: ShoebotWindow - set window title\n :param fullscreen: ShoebotWindow - set window fullscreen\n :param show_vars: ShoebotWindow - display variable window\n\n Two kinds of sink are provided: CairoImageSink and ShoebotWindow\n\n ShoebotWindow\n\n Displays a window to draw shoebot inside.\n\n\n CairoImageSink\n\n Output to a filename (or files if multifile is set), or a buffer object."}
{"_id": "q_10911", "text": "Return True if the buffer was saved"}
{"_id": "q_10912", "text": "Called when a slider is adjusted."}
{"_id": "q_10913", "text": "var was added in the bot while it ran, possibly\n by livecoding\n\n :param v:\n :return:"}
{"_id": "q_10914", "text": "var was added in the bot\n\n :param v:\n :return:"}
{"_id": "q_10915", "text": "Returns cached copies unless otherwise specified."}
{"_id": "q_10916", "text": "Expand the path with color information.\n \n Attempts to extract fill and stroke colors\n from the element and adds it to path attributes."}
{"_id": "q_10917", "text": "Returns a copy of the event handler, remembering the last node clicked."}
{"_id": "q_10918", "text": "Drags given node to mouse location."}
{"_id": "q_10919", "text": "Displays a popup when hovering over a node."}
{"_id": "q_10920", "text": "Returns a cached textpath of the given text in queue."}
{"_id": "q_10921", "text": "Draws a popup rectangle with a rotating text queue."}
{"_id": "q_10922", "text": "write FILENAME\n Write a local copy of FILENAME using FILENAME_tweaks for local tweaks."}
{"_id": "q_10923", "text": "Amend a filename with a suffix.\n\n amend_filename(\"foo.txt\", \"_tweak\") --> \"foo_tweak.txt\""}
{"_id": "q_10924", "text": "check FILENAME\n Check that FILENAME has not been edited since writing."}
{"_id": "q_10925", "text": "Check that a checker's visitors are correctly named.\n\n A checker has methods named visit_NODETYPE, but it's easy to mis-name\n a visit method, and it will never be called. This decorator checks\n the class to see that all of its visitors are named after an existing\n node class."}
{"_id": "q_10926", "text": "Parse the pylint output-format=parseable lines into PylintError tuples."}
{"_id": "q_10927", "text": "The edx_lint command entry point."}
{"_id": "q_10928", "text": "Print the help string for the edx_lint command."}
{"_id": "q_10929", "text": "Parse the description in the README file\n\n CommandLine:\n python -c \"import setup; print(setup.parse_description())\""}
{"_id": "q_10930", "text": "Create a transformation class object\n\n Parameters\n ----------\n name : str\n Name of the transformation\n transform : callable ``f(x)``\n A function (preferably a `ufunc`) that computes\n the transformation.\n inverse : callable ``f(x)``\n A function (preferably a `ufunc`) that computes\n the inverse of the transformation.\n breaks : callable ``f(limits)``\n Function to compute the breaks for this transform.\n If None, then a default good enough for a linear\n domain is used.\n minor_breaks : callable ``f(major, limits)``\n Function to compute the minor breaks for this\n transform. If None, then a default good enough for\n a linear domain is used.\n _format : callable ``f(breaks)``\n Function to format the generated breaks.\n domain : array_like\n Domain over which the transformation is valid.\n It should be of length 2.\n doc : str\n Docstring for the class.\n **kwargs : dict\n Attributes of the transform, e.g if base is passed\n in kwargs, then `t.base` would be a valid attribute.\n\n Returns\n -------\n out : trans\n Transform class"}
{"_id": "q_10931", "text": "Return a trans object\n\n Parameters\n ----------\n t : str | callable | type | trans\n name of transformation function\n\n Returns\n -------\n out : trans"}
{"_id": "q_10932", "text": "Calculate breaks in data space and return them\n in transformed space.\n\n Expects limits to be in *transform space*, this\n is the same space as that where the domain is\n specified.\n\n This method wraps around :meth:`breaks_` to ensure\n that the calculated breaks are within the domain\n the transform. This is helpful in cases where an\n aesthetic requests breaks with limits expanded for\n some padding, yet the expansion goes beyond the\n domain of the transform. e.g for a probability\n transform the breaks will be in the domain\n ``[0, 1]`` despite any outward limits.\n\n Parameters\n ----------\n limits : tuple\n The scale limits. Size 2.\n\n Returns\n -------\n out : array_like\n Major breaks"}
{"_id": "q_10933", "text": "Transform from date to a numerical format"}
{"_id": "q_10934", "text": "Rescale numeric vector to have specified minimum and maximum.\n\n Parameters\n ----------\n x : array_like | numeric\n 1D vector of values to manipulate.\n to : tuple\n output range (numeric vector of length two)\n _from : tuple\n input range (numeric vector of length two).\n If not given, is calculated from the range of x\n\n Returns\n -------\n out : array_like\n Rescaled values\n\n Examples\n --------\n >>> x = [0, 2, 4, 6, 8, 10]\n >>> rescale(x)\n array([0. , 0.2, 0.4, 0.6, 0.8, 1. ])\n >>> rescale(x, to=(0, 2))\n array([0. , 0.4, 0.8, 1.2, 1.6, 2. ])\n >>> rescale(x, to=(0, 2), _from=(0, 20))\n array([0. , 0.2, 0.4, 0.6, 0.8, 1. ])"}
{"_id": "q_10935", "text": "Rescale numeric vector to have specified maximum.\n\n Parameters\n ----------\n x : array_like | numeric\n 1D vector of values to manipulate.\n to : tuple\n output range (numeric vector of length two)\n _from : tuple\n input range (numeric vector of length two).\n If not given, is calculated from the range of x.\n Only the 2nd (max) element is essential to the\n output.\n\n Returns\n -------\n out : array_like\n Rescaled values\n\n Examples\n --------\n >>> x = [0, 2, 4, 6, 8, 10]\n >>> rescale_max(x, (0, 3))\n array([0. , 0.6, 1.2, 1.8, 2.4, 3. ])\n\n Only the 2nd (max) element of the parameters ``to``\n and ``_from`` are essential to the output.\n\n >>> rescale_max(x, (1, 3))\n array([0. , 0.6, 1.2, 1.8, 2.4, 3. ])\n >>> rescale_max(x, (0, 20))\n array([ 0., 4., 8., 12., 16., 20.])\n\n If :python:`max(x) < _from[1]` then values will be\n scaled beyond the requested (:python:`to[1]`) maximum.\n\n >>> rescale_max(x, to=(1, 3), _from=(-1, 6))\n array([0., 1., 2., 3., 4., 5.])"}
{"_id": "q_10936", "text": "Truncate infinite values to a range.\n\n Parameters\n ----------\n x : array_like\n Values that should have infinities squished.\n range : tuple\n The range onto which to squish the infinites.\n Must be of size 2.\n\n Returns\n -------\n out : array_like\n Values with infinites squished.\n\n Examples\n --------\n >>> squish_infinite([0, .5, .25, np.inf, .44])\n [0.0, 0.5, 0.25, 1.0, 0.44]\n >>> squish_infinite([0, -np.inf, .5, .25, np.inf], (-10, 9))\n [0.0, -10.0, 0.5, 0.25, 9.0]"}
{"_id": "q_10937", "text": "Squish values into range.\n\n Parameters\n ----------\n x : array_like\n Values that should have out of range values squished.\n range : tuple\n The range onto which to squish the values.\n only_finite: boolean\n When true, only squishes finite values.\n\n Returns\n -------\n out : array_like\n Values with out of range values squished.\n\n Examples\n --------\n >>> squish([-1.5, 0.2, 0.5, 0.8, 1.0, 1.2])\n [0.0, 0.2, 0.5, 0.8, 1.0, 1.0]\n\n >>> squish([-np.inf, -1.5, 0.2, 0.5, 0.8, 1.0, np.inf], only_finite=False)\n [0.0, 0.0, 0.2, 0.5, 0.8, 1.0, 1.0]"}
{"_id": "q_10938", "text": "Determine if range of vector is close to zero.\n\n Parameters\n ----------\n x : array_like | numeric\n Value(s) to check. If it is an array_like, it\n should be of length 2.\n tol : float\n Tolerance. Default tolerance is the `machine epsilon`_\n times :math:`10^2`.\n\n Returns\n -------\n out : bool\n Whether ``x`` has zero range.\n\n Examples\n --------\n >>> zero_range([1, 1])\n True\n >>> zero_range([1, 2])\n False\n >>> zero_range([1, 2], tol=2)\n True\n\n .. _machine epsilon: https://en.wikipedia.org/wiki/Machine_epsilon"}
{"_id": "q_10939", "text": "Expand a range with multiplicative or additive constants\n\n Similar to :func:`expand_range` but both sides of the range\n are expanded using different constants\n\n Parameters\n ----------\n range : tuple\n Range of data. Size 2\n expand : tuple\n Length 2 or 4. If length is 2, then the same constants\n are used for both sides. If length is 4 then the first\n two are the Multiplicative (*mul*) and Additive (*add*)\n constants for the lower limit, and the second two are\n the constants for the upper limit.\n zero_width : int | float | timedelta\n Distance to use if range has zero width\n\n Returns\n -------\n out : tuple\n Expanded range\n\n Examples\n --------\n >>> expand_range_distinct((3, 8))\n (3, 8)\n >>> expand_range_distinct((0, 10), (0.1, 0))\n (-1.0, 11.0)\n >>> expand_range_distinct((0, 10), (0.1, 0, 0.1, 0))\n (-1.0, 11.0)\n >>> expand_range_distinct((0, 10), (0.1, 0, 0, 0))\n (-1.0, 10)\n >>> expand_range_distinct((0, 10), (0, 2))\n (-2, 12)\n >>> expand_range_distinct((0, 10), (0, 2, 0, 2))\n (-2, 12)\n >>> expand_range_distinct((0, 10), (0, 0, 0, 2))\n (0, 12)\n >>> expand_range_distinct((0, 10), (.1, 2))\n (-3.0, 13.0)\n >>> expand_range_distinct((0, 10), (.1, 2, .1, 2))\n (-3.0, 13.0)\n >>> expand_range_distinct((0, 10), (0, 0, .1, 2))\n (0, 13.0)"}
{"_id": "q_10940", "text": "Append 2 extra breaks at either end of major\n\n If breaks of transform space are non-equidistant,\n :func:`minor_breaks` add minor breaks beyond the first\n and last major breaks. The solution is to extend those\n breaks (in transformed space) before the minor break call\n is made. How the breaks are extended depends on the type of transform."}
{"_id": "q_10941", "text": "Determine good units for representing a sequence of timedeltas"}
{"_id": "q_10942", "text": "Convert sequence of numerics to timedelta"}
{"_id": "q_10943", "text": "Convert timedelta to a number corresponding to the\n appropriate units. The appropriate units are those\n determined when the object is initialised."}
{"_id": "q_10944", "text": "Round to multiple of any number."}
{"_id": "q_10945", "text": "Return the minimum and maximum of x\n\n Parameters\n ----------\n x : array_like\n Sequence\n na_rm : bool\n Whether to remove ``nan`` values.\n finite : bool\n Whether to consider only finite values.\n\n Returns\n -------\n out : tuple\n (minimum, maximum) of x"}
{"_id": "q_10946", "text": "Return nearest long integer to x"}
{"_id": "q_10947", "text": "Check if value is close to an integer\n\n Parameters\n ----------\n x : float\n Numeric value to check\n\n Returns\n -------\n out : bool"}
{"_id": "q_10948", "text": "Helper to format and tidy up"}
{"_id": "q_10949", "text": "Get a set of evenly spaced colors in HLS hue space.\n\n h, l, and s should be between 0 and 1\n\n Parameters\n ----------\n\n n_colors : int\n number of colors in the palette\n h : float\n first hue\n l : float\n lightness\n s : float\n saturation\n\n Returns\n -------\n palette : list\n List of colors as RGB hex strings.\n\n See Also\n --------\n husl_palette : Make a palette using evenly spaced circular\n hues in the HUSL system.\n\n Examples\n --------\n >>> len(hls_palette(2))\n 2\n >>> len(hls_palette(9))\n 9"}
{"_id": "q_10950", "text": "Utility for making hue palettes for color schemes.\n\n Parameters\n ----------\n h : float\n first hue. In the [0, 1] range\n l : float\n lightness. In the [0, 1] range\n s : float\n saturation. In the [0, 1] range\n color_space : 'hls' | 'husl'\n Color space to use for the palette\n\n Returns\n -------\n out : function\n A discrete color palette that takes a single\n :class:`int` parameter ``n`` and returns ``n``\n equally spaced colors. Though the palette\n is continuous, since it varies the hue it\n is good for categorical data. However if ``n``\n is large enough the colors show continuity.\n\n Examples\n --------\n >>> hue_pal()(5)\n ['#db5f57', '#b9db57', '#57db94', '#5784db', '#c957db']\n >>> hue_pal(color_space='husl')(5)\n ['#e0697e', '#9b9054', '#569d79', '#5b98ab', '#b675d7']"}
{"_id": "q_10951", "text": "Utility for making a brewer palette\n\n Parameters\n ----------\n type : 'sequential' | 'qualitative' | 'diverging'\n Type of palette. Sequential, Qualitative or\n Diverging. The following abbreviations may\n be used, ``seq``, ``qual`` or ``div``.\n\n palette : int | str\n Which palette to choose from. If it is an integer,\n it must be in the range ``[0, m]``, where ``m``\n depends on the number of sequential, qualitative or\n diverging palettes. If it is a string, then it\n is the name of the palette.\n\n Returns\n -------\n out : function\n A color palette that takes a single\n :class:`int` parameter ``n`` and returns ``n``\n colors. The maximum value of ``n`` varies\n depending on the parameters.\n\n Examples\n --------\n >>> brewer_pal()(5)\n ['#EFF3FF', '#BDD7E7', '#6BAED6', '#3182BD', '#08519C']\n >>> brewer_pal('qual')(5)\n ['#7FC97F', '#BEAED4', '#FDC086', '#FFFF99', '#386CB0']\n >>> brewer_pal('qual', 2)(5)\n ['#1B9E77', '#D95F02', '#7570B3', '#E7298A', '#66A61E']\n >>> brewer_pal('seq', 'PuBuGn')(5)\n ['#F6EFF7', '#BDC9E1', '#67A9CF', '#1C9099', '#016C59']\n\n The available color names for each palette type can be\n obtained using the following code::\n\n import palettable.colorbrewer as brewer\n\n print([k for k in brewer.COLOR_MAPS['Sequential'].keys()])\n print([k for k in brewer.COLOR_MAPS['Qualitative'].keys()])\n print([k for k in brewer.COLOR_MAPS['Diverging'].keys()])"}
{"_id": "q_10952", "text": "Create an n color gradient palette\n\n Parameters\n ----------\n colors : list\n list of colors\n values : list, optional\n list of points in the range [0, 1] at which to\n place each color. Must be the same size as\n `colors`. Default to evenly space the colors\n name : str\n Name to call the resultant MPL colormap\n\n Returns\n -------\n out : function\n Continuous color palette that takes a single\n parameter, either a :class:`float` or a sequence\n of floats, maps those value(s) onto the palette\n and returns color(s). The float(s) must be\n in the range [0, 1].\n\n Examples\n --------\n >>> palette = gradient_n_pal(['red', 'blue'])\n >>> palette([0, .25, .5, .75, 1])\n ['#ff0000', '#bf0040', '#7f0080', '#3f00c0', '#0000ff']"}
{"_id": "q_10953", "text": "Create a continuous palette using an MPL colormap\n\n Parameters\n ----------\n name : str\n Name of colormap\n lut : None | int\n This is the number of entries desired in the lookup table.\n Default is ``None``, leave it up to Matplotlib.\n\n Returns\n -------\n out : function\n Continuous color palette that takes a single\n parameter, either a :class:`float` or a sequence\n of floats, maps those value(s) onto the palette\n and returns color(s). The float(s) must be\n in the range [0, 1].\n\n Examples\n --------\n >>> palette = cmap_pal('viridis')\n >>> palette([.1, .2, .3, .4, .5])\n ['#482475', '#414487', '#355f8d', '#2a788e', '#21918c']"}
{"_id": "q_10954", "text": "Create a discrete palette using an MPL Listed colormap\n\n Parameters\n ----------\n name : str\n Name of colormap\n lut : None | int\n This is the number of entries desired in the lookup table.\n Default is ``None``, leave it up to Matplotlib.\n\n Returns\n -------\n out : function\n A discrete color palette that takes a single\n :class:`int` parameter ``n`` and returns ``n``\n colors. The maximum value of ``n`` varies\n depending on the parameters.\n\n Examples\n --------\n >>> palette = cmap_d_pal('viridis')\n >>> palette(5)\n ['#440154', '#3b528b', '#21918c', '#5cc863', '#fde725']"}
{"_id": "q_10955", "text": "Create a palette from a list of values\n\n Parameters\n ----------\n values : sequence\n Values that will be returned by the palette function.\n\n Returns\n -------\n out : function\n A function palette that takes a single\n :class:`int` parameter ``n`` and returns ``n`` values.\n\n Examples\n --------\n >>> palette = manual_pal(['a', 'b', 'c', 'd', 'e'])\n >>> palette(3)\n ['a', 'b', 'c']"}
{"_id": "q_10956", "text": "Scale data continuously\n\n Parameters\n ----------\n x : array_like\n Continuous values to scale\n palette : callable ``f(x)``\n Palette to use\n na_value : object\n Value to use for missing values.\n trans : trans\n How to transform the data before scaling. If\n ``None``, no transformation is done.\n\n Returns\n -------\n out : array_like\n Scaled values"}
{"_id": "q_10957", "text": "Map values to a continuous palette\n\n Parameters\n ----------\n x : array_like\n Continuous values to scale\n palette : callable ``f(x)``\n palette to use\n na_value : object\n Value to use for missing values.\n oob : callable ``f(x)``\n Function to deal with values that are\n beyond the limits\n\n Returns\n -------\n out : array_like\n Values mapped onto a palette"}
{"_id": "q_10958", "text": "Register a parser for an attribute type.\n\n Parsers will be used to parse `str` type objects from either\n the commandline arguments or environment variables.\n\n Args:\n type: the type the decorated function will be responsible\n for parsing an environment variable to."}
{"_id": "q_10959", "text": "Used to patch cookiecutter's ``run_hook`` function.\n\n This patched version ensures that the temple.yaml file is created before\n any cookiecutter hooks are executed"}
{"_id": "q_10960", "text": "Uses cookiecutter to generate files for the project.\n\n Monkeypatches cookiecutter's \"run_hook\" to ensure that the temple.yaml file is\n generated before any hooks run. This is important to ensure that hooks can also\n perform any actions involving temple.yaml"}
{"_id": "q_10961", "text": "Sets up a new project from a template\n\n Note that the `temple.constants.TEMPLE_ENV_VAR` is set to 'setup' during the duration\n of this function.\n\n Args:\n template (str): The git SSH path to a template\n version (str, optional): The version of the template to use when updating. Defaults\n to the latest version"}
{"_id": "q_10962", "text": "Parses Github's link header for pagination.\n\n TODO eventually use a github client for this"}
{"_id": "q_10963", "text": "Update package with latest template. Must be inside of the project\n folder to run.\n\n Using \"-e\" will prompt for re-entering the template parameters again\n even if the project is up to date.\n\n Use \"-v\" to update to a particular version of a template.\n\n Using \"-c\" will perform a check that the project is up to date\n with the latest version of the template (or the version specified by \"-v\").\n No updating will happen when using this option."}
{"_id": "q_10964", "text": "List packages created with temple. Enter a github user or\n organization to list all templates under the user or org.\n Using a template path as the second argument will list all projects\n that have been started with that template.\n\n Use \"-l\" to print the Github repository descriptions of templates\n or projects."}
{"_id": "q_10965", "text": "Switch a project's template to a different template."}
{"_id": "q_10966", "text": "Returns True if inside a git repo, False otherwise"}
{"_id": "q_10967", "text": "Return True if the target branch exists."}
{"_id": "q_10968", "text": "Raises `ExistingBranchError` if the specified branch exists."}
{"_id": "q_10969", "text": "Raises `InvalidTempleProjectError` if repository is not a temple project"}
{"_id": "q_10970", "text": "Determine the current git branch"}
{"_id": "q_10971", "text": "Cleans up temporary resources\n\n Tries to clean up:\n\n 1. The temporary update branch used during ``temple update``\n 2. The primary update branch used during ``temple update``"}
{"_id": "q_10972", "text": "Given an old version and new version, check if the cookiecutter.json files have changed\n\n When the cookiecutter.json files change, it means the user will need to be prompted for\n new context\n\n Args:\n template (str): The git SSH path to the template\n old_version (str): The git SHA of the old version\n new_version (str): The git SHA of the new version\n\n Returns:\n bool: True if the cookiecutter.json files have been changed in the old and new versions"}
{"_id": "q_10973", "text": "Apply a template to a temporary directory and then copy results to target."}
{"_id": "q_10974", "text": "Writes the temple YAML configuration"}
{"_id": "q_10975", "text": "Obtains the configuration used for cookiecutter templating\n\n Args:\n template: Path to the template\n default_config (dict, optional): The default configuration\n version (str, optional): The git SHA or branch to use when\n checking out template. Defaults to latest version\n\n Returns:\n tuple: The cookiecutter repo directory and the config dict"}
{"_id": "q_10976", "text": "Decorator that sets the temple command env var to value"}
{"_id": "q_10977", "text": "Perform a github API call\n\n Args:\n verb (str): Can be \"post\", \"put\", or \"get\"\n url (str): The base URL with a leading slash for Github API (v3)\n auth (str or HTTPBasicAuth): A Github API token or a HTTPBasicAuth object"}
{"_id": "q_10978", "text": "Deploys the package and documentation.\n\n Proceeds in the following steps:\n\n 1. Ensures proper environment variables are set and checks that we are on Circle CI\n 2. Tags the repository with the new version\n 3. Creates a standard distribution and a wheel\n 4. Updates version.py to have the proper version\n 5. Commits the ChangeLog, AUTHORS, and version.py file\n 6. Pushes to PyPI\n 7. Pushes the tags and newly committed files\n\n Raises:\n `EnvironmentError`:\n - Not running on CircleCI\n - `*_PYPI_USERNAME` and/or `*_PYPI_PASSWORD` environment variables\n are missing\n - Attempting to deploy to production from a branch that isn't master"}
{"_id": "q_10979", "text": "Record a purchase in Sailthru\n\n Arguments:\n sailthru_client (object): SailthruClient\n email (str): user's email address\n item (dict): Sailthru required information about the course\n purchase_incomplete (boolean): True if adding item to shopping cart\n message_id (str): Cookie used to identify marketing campaign\n options (dict): Sailthru purchase API options (e.g. template name)\n\n Returns:\n False if retryable error, else True"}
{"_id": "q_10980", "text": "Get course information using the Sailthru content api or from cache.\n\n If there is an error, just return with an empty response.\n\n Arguments:\n course_id (str): course key of the course\n course_url (str): LMS url for course info page.\n sailthru_client (object): SailthruClient\n site_code (str): site code\n config (dict): config options\n\n Returns:\n course information from Sailthru"}
{"_id": "q_10981", "text": "Get course information using the Ecommerce course api.\n\n In case of error returns empty response.\n Arguments:\n course_id (str): course key of the course\n site_code (str): site code\n\n Returns:\n course information from Ecommerce"}
{"_id": "q_10982", "text": "Sends the course refund email.\n\n Args:\n self: Ignore.\n email (str): Recipient's email address.\n refund_id (int): ID of the refund that initiated this task.\n amount (str): Formatted amount of the refund.\n course_name (str): Name of the course for which payment was refunded.\n order_number (str): Order number of the order that was refunded.\n order_url (str): Receipt URL of the refunded order.\n site_code (str): Identifier of the site sending the email."}
{"_id": "q_10983", "text": "Handles sending offer assignment notification emails and retrying failed emails when appropriate."}
{"_id": "q_10984", "text": "Returns a dictionary containing logging configuration.\n\n If dev_env is True, logging will not be done via local rsyslogd.\n Instead, application logs will be dropped into log_dir. 'edx_filename'\n is ignored unless dev_env is True."}
{"_id": "q_10985", "text": "Retry with exponential backoff until fulfillment\n succeeds or the retry limit is reached. If the retry limit is exceeded,\n the exception is re-raised."}
{"_id": "q_10986", "text": "Fulfills an order.\n\n Arguments:\n order_number (str): Order number indicating which order to fulfill.\n\n Returns:\n None"}
{"_id": "q_10987", "text": "Returns a Sailthru client for the specified site.\n\n Args:\n site_code (str): Site for which the client should be configured.\n\n Returns:\n SailthruClient\n\n Raises:\n SailthruNotEnabled: If Sailthru is not enabled for the specified site.\n ConfigurationError: If either the Sailthru API key or secret are not set for the site."}
{"_id": "q_10988", "text": "Get an object from the cache\n\n Arguments:\n key (str): Cache key\n\n Returns:\n Cached object"}
{"_id": "q_10989", "text": "Save an object in the cache\n\n Arguments:\n key (str): Cache key\n value (object): object to cache\n duration (int): time in seconds to keep object in cache"}
{"_id": "q_10990", "text": "Get a value from configuration.\n\n Retrieves the value corresponding to the given variable from the configuration module\n currently in use by the app. Specify a site_code value to check for a site-specific override.\n\n Arguments:\n variable (str): The name of a variable from the configuration module.\n\n Keyword Arguments:\n site_code (str): The SITE_OVERRIDES key to inspect for site-specific values\n\n Returns:\n The value corresponding to the variable, or None if the variable is not found."}
{"_id": "q_10991", "text": "Get the name of the file containing configuration overrides\n from the provided environment variable."}
{"_id": "q_10992", "text": "Finds .DS_Store files in path"}
{"_id": "q_10993", "text": "Method executed dynamically by the framework. This method will make an HTTP request to\n\t\tthe endpoint set in the config file with the issues and other data."}
{"_id": "q_10994", "text": "Parse the config values\n\n\t\tArgs:\n\t\t\tvalue (dict): Dictionary which contains the checker config\n\n\t\tReturns:\n\t\t\tdict: The checker config with parsed values"}
{"_id": "q_10995", "text": "Get the OS name. If OS is linux, returns the Linux distribution name\n\n\t\tReturns:\n\t\t\tstr: OS name"}
{"_id": "q_10996", "text": "Executes the command set in the class\n\n\t\tArgs:\n\t\t\tshell (boolean): Set True if the command is a shell command. Default: True"}
{"_id": "q_10997", "text": "Print a message if the class attribute 'verbose' is enabled\n\n\t\tArgs:\n\t\t\tmessage (str): Message to print"}
{"_id": "q_10998", "text": "Creates required directories and copies checkers and reports."}
{"_id": "q_10999", "text": "Writes a section for a plugin.\n\n\t\tArgs:\n\t\t\tinstance (object): Class instance for plugin\n\t\t\tconfig (object): Object (ConfigParser) which contains the current config\n\t\t\tparent_section (str): Parent section for plugin. Usually 'checkers' or 'reports'"}
{"_id": "q_11000", "text": "Returns a class instance from a .py file.\n\n\t\tArgs:\n\t\t\tpath (str): Absolute path to .py file\n\t\t\targs (dict): Arguments passed via class constructor\n\n\t\tReturns:\n\t\t\tobject: Class instance or None"}
{"_id": "q_11001", "text": "Execute a specific method for each class instance located in path\n\n\t\tArgs:\n\t\t\tpath (str): Absolute path which contains the .py files\n\t\t\tmethod (str): Method to execute on the class instance\n\n\t\tReturns:\n\t\t\tdict: Dictionary which contains the response for every class instance.\n\t\t\t\t The dictionary keys are the values of the 'NAME' class variable."}
{"_id": "q_11002", "text": "Install all the dependencies"}
{"_id": "q_11003", "text": "Setter for 'potential' property\n\n\t\tArgs:\n\t\t\tvalue (bool): True if a potential is required, False otherwise"}
{"_id": "q_11004", "text": "Finds the value depending on the current eplus version.\n\n\n Parameters\n ----------\n d: dict\n {(0, 0): value, (x, x): value, ...}\n for current version (cv), current value is the value of version v such that v <= cv < v+1"}
{"_id": "q_11005", "text": "if _eplus_version is defined => _eplus_version\n else most recent eplus available version"}
{"_id": "q_11006", "text": "!! Must only be called once, when empty !!"}
{"_id": "q_11007", "text": "An external file manages file paths."}
{"_id": "q_11008", "text": "All fields of Epm that are null and have a default value will be set to their default value."}
{"_id": "q_11009", "text": "This function finishes initialization, must be called once all field descriptors and tag have been filled."}
{"_id": "q_11010", "text": "manages extensible names"}
{"_id": "q_11011", "text": "we calculate on the fly to avoid managing registrations and un-registrations\n\n Returns\n -------\n {ref: short_ref, ..."}
{"_id": "q_11012", "text": "Updates simultaneously all given fields.\n\n Parameters\n ----------\n data: dictionary containing field lowercase names or index as keys, and field values as values (dict syntax)\n or_data: keyword arguments containing field names as keys (kwargs syntax)"}
{"_id": "q_11013", "text": "sets all empty fields for which a default value is defined to their default value"}
{"_id": "q_11014", "text": "This method only works for extensible fields. It allows adding values without specifying their fields' names\n or indexes.\n\n Parameters\n ----------\n args: field values"}
{"_id": "q_11015", "text": "This method only works for extensible fields. It allows removing a value and shifting all other values to fill\n the gap.\n\n Parameters\n ----------\n index: int, default None\n index of field to remove.\n\n Returns\n -------\n serialized value of popped field"}
{"_id": "q_11016", "text": "Deletes record, and removes it from database."}
{"_id": "q_11017", "text": "target record must have been set"}
{"_id": "q_11018", "text": "Shortcut method for getting a setting value.\n\n :param str name: Setting key name.\n :param default: Default value of setting if it's not explicitly\n set. Defaults to `None`\n :param bool allow_default: If true, use the parameter default as\n default if the key is not set, else raise\n :exc:`KeyError`. Defaults to `None`\n :raises: :exc:`KeyError` if allow_default is false and the setting is\n not set."}
{"_id": "q_11019", "text": "Try to get `key` from the environment.\n\n This mutates `key` to replace dots with underscores and makes it all\n uppercase.\n\n my.database.host => MY_DATABASE_HOST"}
{"_id": "q_11020", "text": "Return a setting value.\n\n :param str name: Setting key name.\n :param default: Default value of setting if it's not explicitly\n set.\n :param bool allow_default: If true, use the parameter default as\n default if the key is not set, else raise\n :exc:`LookupError`\n :raises: :exc:`LookupError` if allow_default is false and the setting is\n not set."}
{"_id": "q_11021", "text": "Return a dictionary of settings loaded from etcd."}
{"_id": "q_11022", "text": "Return a etcd watching generator which yields events as they happen."}
{"_id": "q_11023", "text": "Return hosts parsed into a tuple of tuples.\n\n :param hosts: String or list of hosts"}
{"_id": "q_11024", "text": "Handles the -m argument."}
{"_id": "q_11025", "text": "Outputs `calls`.\n\n :param calls: List of :class:`_PyconfigCall` instances\n :param args: :class:`~argparse.ArgumentParser` instance\n :type calls: list\n :type args: argparse.ArgumentParser"}
{"_id": "q_11026", "text": "Return `output` colorized with Pygments, if available."}
{"_id": "q_11027", "text": "Return this call as if it were being assigned in a pyconfig namespace.\n\n If `namespace` is specified and matches the top level of this call's\n :attr:`key`, then that section of the key will be removed."}
{"_id": "q_11028", "text": "Return this call as if it were being assigned in a pyconfig namespace,\n but load the actual value currently available in pyconfig."}
{"_id": "q_11029", "text": "Return this call as it is called in its source."}
{"_id": "q_11030", "text": "Return the call key, even if it has to be parsed from the source."}
{"_id": "q_11031", "text": "Return only the default value, if there is one."}
{"_id": "q_11032", "text": "Return the default argument, formatted nicely."}
{"_id": "q_11033", "text": "Create a regex and return it. Returns None if an error occurs."}
{"_id": "q_11034", "text": "Returns the remaining duration for a recording."}
{"_id": "q_11035", "text": "Make an HTTP request to a given URL with optional parameters."}
{"_id": "q_11036", "text": "Get available service endpoints for a given service type from the\n Opencast ServiceRegistry."}
{"_id": "q_11037", "text": "Try to create a directory. Pass without error if it already exists."}
{"_id": "q_11038", "text": "Send the state of the current recording to the Matterhorn core.\n\n :param recording_id: ID of the current recording\n :param status: Status of the recording"}
{"_id": "q_11039", "text": "Update the status of a particular event in the database."}
{"_id": "q_11040", "text": "Update the current agent state in opencast."}
{"_id": "q_11041", "text": "Find the best match for the configuration file."}
{"_id": "q_11042", "text": "Update configuration from file.\n\n :param cfgfile: Configuration file to load."}
{"_id": "q_11043", "text": "Check configuration for sanity."}
{"_id": "q_11044", "text": "Initialize logger based on configuration"}
{"_id": "q_11045", "text": "Serve the status page of the capture agent."}
{"_id": "q_11046", "text": "Serve the preview image with the given id"}
{"_id": "q_11047", "text": "Start all services."}
{"_id": "q_11048", "text": "Parse Opencast schedule iCalendar file and return events as dict"}
{"_id": "q_11049", "text": "Main loop, retrieving the schedule."}
{"_id": "q_11050", "text": "Main loop, updating the capture agent state."}
{"_id": "q_11051", "text": "Return a response with a jsonapi error object"}
{"_id": "q_11052", "text": "Serve a JSON representation of events"}
{"_id": "q_11053", "text": "Delete a specific event identified by its uid. Note that only recorded\n events can be deleted. Events in the buffer for upcoming events are\n regularly replaced anyway and a manual removal could have unpredictable\n effects.\n\n Use ?hard=true parameter to delete the recorded files on disk as well.\n\n Returns 204 if the action was successful.\n Returns 404 if event does not exist"}
{"_id": "q_11054", "text": "Modify an event specified by its uid. The modifications for the event\n are expected as JSON with the content type correctly set in the request.\n\n Note that this method works for recorded events only. Upcoming events part\n of the scheduler cache cannot be modified."}
{"_id": "q_11055", "text": "Extract the set of configuration parameters from the properties attached\n to the schedule"}
{"_id": "q_11056", "text": "Ingest a finished recording to the Opencast server."}
{"_id": "q_11057", "text": "Start the capture process, creating all necessary files and directories\n as well as ingesting the captured files if no backup mode is configured."}
{"_id": "q_11058", "text": "Returns a simple fragment"}
{"_id": "q_11059", "text": "Returns a new Fragment from a dictionary representation."}
{"_id": "q_11060", "text": "Add content to this fragment.\n\n `content` is a Unicode string, HTML to append to the body of the\n fragment. It must not contain a ``<body>`` tag, or otherwise assume\n that it is the only content on the page."}
{"_id": "q_11061", "text": "Add a resource needed by this Fragment.\n\n Other helpers, such as :func:`add_css` or :func:`add_javascript` are\n more convenient for those common types of resource.\n\n `text`: the actual text of this resource, as a unicode string.\n\n `mimetype`: the MIME type of the resource.\n\n `placement`: where on the page the resource should be placed:\n\n None: let the Fragment choose based on the MIME type.\n\n \"head\": put this resource in the ``<head>`` of the page.\n\n \"foot\": put this resource at the end of the ``<body>`` of the\n page."}
{"_id": "q_11062", "text": "Add a resource by URL needed by this Fragment.\n\n Other helpers, such as :func:`add_css_url` or\n :func:`add_javascript_url` are more convenient for those common types of\n resource.\n\n `url`: the URL to the resource.\n\n Other parameters are as defined for :func:`add_resource`."}
{"_id": "q_11063", "text": "Get some resource HTML for this Fragment.\n\n `placement` is \"head\" or \"foot\".\n\n Returns a unicode string, the HTML for the head or foot of the page."}
{"_id": "q_11064", "text": "Returns `resource` wrapped in the appropriate html tag for its mimetype."}
{"_id": "q_11065", "text": "Render a fragment to HTML or return JSON describing it, based on the request."}
{"_id": "q_11066", "text": "Render the specified fragment to HTML for a standalone page."}
{"_id": "q_11067", "text": "Construct a pylearn2 dataset.\n\n Parameters\n ----------\n X : array_like\n Training examples.\n y : array_like, optional\n Labels."}
{"_id": "q_11068", "text": "Build a trainer and run main_loop.\n\n Parameters\n ----------\n X : array_like\n Training examples.\n y : array_like, optional\n Labels."}
{"_id": "q_11069", "text": "Get model predictions.\n\n See pylearn2.scripts.mlp.predict_csv and\n http://fastml.com/how-to-get-predictions-from-pylearn2/.\n\n Parameters\n ----------\n X : array_like\n Test dataset.\n method : str\n Model method to call for prediction."}
{"_id": "q_11070", "text": "Load the dataset using pylearn2.config.yaml_parse."}
{"_id": "q_11071", "text": "Create a Config object from config dict directly."}
{"_id": "q_11072", "text": "SHA1 hash of the config file itself."}
{"_id": "q_11073", "text": "t-SNE embedding of the parameters, colored by score"}
{"_id": "q_11074", "text": "Scatter plot of score vs each param"}
{"_id": "q_11075", "text": "A floating point-valued dimension bounded `min` <= x < `max`\n\n When `warp` is None, the base measure associated with this dimension\n is a uniform distribution on [min, max). With `warp == 'log'`, the\n base measure is a uniform distribution on the log of the variable,\n with bounds at `log(min)` and `log(max)`. This is appropriate for\n variables that are \"naturally\" in log-space. Other `warp` functions\n are not supported (yet), but may be at a later time."}
{"_id": "q_11076", "text": "An enumeration-valued dimension.\n\n The base measure associated with this dimension is a categorical\n distribution with equal weight on each element in `choices`."}
{"_id": "q_11077", "text": "Decorator that produces DEBUG level log messages before and after\n calling a parser method.\n\n If a callback raises an IgnoredMatchException the log will show 'IGNORED'\n instead to indicate that the parser will not create any objects from\n the matched string.\n\n Example:\n DEBUG:poyo.parser:parse_simple <- 123: 456.789\n DEBUG:poyo.parser:parse_int <- 123\n DEBUG:poyo.parser:parse_int -> 123\n DEBUG:poyo.parser:parse_float <- 456.789\n DEBUG:poyo.parser:parse_float -> 456.789\n DEBUG:poyo.parser:parse_simple -> <Simple name: 123, value: 456.789>"}
{"_id": "q_11078", "text": "Try to find a pattern that matches the source and call a parser\n method to create Python objects.\n\n A callback that raises an IgnoredMatchException indicates that the\n given string data is ignored by the parser and no objects are created.\n\n If none of the patterns match, a NoMatchException is raised."}
{"_id": "q_11079", "text": "get stats & show them"}
{"_id": "q_11080", "text": "Diff two thrift structs and return the result as a ThriftDiff instance"}
{"_id": "q_11081", "text": "Diff two thrift messages by comparing their args, raises exceptions if\n for some reason the messages can't be diffed. Only args of type 'struct'\n are compared.\n\n Returns a list of ThriftDiff results - one for each struct arg"}
{"_id": "q_11082", "text": "Check if two thrift messages are diff ready.\n\n Returns a tuple of (boolean, reason_string), i.e. (False, reason_string)\n if the messages can not be diffed along with the reason and\n (True, None) for the opposite case"}
{"_id": "q_11083", "text": "pops packets with _at least_ nbytes of payload"}
{"_id": "q_11084", "text": "similar to pop, but returns payload + last timestamp"}
{"_id": "q_11085", "text": "Save the given user configuration."}
{"_id": "q_11086", "text": "Determine the username\n\n If a username is given, this name is used. Otherwise the configuration\n file will be consulted if `use_config` is set to True. The user is asked\n for the username if the username is not available. Then the username is\n stored in the configuration file.\n\n :param username: Username (used directly if given)\n :type username: ``str``\n\n :param use_config: Whether to read username from configuration file\n :type use_config: ``bool``\n\n :param config_filename: Path to the configuration file\n :type config_filename: ``str``"}
{"_id": "q_11087", "text": "Determine the user password\n\n If the password is given, this password is used. Otherwise\n this function will try to get the password from the user's keyring\n if `use_keyring` is set to True.\n\n :param username: Username (used directly if given)\n :type username: ``str``\n\n :param use_config: Whether to read username from configuration file\n :type use_config: ``bool``\n\n :param config_filename: Path to the configuration file\n :type config_filename: ``str``"}
{"_id": "q_11088", "text": "Retrieves a data center by its ID.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11089", "text": "Retrieves a data center by its name.\n\n Either returns the data center response or raises an Exception\n if no or more than one data center was found with the name.\n The search for the name is done in this relaxing way:\n\n - exact name match\n - case-insensitive name match\n - data center starts with the name\n - data center starts with the name (case insensitive)\n - name appears in the data center name\n - name appears in the data center name (case insensitive)\n\n :param name: The name of the data center.\n :type name: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11090", "text": "Retrieves a single firewall rule by ID.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param nic_id: The unique ID of the NIC.\n :type nic_id: ``str``\n\n :param firewall_rule_id: The unique ID of the firewall rule.\n :type firewall_rule_id: ``str``"}
{"_id": "q_11091", "text": "Removes a firewall rule from the NIC.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param nic_id: The unique ID of the NIC.\n :type nic_id: ``str``\n\n :param firewall_rule_id: The unique ID of the firewall rule.\n :type firewall_rule_id: ``str``"}
{"_id": "q_11092", "text": "Creates a firewall rule on the specified NIC and server.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param nic_id: The unique ID of the NIC.\n :type nic_id: ``str``\n\n :param firewall_rule: A firewall rule dict.\n :type firewall_rule: ``dict``"}
{"_id": "q_11093", "text": "Updates a firewall rule.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param nic_id: The unique ID of the NIC.\n :type nic_id: ``str``\n\n :param firewall_rule_id: The unique ID of the firewall rule.\n :type firewall_rule_id: ``str``"}
{"_id": "q_11094", "text": "Removes only user created images.\n\n :param image_id: The unique ID of the image.\n :type image_id: ``str``"}
{"_id": "q_11095", "text": "Replace all properties of an image."}
{"_id": "q_11096", "text": "Reserves an IP block within your account."}
{"_id": "q_11097", "text": "Retrieves a single LAN by ID.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param lan_id: The unique ID of the LAN.\n :type lan_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11098", "text": "Retrieves a list of LANs available in the account.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11099", "text": "Removes a LAN from the data center.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param lan_id: The unique ID of the LAN.\n :type lan_id: ``str``"}
{"_id": "q_11100", "text": "Creates a LAN in the data center.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param lan: The LAN object to be created.\n :type lan: ``dict``"}
{"_id": "q_11101", "text": "Retrieves the list of NICs that are part of the LAN.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param lan_id: The unique ID of the LAN.\n :type lan_id: ``str``"}
{"_id": "q_11102", "text": "Retrieves a single load balancer by ID.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param loadbalancer_id: The unique ID of the load balancer.\n :type loadbalancer_id: ``str``"}
{"_id": "q_11103", "text": "Removes the load balancer from the data center.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param loadbalancer_id: The unique ID of the load balancer.\n :type loadbalancer_id: ``str``"}
{"_id": "q_11104", "text": "Creates a load balancer within the specified data center.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param loadbalancer: The load balancer object to be created.\n :type loadbalancer: ``dict``"}
{"_id": "q_11105", "text": "Updates a load balancer\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param loadbalancer_id: The unique ID of the load balancer.\n :type loadbalancer_id: ``str``"}
{"_id": "q_11106", "text": "Associates a NIC with the given load balancer.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param loadbalancer_id: The unique ID of the load balancer.\n :type loadbalancer_id: ``str``\n\n :param nic_id: The ID of the NIC.\n :type nic_id: ``str``"}
{"_id": "q_11107", "text": "Gets the properties of a load balanced NIC.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param loadbalancer_id: The unique ID of the load balancer.\n :type loadbalancer_id: ``str``\n\n :param nic_id: The unique ID of the NIC.\n :type nic_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11108", "text": "Retrieves a NIC by its ID.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param nic_id: The unique ID of the NIC.\n :type nic_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11109", "text": "Creates a NIC on the specified server.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param nic: A NIC dict.\n :type nic: ``dict``"}
{"_id": "q_11110", "text": "Retrieves a single request by ID.\n\n :param request_id: The unique ID of the request.\n :type request_id: ``str``\n\n :param status: Retrieve the full status of the request.\n :type status: ``bool``"}
{"_id": "q_11111", "text": "Retrieves a server by its ID.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11112", "text": "Removes the server from your data center.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``"}
{"_id": "q_11113", "text": "Creates a server within the data center.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server: A dict of the server to be created.\n :type server: ``dict``"}
{"_id": "q_11114", "text": "Updates a server with the parameters provided.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``"}
{"_id": "q_11115", "text": "Retrieves a list of volumes attached to the server.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11116", "text": "Retrieves volume information.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param volume_id: The unique ID of the volume.\n :type volume_id: ``str``"}
{"_id": "q_11117", "text": "Retrieves a list of CDROMs attached to the server.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11118", "text": "Retrieves an attached CDROM.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``\n\n :param cdrom_id: The unique ID of the CDROM.\n :type cdrom_id: ``str``"}
{"_id": "q_11119", "text": "Starts the server.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``"}
{"_id": "q_11120", "text": "Stops the server.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param server_id: The unique ID of the server.\n :type server_id: ``str``"}
{"_id": "q_11121", "text": "Creates a snapshot of the specified volume.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param volume_id: The unique ID of the volume.\n :type volume_id: ``str``\n\n :param name: The name given to the volume.\n :type name: ``str``\n\n :param description: The description given to the volume.\n :type description: ``str``"}
{"_id": "q_11122", "text": "Restores a snapshot to the specified volume.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param volume_id: The unique ID of the volume.\n :type volume_id: ``str``\n\n :param snapshot_id: The unique ID of the snapshot.\n :type snapshot_id: ``str``"}
{"_id": "q_11123", "text": "Removes a snapshot.\n\n :param snapshot_id: The ID of the snapshot\n you wish to remove.\n :type snapshot_id: ``str``"}
{"_id": "q_11124", "text": "Retrieves a single group by ID.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11125", "text": "Creates a new group and set group privileges.\n\n :param group: The group object to be created.\n :type group: ``dict``"}
{"_id": "q_11126", "text": "Updates a group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``"}
{"_id": "q_11127", "text": "Removes a group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``"}
{"_id": "q_11128", "text": "Retrieves a list of all shares though a group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11129", "text": "Retrieves a specific resource share available to a group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``\n\n :param resource_id: The unique ID of the resource.\n :type resource_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11130", "text": "Shares a resource through a group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``\n\n :param resource_id: The unique ID of the resource.\n :type resource_id: ``str``"}
{"_id": "q_11131", "text": "Removes a resource share from a group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``\n\n :param resource_id: The unique ID of the resource.\n :type resource_id: ``str``"}
{"_id": "q_11132", "text": "Retrieves a single user by ID.\n\n :param user_id: The unique ID of the user.\n :type user_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11133", "text": "Creates a new user.\n\n :param user: The user object to be created.\n :type user: ``dict``"}
{"_id": "q_11134", "text": "Removes a user.\n\n :param user_id: The unique ID of the user.\n :type user_id: ``str``"}
{"_id": "q_11135", "text": "Retrieves a list of all users that are members of a particular group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11136", "text": "Adds an existing user to a group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``\n\n :param user_id: The unique ID of the user.\n :type user_id: ``str``"}
{"_id": "q_11137", "text": "Removes a user from a group.\n\n :param group_id: The unique ID of the group.\n :type group_id: ``str``\n\n :param user_id: The unique ID of the user.\n :type user_id: ``str``"}
{"_id": "q_11138", "text": "Retrieves a list of all resources.\n\n :param resource_type: The resource type: datacenter, image,\n snapshot or ipblock. Default is None,\n i.e., all resources are listed.\n :type resource_type: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11139", "text": "Retrieves a single resource of a particular type.\n\n :param resource_type: The resource type: datacenter, image,\n snapshot or ipblock.\n :type resource_type: ``str``\n\n :param resource_id: The unique ID of the resource.\n :type resource_id: ``str``\n\n :param depth: The depth of the response data.\n :type depth: ``int``"}
{"_id": "q_11140", "text": "Retrieves a single volume by ID.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param volume_id: The unique ID of the volume.\n :type volume_id: ``str``"}
{"_id": "q_11141", "text": "Creates a volume within the specified data center.\n\n :param datacenter_id: The unique ID of the data center.\n :type datacenter_id: ``str``\n\n :param volume: A volume dict.\n :type volume: ``dict``"}
{"_id": "q_11142", "text": "Poll resource request status until resource is provisioned.\n\n :param response: A response dict, which needs to have a 'requestId' item.\n :type response: ``dict``\n\n :param timeout: Maximum waiting time in seconds. None means infinite waiting time.\n :type timeout: ``int``\n\n :param initial_wait: Initial polling interval in seconds.\n :type initial_wait: ``int``\n\n :param scaleup: Double the polling interval every `scaleup` steps.\n :type scaleup: ``int``"}
{"_id": "q_11143", "text": "Convert Python snake case back to mixed case."}
{"_id": "q_11144", "text": "Find an item in a given list by a matching name.\n\n The search for the name is done in this relaxing way:\n\n - exact name match\n - case-insensitive name match\n - attribute starts with the name\n - attribute starts with the name (case insensitive)\n - name appears in the attribute\n - name appears in the attribute (case insensitive)\n\n :param list_: A list of elements\n :type list_: ``list``\n\n :param namegetter: Function that returns the name for a given\n element in the list\n :type namegetter: ``function``\n\n :param name: Name to search for\n :type name: ``str``"}
{"_id": "q_11145", "text": "gets info of servers of a data center"}
{"_id": "q_11146", "text": "gets states of a server"}
{"_id": "q_11147", "text": "Get the currently authenticated user ID"}
{"_id": "q_11148", "text": "Add a list of jobs to the currently authenticated user"}
{"_id": "q_11149", "text": "Remove a list of jobs from the currently authenticated user"}
{"_id": "q_11150", "text": "Get one or more users"}
{"_id": "q_11151", "text": "Create a fixed project"}
{"_id": "q_11152", "text": "Get one or more projects"}
{"_id": "q_11153", "text": "Get a single project by ID"}
{"_id": "q_11154", "text": "Search for all projects"}
{"_id": "q_11155", "text": "Place a bid on a project"}
{"_id": "q_11156", "text": "Get a specific milestone"}
{"_id": "q_11157", "text": "Accept a bid on a project"}
{"_id": "q_11158", "text": "Retract a bid on a project"}
{"_id": "q_11159", "text": "Highlight a bid on a project"}
{"_id": "q_11160", "text": "Create a milestone payment"}
{"_id": "q_11161", "text": "Start tracking a project by creating a track"}
{"_id": "q_11162", "text": "Updates the current location by creating a new track point and appending\n it to the given track"}
{"_id": "q_11163", "text": "Gets a specific track"}
{"_id": "q_11164", "text": "Create a milestone request"}
{"_id": "q_11165", "text": "Accept a milestone request"}
{"_id": "q_11166", "text": "Reject a milestone request"}
{"_id": "q_11167", "text": "Get a list of jobs"}
{"_id": "q_11168", "text": "Create a thread"}
{"_id": "q_11169", "text": "Create a project thread"}
{"_id": "q_11170", "text": "Get one or more messages"}
{"_id": "q_11171", "text": "Get one or more threads"}
{"_id": "q_11172", "text": "meaning pvalues presorted in descending order"}
{"_id": "q_11173", "text": "Compute posterior probabilities for each chromatogram\n\n For each chromatogram (each group_id / peptide precursor), all hypotheses of all peaks\n being correct (and all others false) as well as the h0 (all peaks are\n false) are computed.\n\n The prior probabilities are given in the function arguments\n\n This assumes that the input data is sorted by tg_num_id\n\n Args:\n experiment(:class:`data_handling.Multipeptide`): the data of one experiment\n prior_chrom_null(float): the prior probability that any precursor\n is absent (all peaks are false)\n\n Returns:\n tuple(hypothesis, h0): two vectors that contain for each entry in\n the input dataframe the probabilities for the hypothesis that the\n peak is correct and the probability for the h0"}
{"_id": "q_11174", "text": "Finds cut off target score for specified false discovery rate fdr"}
{"_id": "q_11175", "text": "Conduct semi-supervised learning and error-rate estimation for MS1, MS2 and transition-level data."}
{"_id": "q_11176", "text": "Infer peptidoforms after scoring of MS1, MS2 and transition-level data."}
{"_id": "q_11177", "text": "Infer peptides and conduct error-rate estimation in different contexts."}
{"_id": "q_11178", "text": "Infer proteins and conduct error-rate estimation in different contexts."}
{"_id": "q_11179", "text": "Subsample OpenSWATH file to minimum for integrated scoring"}
{"_id": "q_11180", "text": "Reduce scored PyProphet file to minimum for global scoring"}
{"_id": "q_11181", "text": "Assumes zipcode is of type `str`"}
{"_id": "q_11182", "text": "List of zipcode dicts where zipcode prefix matches `partial_zipcode`"}
{"_id": "q_11183", "text": "Returns a restclients.Group object for the group identified by the\n passed group ID."}
{"_id": "q_11184", "text": "Creates a group from the passed restclients.Group object."}
{"_id": "q_11185", "text": "Deletes the group identified by the passed group ID."}
{"_id": "q_11186", "text": "Returns a list of restclients.GroupMember objects for the group\n identified by the passed group ID."}
{"_id": "q_11187", "text": "Updates the membership of the group represented by the passed group id.\n Returns a list of members not found."}
{"_id": "q_11188", "text": "Returns True if the netid is in the group, False otherwise."}
{"_id": "q_11189", "text": "Create 3 datasets in a group to represent the sparse array.\n\n Parameters\n ----------\n sparse_format:"}
{"_id": "q_11190", "text": "Pedantic yet imperfect. Test to see if \"name\" is a valid python identifier"}
{"_id": "q_11191", "text": "Build a palette definition from either a simple string or a dictionary,\n filling in defaults for items not specified.\n\n e.g.:\n \"dark green\"\n dark green foreground, black background\n\n {lo: dark gray, hi: \"#666\"}\n dark gray on 16-color terminals, #666 for 256+ color"}
{"_id": "q_11192", "text": "Decrypts context.io_manager's stdin and sends that to\n context.io_manager's stdout.\n\n See :py:mod:`swiftly.cli.decrypt` for context usage information.\n\n See :py:class:`CLIDecrypt` for more information."}
{"_id": "q_11193", "text": "Returns a stdout-suitable file-like object based on the\n optional os_path and optionally skipping any configured\n sub-command."}
{"_id": "q_11194", "text": "Returns a debug-output-suitable file-like object based on the\n optional os_path and optionally skipping any configured\n sub-command."}
{"_id": "q_11195", "text": "A context manager yielding a stdin-suitable file-like object\n based on the optional os_path and optionally skipping any\n configured sub-command.\n\n :param os_path: Optional path to base the file-like object\n on.\n :param skip_sub_command: Set True to skip any configured\n sub-command filter.\n :param disk_closed_callback: If the backing of the file-like\n object is an actual file that will be closed,\n disk_closed_callback (if set) will be called with the\n on-disk path just after closing it."}
{"_id": "q_11196", "text": "A context manager yielding a stdout-suitable file-like object\n based on the optional os_path and optionally skipping any\n configured sub-command.\n\n :param os_path: Optional path to base the file-like object\n on.\n :param skip_sub_command: Set True to skip any configured\n sub-command filter.\n :param disk_closed_callback: If the backing of the file-like\n object is an actual file that will be closed,\n disk_closed_callback (if set) will be called with the\n on-disk path just after closing it."}
{"_id": "q_11197", "text": "A context manager yielding a debug-output-suitable file-like\n object based on the optional os_path and optionally skipping\n any configured sub-command.\n\n :param os_path: Optional path to base the file-like object\n on.\n :param skip_sub_command: Set True to skip any configured\n sub-command filter.\n :param disk_closed_callback: If the backing of the file-like\n object is an actual file that will be closed,\n disk_closed_callback (if set) will be called with the\n on-disk path just after closing it."}
{"_id": "q_11198", "text": "Deletes all objects in the container.\n\n By default, this will perform one pass at deleting all objects in\n the container; so if objects revert to previous versions or if new\n objects otherwise arise during the process, the container may not be\n empty once done.\n\n Set `until_empty` to True if you want multiple passes to keep trying\n to fully empty the container. Note until_empty=True could run\n forever if something else is making new objects faster than they're\n being deleted.\n\n See :py:mod:`swiftly.cli.delete` for context usage information.\n\n See :py:class:`CLIDelete` for more information."}
{"_id": "q_11199", "text": "Instance method decorator to convert an optional file keyword\n argument into an actual value, whether it be a passed value, a\n value obtained from an io_manager, or sys.stderr."}
{"_id": "q_11200", "text": "Outputs the error msg to the file if specified, or to the\n io_manager's stderr if available, or to sys.stderr."}
{"_id": "q_11201", "text": "Outputs help information to the file if specified, or to the\n io_manager's stdout if available, or to sys.stdout."}
{"_id": "q_11202", "text": "Outputs version information to the file if specified, or to\n the io_manager's stdout if available, or to sys.stdout."}
{"_id": "q_11203", "text": "Performs a direct HTTP request to the Swift service.\n\n :param method: The request method ('GET', 'HEAD', etc.)\n :param path: The request path.\n :param contents: The body of the request. May be a string or\n a file-like object.\n :param headers: A dict of request headers and values.\n :param decode_json: If set True, the response body will be\n treated as JSON and decoded result returned instead of\n the raw contents.\n :param stream: If set True, the response body will return as\n a file-like object; otherwise, the response body will be\n read in its entirety and returned as a string. Overrides\n decode_json.\n :param query: A dict of query parameters and values to append\n to the path.\n :param cdn: If set True, the request will be sent to the CDN\n management endpoint instead of the default storage\n endpoint.\n :returns: A tuple of (status, reason, headers, contents).\n\n :status: An int for the HTTP status code.\n :reason: The str for the HTTP status (ex: \"Ok\").\n :headers: A dict with all lowercase keys of the HTTP\n headers; if a header has multiple values, it will be\n a list.\n :contents: Depending on the decode_json and stream\n settings, this will either be the raw response\n string, the JSON decoded object, or a file-like\n object."}
{"_id": "q_11204", "text": "POSTs the account and returns the results. This is usually\n done to set X-Account-Meta-xxx headers. Note that any existing\n X-Account-Meta-xxx headers will remain untouched. To remove an\n X-Account-Meta-xxx header, send the header with an empty\n string as its value.\n\n :param headers: Additional headers to send with the request.\n :param query: Set to a dict of query values to send on the\n query string of the request.\n :param cdn: If set True, the CDN management interface will be\n used.\n :param body: No known Swift POSTs take a body; but the option\n is there for the future.\n :returns: A tuple of (status, reason, headers, contents).\n\n :status: is an int for the HTTP status code.\n :reason: is the str for the HTTP status (ex: \"Ok\").\n :headers: is a dict with all lowercase keys of the HTTP\n headers; if a header has multiple values, it will be a\n list.\n :contents: is the str for the HTTP body."}
{"_id": "q_11205", "text": "Sends a DELETE request to the account and returns the results.\n\n With ``query['bulk-delete'] = ''`` this might mean a bulk\n delete request where the body of the request is a new-line\n separated, url-encoded list of names to delete. Be careful\n with this! One wrong move and you might mark your account for\n deletion if you have the access to do so!\n\n For a plain DELETE to the account, on clusters that support\n it, and assuming you have permissions to do so, the account\n will be marked as deleted and immediately begin removing the\n objects from the cluster in the background.\n\n THERE IS NO GOING BACK!\n\n :param headers: Additional headers to send with the request.\n :param yes_i_mean_delete_the_account: Set to True to verify\n you really mean to delete the entire account. This is\n required unless ``body and 'bulk-delete' in query``.\n :param query: Set to a dict of query values to send on the\n query string of the request.\n :param cdn: If set True, the CDN management interface will be\n used.\n :param body: Some account DELETE requests, like the bulk\n delete request, take a body.\n :returns: A tuple of (status, reason, headers, contents).\n\n :status: is an int for the HTTP status code.\n :reason: is the str for the HTTP status (ex: \"Ok\").\n :headers: is a dict with all lowercase keys of the HTTP\n headers; if a header has multiple values, it will be\n a list.\n :contents: is the str for the HTTP body."}
{"_id": "q_11206", "text": "POSTs the object and returns the results. This is used to\n update the object's header values. Note that all headers must\n be sent with the POST, unlike the account and container POSTs.\n With account and container POSTs, existing headers are\n untouched. But with object POSTs, any existing headers are\n removed. The full list of supported headers depends on the\n Swift cluster, but usually include Content-Type,\n Content-Encoding, and any X-Object-Meta-xxx headers.\n\n :param container: The name of the container.\n :param obj: The name of the object.\n :param headers: Additional headers to send with the request.\n :param query: Set to a dict of query values to send on the\n query string of the request.\n :param cdn: If set True, the CDN management interface will be\n used.\n :param body: No known Swift POSTs take a body; but the option\n is there for the future.\n :returns: A tuple of (status, reason, headers, contents).\n\n :status: is an int for the HTTP status code.\n :reason: is the str for the HTTP status (ex: \"Ok\").\n :headers: is a dict with all lowercase keys of the HTTP\n headers; if a header has multiple values, it will be a\n list.\n :contents: is the str for the HTTP body."}
{"_id": "q_11207", "text": "Returns a new CLIContext instance that is a shallow copy of\n the original, much like dict's copy method."}
{"_id": "q_11208", "text": "Convenience function to output headers in a formatted fashion\n to a file-like fp, optionally muting any headers in the mute\n list."}
{"_id": "q_11209", "text": "Authenticates and then outputs the resulting information.\n\n See :py:mod:`swiftly.cli.auth` for context usage information.\n\n See :py:class:`CLIAuth` for more information."}
{"_id": "q_11210", "text": "Returns a TempURL good for the given request method, url, and\n number of seconds from now, signed by the given key."}
{"_id": "q_11211", "text": "Issues commands for each item in an account or container listing.\n\n See :py:mod:`swiftly.cli.fordo` for context usage information.\n\n See :py:class:`CLIForDo` for more information."}
{"_id": "q_11212", "text": "Obtains a client for use, whether an existing unused client\n or a brand new one if none are available."}
{"_id": "q_11213", "text": "Generator that decrypts a content stream using AES 256 in CBC\n mode.\n\n :param key: Any string to use as the decryption key.\n :param stdin: Where to read the encrypted data from.\n :param chunk_size: Largest amount to read at once."}
{"_id": "q_11214", "text": "Performs PUTs rooted at the path using a directory structure\n pointed to by context.input\\_.\n\n See :py:mod:`swiftly.cli.put` for context usage information.\n\n See :py:class:`CLIPut` for more information."}
{"_id": "q_11215", "text": "Performs a PUT on the account.\n\n See :py:mod:`swiftly.cli.put` for context usage information.\n\n See :py:class:`CLIPut` for more information."}
{"_id": "q_11216", "text": "Performs a PUT on the container.\n\n See :py:mod:`swiftly.cli.put` for context usage information.\n\n See :py:class:`CLIPut` for more information."}
{"_id": "q_11217", "text": "Returns body for manifest file and modifies put_headers.\n\n path2info is a dict like {\"path\": (size, etag)}"}
{"_id": "q_11218", "text": "Creates container for segments of file with `path`"}
{"_id": "q_11219", "text": "Generates a TempURL and sends that to the context.io_manager's\n stdout.\n\n See :py:mod:`swiftly.cli.tempurl` for context usage information.\n\n See :py:class:`CLITempURL` for more information.\n\n :param context: The :py:class:`swiftly.cli.context.CLIContext` to\n use.\n :param method: The method for the TempURL (GET, PUT, etc.)\n :param path: The path the TempURL should direct to.\n :param seconds: The number of seconds the TempURL should be good\n for. Default: 3600\n :param use_container: If True, will create a container level TempURL\n using X-Container-Meta-Temp-Url-Key instead of\n X-Account-Meta-Temp-Url-Key."}
{"_id": "q_11220", "text": "Translates any information that can be determined from the\n x_trans_id and sends that to the context.io_manager's stdout.\n\n See :py:mod:`swiftly.cli.trans` for context usage information.\n\n See :py:class:`CLITrans` for more information."}
{"_id": "q_11221", "text": "Outputs help information.\n\n See :py:mod:`swiftly.cli.help` for context usage information.\n\n See :py:class:`CLIHelp` for more information.\n\n :param context: The :py:class:`swiftly.cli.context.CLIContext` to\n use.\n :param command_name: The command_name to output help information\n for, or set to None or an empty string to output the general\n help information.\n :param general_parser: The\n :py:class:`swiftly.cli.optionparser.OptionParser` for general\n usage.\n :param command_parsers: A dict of (name, :py:class:`CLICommand`)\n for specific command usage."}
{"_id": "q_11222", "text": "Last 30 pull requests from a repository.\n\n :param app: Flask app\n :param repo_config: dict with ``github_repo`` key\n\n :returns: id for a pull request"}
{"_id": "q_11223", "text": "Migrate all keys in a source stash to a destination stash\n\n The migration process will decrypt all keys using the source\n stash's passphrase and then encrypt them based on the destination\n stash's passphrase.\n\n Re-encryption will take place only if the passphrases differ"}
{"_id": "q_11224", "text": "Return a generated string `size` long based on lowercase, uppercase,\n and digit chars"}
{"_id": "q_11225", "text": "Return a dict from a list of key=value pairs"}
{"_id": "q_11226", "text": "r\"\"\"Init a stash\n\n `STASH_PATH` is the path to the storage endpoint. If this isn't supplied,\n a default path will be used. In the path, you can specify a name\n for the stash (which, if omitted, will default to `ghost`) like so:\n `ghost init http://10.10.1.1:8500;stash1`.\n\n After initializing a stash, don't forget you can set environment\n variables for both your stash's path and its passphrase.\n On Linux/OSx you can run:\n\n export GHOST_STASH_PATH='http://10.10.1.1:8500;stash1'\n\n export GHOST_PASSPHRASE=$(cat passphrase.ghost)\n\n export GHOST_BACKEND='tinydb'"}
{"_id": "q_11227", "text": "Insert a key to the stash\n\n `KEY_NAME` is the name of the key to insert\n\n `VALUE` is a key=value argument which can be provided multiple times.\n it is the encrypted value of your key"}
{"_id": "q_11228", "text": "Lock a key to prevent it from being deleted, purged or modified\n\n `KEY_NAME` is the name of the key to lock"}
{"_id": "q_11229", "text": "Delete a key from the stash\n\n `KEY_NAME` is the name of the key to delete\n You can provide that multiple times to delete multiple keys at once"}
{"_id": "q_11230", "text": "List all keys in the stash\n\n If `KEY_NAME` is provided, will look for keys containing `KEY_NAME`.\n If `KEY_NAME` starts with `~`, close matches will be provided according\n to `max_suggestions` and `cutoff`."}
{"_id": "q_11231", "text": "Purge the stash of all of its keys"}
{"_id": "q_11232", "text": "Load all keys from an exported key file to the stash\n\n `KEY_FILE` is the exported stash file to load keys from"}
{"_id": "q_11233", "text": "Return a list of all keys."}
{"_id": "q_11234", "text": "Delete a key if it exists."}
{"_id": "q_11235", "text": "Export all keys in the stash to a list or a file"}
{"_id": "q_11236", "text": "Turn a JSON serializable value into a jsonified, encrypted,\n hex string."}
{"_id": "q_11237", "text": "The exact opposite of _encrypt"}
{"_id": "q_11238", "text": "Delete the key and return true if the key was deleted, else false"}
{"_id": "q_11239", "text": "Return a dictionary representing a key from a list of columns\n and a tuple of values"}
{"_id": "q_11240", "text": "Put and return the only unique identifier possible, its url"}
{"_id": "q_11241", "text": "Create an Elasticsearch index if necessary"}
{"_id": "q_11242", "text": "Write your forwards methods here."}
{"_id": "q_11243", "text": "Returns the published slider items."}
{"_id": "q_11244", "text": "Renders the hero slider."}
{"_id": "q_11245", "text": "Release the lock after reading"}
{"_id": "q_11246", "text": "Acquire the lock to write"}
{"_id": "q_11247", "text": "Remove a task from the registry.\n\n To remove it, pass its identifier with the `task_id` parameter.\n When the identifier is not found, a `NotFoundError` exception\n is raised.\n\n :param task_id: identifier of the task to remove\n\n :raises NotFoundError: raised when the given task identifier\n is not found on the registry"}
{"_id": "q_11248", "text": "Get a task from the registry.\n\n Retrieve a task from the registry using its task identifier. When\n the task does not exist, a `NotFoundError` exception will be\n raised.\n\n :param task_id: task identifier\n\n :returns: a task object\n\n :raises NotFoundError: raised when the requested task is not\n found on the registry"}
{"_id": "q_11249", "text": "Get the list of tasks"}
{"_id": "q_11250", "text": "Returns a dict with the representation of this task configuration object."}
{"_id": "q_11251", "text": "Create a configuration object from a dictionary.\n\n Key-value pairs will be used to initialize a task configuration\n object. If 'config' contains invalid configuration parameters\n a `ValueError` exception will be raised.\n\n :param config: dictionary used to create an instance of this object\n\n :returns: a task config instance\n\n :raises ValueError: when an invalid configuration parameter is found"}
{"_id": "q_11252", "text": "Run the backend with the given parameters.\n\n The method will run the backend assigned to this job,\n storing the fetched items in a Redis queue. The ongoing\n status of the job can be accessed through the property\n `result`. When `resume` is set, the job will start from\n the last execution, overwriting 'from_date' and 'offset'\n parameters, if needed.\n\n Setting to `True` the parameter `fetch_from_archive`, items can\n be fetched from the archive assigned to this job.\n\n Any exception during the execution of the process will\n be raised.\n\n :param backend_args: parameters used to run the backend\n :param archive_args: archive arguments\n :param resume: fetch items starting where the last\n execution stopped"}
{"_id": "q_11253", "text": "Execute a backend of Perceval.\n\n Run the backend of Perceval assigned to this job using the\n given arguments. It will raise an `AttributeError` when any of\n the required parameters to run the backend are not found.\n Other exceptions related to the execution of the backend\n will be raised too.\n\n This method will return an iterator of the items fetched\n by the backend. These items will include some metadata\n related to this job.\n\n It will also be possible to retrieve the items from the\n archive setting to `True` the parameter `fetch_from_archive`.\n\n :param backend_args: arguments to execute the backend\n :param archive_args: archive arguments\n\n :returns: iterator of items fetched by the backend\n\n :raises AttributeError: raised when any of the required\n parameters is not found"}
{"_id": "q_11254", "text": "Create a mapping"}
{"_id": "q_11255", "text": "Custom JSON encoder handler"}
{"_id": "q_11256", "text": "Write items to the queue\n\n :param writer: the writer object\n :param items_generator: items to be written in the queue"}
{"_id": "q_11257", "text": "Add and schedule a task.\n\n :param task_id: id of the task\n :param backend: name of the backend\n :param category: category of the items to fetch\n :param backend_args: args needed to initialize the backend\n :param archive_args: args needed to initialize the archive\n :param sched_args: scheduling args for this task\n\n :returns: the task created"}
{"_id": "q_11258", "text": "Remove and cancel a task.\n\n :param task_id: id of the task to be removed"}
{"_id": "q_11259", "text": "Get the items fetched by the jobs."}
{"_id": "q_11260", "text": "Parse the archive arguments of a task"}
{"_id": "q_11261", "text": "Cancel the job related to the given task."}
{"_id": "q_11262", "text": "Listen for completed jobs and reschedule successful ones."}
{"_id": "q_11263", "text": "Schedule a task.\n\n :param task_id: identifier of the task to schedule\n\n :raises NotFoundError: raised when the requested task is not\n found in the registry"}
{"_id": "q_11264", "text": "Handle successful jobs"}
{"_id": "q_11265", "text": "Handle failed jobs"}
{"_id": "q_11266", "text": "Build the set of arguments required for running a job"}
{"_id": "q_11267", "text": "Register a generic class based view wrapped with ModelAdmin and fake model\n\n :param view: The AdminView to register.\n :param admin_site: The AdminSite to register the view on.\n Defaults to bananas.admin.ExtendedAdminSite.\n :param admin_class: The ModelAdmin class to use for e.g. permissions.\n Defaults to bananas.admin.ModelAdminView.\n\n Example:\n\n @register # Or with args @register(admin_class=MyModelAdminSubclass)\n class MyAdminView(bananas.admin.AdminView):\n def get(self, request):\n return self.render('template.html', {})\n\n # Also possible:\n register(MyAdminView, admin_class=MyModelAdminSubclass)"}
{"_id": "q_11268", "text": "Extended DRF with fallback to requested namespace if request.version is missing"}
{"_id": "q_11269", "text": "Get or generate human readable view name.\n Extended version from DRF to support usage from both class and instance."}
{"_id": "q_11270", "text": "Get engine or raise exception, resolves Alias-instances to a sibling target.\n\n :param cursor: The object so search in\n :param key: The key to get\n :return: The object found"}
{"_id": "q_11271", "text": "Get database name and database schema from path.\n\n :param path: \"/\"-delimited path, parsed as\n \"/<database name>/<database schema>\"\n :return: tuple with (database or None, schema or None)"}
{"_id": "q_11272", "text": "Parse a database URL and return a DatabaseInfo named tuple.\n\n :param url: Database URL\n :return: DatabaseInfo instance\n\n Example:\n >>> conf = parse_database_url(\n ... 'pgsql://joar:hunter2@5monkeys.se:4242/tweets/tweetschema'\n ... '?hello=world')\n >>> conf # doctest: +NORMALIZE_WHITESPACE\n DatabaseInfo(engine='django.db.backends.postgresql_psycopg2',\n name='tweets',\n schema='tweetschema',\n user='joar',\n password='hunter2',\n host='5monkeys.se',\n port=4242,\n params={'hello': 'world'})"}
{"_id": "q_11273", "text": "Log in django staff user"}
{"_id": "q_11274", "text": "Retrieve logged in user info"}
{"_id": "q_11275", "text": "This is needed because DRF's model serializer uses the queryset to build the url name\n\n # TODO: Move this to own serializer mixin or fix problem elsewhere?"}
{"_id": "q_11276", "text": "Parse numeric string to int. Supports oct formatted string.\n\n :param str value: String value to parse as int\n :return int:"}
{"_id": "q_11277", "text": "Return appropriate parser for given type.\n\n :param typ: Type to get parser for.\n :return function: Parser"}
{"_id": "q_11278", "text": "Implementation of Y64 non-standard URL-safe base64 variant.\n\n See http://en.wikipedia.org/wiki/Base64#Variants_summary_table\n\n :return: base64-encoded result with substituted\n ``{\"+\", \"/\", \"=\"} => {\".\", \"_\", \"-\"}``."}
{"_id": "q_11279", "text": "will wait for exp to be returned from nodemcu or time out"}
{"_id": "q_11280", "text": "write data on the nodemcu port. If 'binary' is True the debug log\n will show the intended output as hex, otherwise as string"}
{"_id": "q_11281", "text": "restores the nodemcu to default baudrate and then closes the port"}
{"_id": "q_11282", "text": "reading data from device into local file"}
{"_id": "q_11283", "text": "sends a file to the device using the transfer protocol"}
{"_id": "q_11284", "text": "Tries to verify if path has same checksum as destination.\n Valid options for verify are 'raw', 'sha1' or 'none'"}
{"_id": "q_11285", "text": "execute the lines in the local file 'path'"}
{"_id": "q_11286", "text": "Returns true if ACK is received"}
{"_id": "q_11287", "text": "Read a chunk of data"}
{"_id": "q_11288", "text": "list files on the device"}
{"_id": "q_11289", "text": "Execute a file on the device using 'do'"}
{"_id": "q_11290", "text": "Formats device filesystem"}
{"_id": "q_11291", "text": "Compiles a file specified by path on the device"}
{"_id": "q_11292", "text": "Removes a file on the device"}
{"_id": "q_11293", "text": "Backup all files from the device"}
{"_id": "q_11294", "text": "The upload operation"}
{"_id": "q_11295", "text": "The download operation"}
{"_id": "q_11296", "text": "List file on target"}
{"_id": "q_11297", "text": "Generates a Cartesian product of the input parameter dictionary.\n\n For example:\n\n >>> print cartesian_product({'param1':[1,2,3], 'param2':[42.0, 52.5]})\n {'param1':[1,1,2,2,3,3],'param2': [42.0,52.5,42.0,52.5,42.0,52.5]}\n\n :param parameter_dict:\n\n Dictionary containing parameter names as keys and iterables of data to explore.\n\n :param combined_parameters:\n\n Tuple of tuples. Defines the order of the parameters and parameters that are\n linked together.\n If an inner tuple contains only a single item, you can omit the\n inner tuple brackets.\n\n\n For example:\n\n >>> print cartesian_product( {'param1': [42.0, 52.5], 'param2':['a', 'b'], 'param3' : [1,2,3]}, ('param3',('param1', 'param2')))\n {'param3':[1,1,2,2,3,3],'param1' : [42.0,52.5,42.0,52.5,42.0,52.5], 'param2':['a','b','a','b','a','b']}\n\n :returns: Dictionary with cartesian product lists."}
{"_id": "q_11298", "text": "Takes a list of explored parameters and finds unique parameter combinations.\n\n If parameter ranges are hashable operates in O(N), otherwise O(N**2).\n\n :param explored_parameters:\n\n List of **explored** parameters\n\n :return:\n\n List of tuples, first entry being the parameter values, second entry a list\n containing the run position of the unique combination."}
{"_id": "q_11299", "text": "Helper function to turn the simple logging kwargs into a `log_config`."}
{"_id": "q_11300", "text": "Tries to make directories for a given `filename`.\n\n Ignores any error but notifies via stderr."}
{"_id": "q_11301", "text": "Renames a given `filename` with valid wildcard placements.\n\n :const:`~pypet.pypetconstants.LOG_ENV` ($env) is replaced by the name of the\n trajectory's environment.\n\n :const:`~pypet.pypetconstants.LOG_TRAJ` ($traj) is replaced by the name of the\n trajectory.\n\n :const:`~pypet.pypetconstants.LOG_RUN` ($run) is replaced by the name of the current\n run. If the trajectory is not set to a run 'run_ALL' is used.\n\n :const:`~pypet.pypetconstants.LOG_SET` ($set) is replaced by the name of the current\n run set. If the trajectory is not set to a run 'run_set_ALL' is used.\n\n :const:`~pypet.pypetconstants.LOG_PROC` ($proc) is replaced by the name of the\n current process.\n\n :const:`~pypet.pypetconstants.LOG_HOST` ($host) is replaced by the name of the current host.\n\n :param filename: A filename string\n :param traj: A trajectory container, leave `None` if you provide all the parameters below\n :param env_name: Name of environment, leave `None` to get it from `traj`\n :param traj_name: Name of trajectory, leave `None` to get it from `traj`\n :param set_name: Name of run set, leave `None` to get it from `traj`\n :param run_name: Name of run, leave `None` to get it from `traj`\n :param process_name:\n\n The name of the desired process. If `None` the name of the current process is\n taken, as determined by the multiprocessing module.\n\n :param host_name:\n\n Name of host, leave `None` to determine it automatically with the platform module.\n\n :return: The new filename"}
{"_id": "q_11302", "text": "Displays a progressbar"}
{"_id": "q_11303", "text": "Searches for parser settings that define filenames.\n\n If such settings are found, they are renamed according to the wildcard\n rules. Moreover, it is also tried to create the corresponding folders.\n\n :param parser: A config parser\n :param section: A config section\n :param option: The section option\n :param rename_func: A function to rename found files\n :param make_dirs: If the directories of the file should be created."}
{"_id": "q_11304", "text": "Turns a ConfigParser into a StringIO stream."}
{"_id": "q_11305", "text": "Searches for multiprocessing options within a ConfigParser.\n\n If such options are found, they are copied (without the `'multiproc_'` prefix)\n into a new parser."}
{"_id": "q_11306", "text": "Searches for multiprocessing options in a given `dictionary`.\n\n If found they are copied (without the `'multiproc_'` prefix)\n into a new dictionary"}
{"_id": "q_11307", "text": "Checks for filenames within a config file and translates them.\n\n Moreover, directories for the files are created as well.\n\n :param log_config: Config file as a stream (like StringIO)"}
{"_id": "q_11308", "text": "Recursively walks and copies the `log_config` dict and searches for filenames.\n\n Translates filenames and creates directories if necessary."}
{"_id": "q_11309", "text": "Creates logging handlers and redirects stdout."}
{"_id": "q_11310", "text": "Finalizes the manager, closes and removes all handlers if desired."}
{"_id": "q_11311", "text": "Starts redirection of `stdout`"}
{"_id": "q_11312", "text": "Writes data from buffer to logger"}
{"_id": "q_11313", "text": "Compares two result instances\n\n Checks full name and all data. Does not consider the comment.\n\n :return: True or False\n\n :raises: ValueError if both inputs are no result instances"}
{"_id": "q_11314", "text": "Can be used to decorate a function as a manual run function.\n\n This can be helpful if you want the run functionality without using an environment.\n\n :param turn_into_run:\n\n If the trajectory should become a `single run` with more specialized functionality\n during a single run.\n\n :param store_meta_data:\n\n If meta-data like runtime should be automatically stored\n\n :param clean_up:\n\n If all data added during the single run should be removed, only works\n if ``turn_into_run=True``."}
{"_id": "q_11315", "text": "If there exist mutually exclusive parameters checks for them and maps param2 to 1."}
{"_id": "q_11316", "text": "This is a decorator which can be used if a kwarg has changed\n its name over versions to also support the old argument name.\n\n Issues a warning if the old keyword argument is detected and\n converts call to new API.\n\n :param old_name:\n\n Old name of the keyword argument\n\n :param new_name:\n\n New name of keyword argument"}
{"_id": "q_11317", "text": "Creates and runs BRIAN network based on the parameters in `traj`."}
{"_id": "q_11318", "text": "Simulation function for Euler integration.\n\n :param traj:\n\n Container for parameters and results\n\n :param diff_func:\n\n The differential equation we want to integrate"}
{"_id": "q_11319", "text": "Adds all necessary parameters to the `traj` container"}
{"_id": "q_11320", "text": "The Lorenz attractor differential equation\n\n :param value_array: 3d array containing the x,y, and z component values.\n :param sigma: Constant attractor parameter\n :param beta: Constant attractor parameter\n :param rho: Constant attractor parameter\n\n :return: 3d array of the Lorenz system evaluated at `value_array`"}
{"_id": "q_11321", "text": "Creates a storage service, to be extended if new storage services are added\n\n :param storage_service:\n\n Storage Service instance of constructor or a string pointing to a file\n\n :param trajectory:\n\n A trajectory instance\n\n :param kwargs:\n\n Arguments passed to the storage service\n\n :return:\n\n A storage service and a set of not used keyword arguments from kwargs"}
{"_id": "q_11322", "text": "Computes model equations for the excitatory and inhibitory population.\n\n Equation objects are created by fusing `model.eqs` and `model.synaptic.eqs`\n and replacing `PRE` by `i` (for inhibitory) or `e` (for excitatory) depending\n on the type of population.\n\n :return: Dictionary with 'i' equation object for inhibitory neurons and 'e' for excitatory"}
{"_id": "q_11323", "text": "Builds the neuron groups from `traj`.\n\n Adds the neuron groups to `brian_list` and `network_dict`."}
{"_id": "q_11324", "text": "Pre-builds the connections.\n\n Pre-build is only performed if none of the\n relevant parameters is explored and the relevant neuron groups\n exist.\n\n :param traj: Trajectory container\n\n :param brian_list:\n\n List of objects passed to BRIAN network constructor.\n\n Adds:\n\n Connections, amount depends on clustering\n\n :param network_dict:\n\n Dictionary of elements shared among the components\n\n Expects:\n\n 'neurons_i': Inhibitory neuron group\n\n 'neurons_e': Excitatory neuron group\n\n Adds:\n\n Connections, amount depends on clustering"}
{"_id": "q_11325", "text": "Builds the connections.\n\n Build is only performed if connections have not\n been pre-build.\n\n :param traj: Trajectory container\n\n :param brian_list:\n\n List of objects passed to BRIAN network constructor.\n\n Adds:\n\n Connections, amount depends on clustering\n\n :param network_dict:\n\n Dictionary of elements shared among the components\n\n Expects:\n\n 'neurons_i': Inhibitory neuron group\n\n 'neurons_e': Excitatory neuron group\n\n Adds:\n\n Connections, amount depends on clustering"}
{"_id": "q_11326", "text": "Adds all necessary parameters to `traj` container."}
{"_id": "q_11327", "text": "Computes Fano Factor for one neuron.\n\n :param spike_res:\n\n Result containing the spiketimes of all neurons\n\n :param neuron_id:\n\n Index of neuron for which FF is computed\n\n :param time_window:\n\n Length of the consecutive time windows to compute the FF\n\n :param start_time:\n\n Start time of measurement to consider\n\n :param end_time:\n\n End time of measurement to consider\n\n :return:\n\n Fano Factor (float) or\n returns 0 if mean firing activity is 0."}
{"_id": "q_11328", "text": "Adds monitors to the network if the measurement run is carried out.\n\n :param traj: Trajectory container\n\n :param network: The BRIAN network\n\n :param current_subrun: BrianParameter\n\n :param subrun_list: List of coming subrun_list\n\n :param network_dict:\n\n Dictionary of items shared among the components\n\n Expects:\n\n 'neurons_e': Excitatory neuron group\n\n Adds:\n\n 'monitors': List of monitors\n\n 0. SpikeMonitor of excitatory neurons\n\n 1. StateMonitor of membrane potential of some excitatory neurons\n (specified in `neuron_records`)\n\n 2. StateMonitor of excitatory synaptic currents of some excitatory neurons\n\n 3. State monitor of inhibitory currents of some excitatory neurons"}
{"_id": "q_11329", "text": "Plots a state variable graph for several neurons into one figure"}
{"_id": "q_11330", "text": "Makes some plots and stores them into subfolders"}
{"_id": "q_11331", "text": "Extracts monitor data and plots.\n\n Data extraction is done if all subruns have been completed,\n i.e. `len(subrun_list)==0`\n\n First, extracts results from the monitors and stores them into `traj`.\n\n Next, uses the extracted data for plots.\n\n :param traj:\n\n Trajectory container\n\n Adds:\n\n Data from monitors\n\n :param network: The BRIAN network\n\n :param current_subrun: BrianParameter\n\n :param subrun_list: List of coming subruns\n\n :param network_dict: Dictionary of items shared among all components"}
{"_id": "q_11332", "text": "Function that parses the batch id from the command line arguments"}
{"_id": "q_11333", "text": "Chooses exploration according to `batch`"}
{"_id": "q_11334", "text": "Alternative naming, you can use `node.vars.name` instead of `node.v_name`"}
{"_id": "q_11335", "text": "Alternative naming, you can use `node.func.name` instead of `node.f_func`"}
{"_id": "q_11336", "text": "Renames the tree node"}
{"_id": "q_11337", "text": "Sets some details for internal handling."}
{"_id": "q_11338", "text": "Maps a given node and a store_load constant to the message that is understood by\n the storage service."}
{"_id": "q_11339", "text": "Deletes a single node from the tree.\n\n Removes all references to the node.\n\n Note that the 'parameters', 'results', 'derived_parameters', and 'config' groups\n hanging directly below root cannot be deleted. Also the root node itself cannot be\n deleted. (This would cause a tremendous wave of uncontrollable self destruction, which\n would finally lead to the Apocalypse!)"}
{"_id": "q_11340", "text": "Removes a given node from the tree.\n\n Starts from a given node and walks recursively down the tree to the location of the node\n we want to remove.\n\n We need to walk from a start node in case we want to check on the way back whether we got\n empty group nodes due to deletion.\n\n :param actual_node: Current node\n\n :param split_name: DEQUE of names to get the next nodes.\n\n :param recursive:\n\n To also delete all children of a group node\n\n :return: True if node was deleted, otherwise False"}
{"_id": "q_11341", "text": "Maps a given shortcut to the corresponding name\n\n * 'run_X' or 'r_X' to 'run_XXXXXXXXX'\n\n * 'crun' to the current run name in case of a\n single run instance if trajectory is used via `v_crun`\n\n * 'par' to 'parameters'\n\n * 'dpar' to 'derived_parameters'\n\n * 'res' to 'results'\n\n * 'conf' to 'config'\n\n :return: True or False and the mapped name."}
{"_id": "q_11342", "text": "Adds the correct sub branch prefix to a given name.\n\n Usually the prefix is the full name of the parent node. In case items are added\n directly to the trajectory the prefixes are chosen according to the matching subbranch.\n\n For example, this could be 'parameters' for parameters or 'results.run_00000004' for\n results added to the fifth single run.\n\n :param split_names:\n\n List of names of the new node (e.g. ``['mynewgroupA', 'mynewgroupB', 'myresult']``).\n\n :param start_node:\n\n Parent node under which the new node should be added.\n\n :param group_type_name:\n\n Type name of subbranch the item belongs to\n (e.g. 'PARAMETER_GROUP', 'RESULT_GROUP' etc).\n\n\n :return: The name with the added prefix."}
{"_id": "q_11343", "text": "Determines types for generic additions"}
{"_id": "q_11344", "text": "Creates a link and checks if names are appropriate"}
{"_id": "q_11345", "text": "Checks if a list contains strings with invalid names.\n\n Returns a description of the name violations. If names are correct the empty\n string is returned.\n\n :param split_names: List of strings\n\n :param parent_node:\n\n The parental node from where to start (only applicable for node names)"}
{"_id": "q_11346", "text": "Generically creates a new group inferring from the `type_name`."}
{"_id": "q_11347", "text": "Generically creates a novel parameter or result instance inferring from the `type_name`.\n\n If the instance is already supplied it is NOT constructed new.\n\n :param parent_node:\n\n Parent trajectory node\n\n :param name:\n\n Name of the new result or parameter. Here the name no longer contains colons.\n\n :param type_name:\n\n Whether it is a parameter below parameters, config, derived parameters or whether\n it is a result.\n\n :param instance:\n\n The instance if it has been constructed somewhere else, otherwise None.\n\n :param constructor:\n\n A constructor used if instance needs to be constructed. If None the current standard\n constructor is chosen.\n\n :param args:\n\n Additional arguments passed to the constructor\n\n :param kwargs:\n\n Additional keyword arguments passed to the constructor\n\n :return: The new instance"}
{"_id": "q_11348", "text": "Renames a given `instance` based on `parent_node` and `name`.\n\n Adds meta information like depth as well."}
{"_id": "q_11349", "text": "Returns an iterator over a node's children.\n\n In case of using a trajectory as a run (setting 'v_crun') some sub branches\n that do not belong to the run are blinded out."}
{"_id": "q_11350", "text": "Fast search for a node in the tree.\n\n The tree is not traversed but the reference dictionaries are searched.\n\n :param node:\n\n Parent node to start from\n\n :param key:\n\n Name of node to find\n\n :param max_depth:\n\n Maximum depth.\n\n :param with_links:\n\n If we work with links then we can only be sure to have found the node in case we\n have a single match. Otherwise the other match might have been linked as well.\n\n :param crun:\n\n If given only nodes belonging to this particular run are searched and the rest\n is blinded out.\n\n :return: The found node and its depth\n\n :raises:\n\n TooManyGroupsError:\n\n If the search cannot be performed fast enough, an alternative search method is needed.\n\n NotUniqueNodeError:\n\n If several nodes match the key criterion"}
{"_id": "q_11351", "text": "Searches for an item in the tree below `node`\n\n :param node:\n\n The parent node below which the search is performed\n\n :param key:\n\n Name to search for. Can be the short name, the full name or parts of it\n\n :param max_depth:\n\n maximum search depth.\n\n :param with_links:\n\n If links should be considered\n\n :param crun:\n\n Used for very fast search if we know we operate in a single run branch\n\n :return: The found node and the depth it was found for"}
{"_id": "q_11352", "text": "Alternative naming, you can use `node.kids.name` instead of `node.name`\n for easier tab completion."}
{"_id": "q_11353", "text": "Can be called from storage service to create a new group to bypass name checking"}
{"_id": "q_11354", "text": "Creates a dummy object containing the whole tree to make unfolding easier.\n\n This method is only useful for debugging purposes.\n If you use an IDE and want to unfold the trajectory tree, you always need to\n open the private attribute `_children`. Use this function to create a new\n object that contains the tree structure in its attributes.\n\n Manipulating the returned object does not change the original tree!"}
{"_id": "q_11355", "text": "Removes a link with a given name from the current group node.\n\n Does not delete the link from the hard drive. If you want to do this,\n check out :func:`~pypet.trajectory.Trajectory.f_delete_links`"}
{"_id": "q_11356", "text": "Adds an empty generic leaf under the current node.\n\n You can add generic leaves anywhere you want. So you are free to build\n your trajectory tree with any structure. You do not necessarily have to follow the\n four subtrees `config`, `parameters`, `derived_parameters`, `results`.\n\n If you are operating within these subtrees this simply calls the corresponding adding\n function.\n\n Be aware that if you are within a single run and you add items not below a group\n `run_XXXXXXXX`, you have to save the items\n manually. Otherwise they will be lost after the single run is completed."}
{"_id": "q_11357", "text": "Removes a child of the group.\n\n Note that groups and leaves are only removed from the current trajectory in RAM.\n If the trajectory is stored to disk, this data is not affected. Thus, removing children\n can only be used to free RAM memory!\n\n If you want to free memory on disk via your storage service,\n use :func:`~pypet.trajectory.Trajectory.f_delete_items` of your trajectory.\n\n :param name:\n\n Name of child, naming by grouping is NOT allowed ('groupA.groupB.childC'),\n child must be direct successor of current node.\n\n :param recursive:\n\n Must be true if child is a group that has children. Will remove\n the whole subtree in this case. Otherwise a TypeError is thrown.\n\n :param predicate:\n\n Predicate which can evaluate for each node to ``True`` in order to remove the node or\n ``False`` if the node should be kept. Leave ``None`` if you want to remove all nodes.\n\n :raises:\n\n TypeError if recursive is false but there are children below the node.\n\n ValueError if child does not exist."}
{"_id": "q_11358", "text": "Checks if the node contains a specific parameter or result.\n\n It is checked if the item can be found via the\n :func:`~pypet.naturalnaming.NNGroupNode.f_get` method.\n\n :param item: Parameter/Result name or instance.\n\n If a parameter or result instance is supplied it is also checked if\n the provided item and the found item are exactly the same instance, i.e.\n `id(item)==id(found_item)`.\n\n :param with_links:\n\n If links are considered.\n\n :param shortcuts:\n\n If shortcuts is `False` the name you supply must\n be found in the tree WITHOUT hopping over nodes in between.\n If `shortcuts=False` and you supply a\n non-colon-separated (short) name, then the name must be found\n in the immediate children of your current node.\n Otherwise searching via shortcuts is allowed.\n\n :param max_depth:\n\n If shortcuts is `True` then the maximum search depth\n can be specified. `None` means no limit.\n\n :return: True or False"}
{"_id": "q_11359", "text": "Similar to `f_get`, but returns the default value if `name` is not found in the\n trajectory.\n\n This function uses the `f_get` method and will return the default value\n in case `f_get` raises an AttributeError or a DataNotInStorageError.\n Other errors are not handled.\n\n In contrast to `f_get`, fast access is True by default."}
{"_id": "q_11360", "text": "Returns a children dictionary.\n\n :param copy:\n\n Whether the group's original dictionary or a shallow copy is returned.\n If you want the real dictionary please do not modify it at all!\n\n :returns: Dictionary of nodes"}
{"_id": "q_11361", "text": "Returns a dictionary of groups hanging immediately below this group.\n\n :param copy:\n\n Whether the group's original dictionary or a shallow copy is returned.\n If you want the real dictionary please do not modify it at all!\n\n :returns: Dictionary of nodes"}
{"_id": "q_11362", "text": "Stores a child or recursively a subtree to disk.\n\n :param name:\n\n Name of child to store. If grouped ('groupA.groupB.childC') the path along the way\n to last node in the chain is stored. Shortcuts are NOT allowed!\n\n :param recursive:\n\n Whether recursively all children's children should be stored too.\n\n :param store_data:\n\n For how to choose 'store_data' see :ref:`more-on-storing`.\n\n :param max_depth:\n\n In case `recursive` is `True`, you can specify the maximum depth to store\n data relative from current node. Leave `None` if you don't want to limit\n the depth.\n\n :raises: ValueError if the child does not exist."}
{"_id": "q_11363", "text": "Loads a group from disk.\n\n :param recursive:\n\n Default is ``True``.\n Whether recursively all nodes below the current node should be loaded, too.\n Note that links are never evaluated recursively. Only the linked node\n will be loaded if it does not exist in the tree, yet. Any nodes or links\n of this linked node are not loaded.\n\n :param load_data:\n\n Flag how to load the data.\n For how to choose 'load_data' see :ref:`more-on-loading`.\n\n :param max_depth:\n\n In case `recursive` is `True`, you can specify the maximum depth to load\n data relative to the current node.\n\n :returns:\n\n The node itself."}
{"_id": "q_11364", "text": "Adds a parameter under the current node.\n\n There are two ways to add a new parameter either by adding a parameter instance:\n\n >>> new_parameter = Parameter('group1.group2.myparam', data=42, comment='Example!')\n >>> traj.f_add_parameter(new_parameter)\n\n Or by passing the values directly to the function, with the name being the first\n (non-keyword!) argument:\n\n >>> traj.f_add_parameter('group1.group2.myparam', 42, comment='Example!')\n\n If you want to create a different parameter than the standard parameter, you can\n give the constructor as the first (non-keyword!) argument followed by the name\n (non-keyword!):\n\n >>> traj.f_add_parameter(PickleParameter,'group1.group2.myparam', data=42, comment='Example!')\n\n The full name of the current node is added as a prefix to the given parameter name.\n If the current node is the trajectory the prefix `'parameters'` is added to the name.\n\n Note, all non-keyword and keyword parameters apart from the optional constructor\n are passed on as is to the constructor.\n\n Moreover, you always should specify a default data value of a parameter,\n even if you want to explore it later."}
{"_id": "q_11365", "text": "Adds a result under the current node.\n\n There are two ways to add a new result, either by adding a result instance:\n\n >>> new_result = Result('group1.group2.myresult', 1666, x=3, y=4, comment='Example!')\n >>> traj.f_add_result(new_result)\n\n Or by passing the values directly to the function, with the name being the first\n (non-keyword!) argument:\n\n >>> traj.f_add_result('group1.group2.myresult', 1666, x=3, y=3, comment='Example!')\n\n\n If you want to create a different result than the standard result, you can\n give the constructor as the first (non-keyword!) argument followed by the name\n (non-keyword!):\n\n >>> traj.f_add_result(PickleResult,'group1.group2.myresult', 1666, x=3, y=3, comment='Example!')\n\n Additional arguments (here `1666`) or keyword arguments (here `x=3, y=3`) are passed\n onto the constructor of the result.\n\n\n Adds the full name of the current node as prefix to the name of the result.\n If current node is a single run (root) adds the prefix `'results.runs.run_%08d'` to the\n full name where `'%08d'` is replaced by the index of the current run."}
{"_id": "q_11366", "text": "Adds a derived parameter under the current group.\n\n Similar to\n :func:`~pypet.naturalnaming.ParameterGroup.f_add_parameter`\n\n Naming prefixes are added as in\n :func:`~pypet.naturalnaming.DerivedParameterGroup.f_add_derived_parameter_group`"}
{"_id": "q_11367", "text": "Adds an empty config group under the current node.\n\n Adds the full name of the current node as prefix to the name of the group.\n If current node is the trajectory (root), the prefix `'config'` is added to the full name.\n\n The `name` can also contain subgroups separated via colons, for example:\n `name=subgroup1.subgroup2.subgroup3`. These other parent groups will be automatically\n be created."}
{"_id": "q_11368", "text": "Adds a config parameter under the current group.\n\n Similar to\n :func:`~pypet.naturalnaming.ParameterGroup.f_add_parameter`.\n\n If current group is the trajectory the prefix `'config'` is added to the name."}
{"_id": "q_11369", "text": "The fitness function"}
{"_id": "q_11370", "text": "Adds commit information to the trajectory."}
{"_id": "q_11371", "text": "Nests a given flat dictionary.\n\n Nested keys are created by splitting given keys around the `separator`."}
{"_id": "q_11372", "text": "Plots a progress bar to the given `logger` for large for loops.\n\n To be used inside a for-loop at the end of the loop:\n\n .. code-block:: python\n\n for irun in range(42):\n my_costly_job() # Your expensive function\n progressbar(index=irun, total=42, reprint=True) # shows a growing progressbar\n\n\n There is no initialisation of the progressbar necessary before the for-loop.\n The progressbar will be reset automatically if used in another for-loop.\n\n :param index: Current index of for-loop\n :param total: Total size of for-loop\n :param percentage_step: Steps with which the bar should be plotted\n :param logger:\n\n Logger to write to - with level INFO. If string 'print' is given, the print statement is\n used. Use ``None`` if you don't want to print or log the progressbar statement.\n\n :param log_level: Log level with which to log.\n :param reprint:\n\n If no new line should be plotted but carriage return (works only for printing)\n\n :param time: If the remaining time should be estimated and displayed\n :param length: Length of the bar in `=` signs.\n :param fmt_string:\n\n A string which contains exactly one `%s` in order to incorporate the progressbar.\n If such a string is given, ``fmt_string % progressbar`` is printed/logged.\n\n :param reset:\n\n If the progressbar should be restarted. If progressbar is called with a lower\n index than the one before, the progressbar is automatically restarted.\n\n :return:\n\n The progressbar string or `None` if the string has not been updated."}
{"_id": "q_11373", "text": "Helper function to support both Python versions"}
{"_id": "q_11374", "text": "Takes a function and keyword arguments and returns the ones that can be passed."}
{"_id": "q_11375", "text": "Formats timestamp to human readable format"}
{"_id": "q_11376", "text": "Returns local tcp address for a given `port`, automatic port if `None`"}
{"_id": "q_11377", "text": "Resets the progressbar to start a new one"}
{"_id": "q_11378", "text": "Removes `key` from annotations"}
{"_id": "q_11379", "text": "Turns a given shared data item into an ordinary one.\n\n :param result: Result container with shared data\n :param key: The name of the shared data\n :param trajectory:\n\n The trajectory, only needed if shared data has\n no access to the trajectory, yet.\n\n :param reload: If data should be reloaded after conversion\n\n :return: The result"}
{"_id": "q_11380", "text": "Turns an ordinary data item into a shared one.\n\n Removes the old result from the trajectory and replaces it.\n Empties the given result.\n\n :param result: The result containing ordinary data\n :param key: Name of ordinary data item\n :param trajectory: Trajectory container\n :param new_class:\n\n Class of new shared data item.\n Leave `None` for automatic detection.\n\n :return: The `result`"}
{"_id": "q_11381", "text": "Creates shared data on disk with a StorageService.\n\n Needs to be called before shared data can be used later on.\n\n Actual arguments of ``kwargs`` depend on the type of data to be\n created. For instance, when creating an array one can use the keyword\n ``obj`` to pass a numpy array (``obj=np.zeros((10,20,30))``).\n Whereas a PyTables table may need a description dictionary\n (``description={'column_1': pt.StringCol(2, pos=0),'column_2': pt.FloatCol( pos=1)}``).\n Refer to the PyTables documentation on how to create tables."}
{"_id": "q_11382", "text": "Interface with the underlying storage.\n\n Passes request to the StorageService that performs the appropriate action.\n For example, given a shared table ``t``,\n ``t.remove_row(4)`` is parsed into ``request='remove_row', args=(4,)`` and\n passed onto the storage service. In case of the HDF5StorageService,\n this is again translated back into ``hdf5_table_node.remove_row(4)``."}
{"_id": "q_11383", "text": "Returns the actual node of the underlying data.\n\n In case one uses HDF5 this will be the HDF5 leaf node."}
{"_id": "q_11384", "text": "Checks if outer data structure is supported."}
{"_id": "q_11385", "text": "Calls the corresponding function of the shared data item"}
{"_id": "q_11386", "text": "Handles locking of locks\n\n If a lock is already locked sends a WAIT command,\n else LOCKs it and sends GO.\n\n Complains if a given client re-locks a lock without releasing it first."}
{"_id": "q_11387", "text": "Notifies the Server to shutdown"}
{"_id": "q_11388", "text": "Closes socket and terminates context\n\n NO-OP if already closed."}
{"_id": "q_11389", "text": "Starts a connection to the server if it does not yet exist.\n\n NO-OP if connection is already established.\n Performs a ping-pong test as well if desired."}
{"_id": "q_11390", "text": "Returns response and number of retries"}
{"_id": "q_11391", "text": "Acquires lock and returns `True`\n\n Blocks until lock is available."}
{"_id": "q_11392", "text": "Handles listening requests from the client.\n\n There are 4 types of requests:\n\n 1- Checks for space in the queue\n 2- Tests the socket\n 3- If there is space, sends data\n 4- After data is sent, puts it into the queue for storing"}
{"_id": "q_11393", "text": "Handles data and returns `True` if everything is done, otherwise `False`."}
{"_id": "q_11394", "text": "Starts listening to the queue."}
{"_id": "q_11395", "text": "Gets data from pipe"}
{"_id": "q_11396", "text": "Simply keeps a reference to the stored data"}
{"_id": "q_11397", "text": "Stores references to disk and may collect garbage."}
{"_id": "q_11398", "text": "Decorator wrapping the environment to use a config file"}
{"_id": "q_11399", "text": "Collects all settings within a section"}
{"_id": "q_11400", "text": "Copies parsed arguments into the kwargs passed to the environment"}
{"_id": "q_11401", "text": "Adds parameters and config from the `.ini` file to the trajectory"}
{"_id": "q_11402", "text": "Converts a rule given as an integer into a binary list representation.\n\n It reads from left to right (contrary to the Wikipedia article given below),\n i.e. the 2**0 is found on the left hand side and 2**7 on the right.\n\n For example:\n\n ``convert_rule(30)`` returns [0, 1, 1, 1, 1, 0, 0, 0]\n\n\n The resulting binary list can be interpreted as\n the following transition table:\n\n neighborhood new cell state\n 000 0\n 001 1\n 010 1\n 011 1\n 100 1\n 101 0\n 110 0\n 111 0\n\n For more information about this rule\n see: http://en.wikipedia.org/wiki/Rule_30"}
{"_id": "q_11403", "text": "Creates an initial state for the automaton.\n\n :param name:\n\n Either ``'single'`` for a single live cell in the middle of the cell ring,\n or ``'random'`` for a uniformly distributed random pattern of zeros and ones.\n\n :param ncells: Number of cells in the automaton\n\n :param seed: Random number seed for the ``'random'`` condition\n\n :return: Numpy array of zeros and ones (or just one lonely one surrounded by zeros)\n\n :raises: ValueError if the ``name`` is unknown"}
{"_id": "q_11404", "text": "Plots an automaton ``pattern`` and stores the image under a given ``filename``.\n\n For axes labels the ``rule_number`` is also required."}
{"_id": "q_11405", "text": "Simulates a 1 dimensional cellular automaton.\n\n :param initial_state:\n\n The initial state of *dead* and *alive* cells as a 1D numpy array.\n Its length determines the size of the simulation.\n\n :param rule_number:\n\n The update rule as an integer from 0 to 255.\n\n :param steps:\n\n Number of cell iterations\n\n :return:\n\n A 2D numpy array (steps x len(initial_state)) containing zeros and ones representing\n the automaton development over time."}
{"_id": "q_11406", "text": "Main simulation function"}
{"_id": "q_11407", "text": "Convert a numeric timestamp to a timezone-aware datetime.\n\n A client may override this function to change the default behavior,\n such as to use local time or timezone-na\u00efve times."}
{"_id": "q_11408", "text": "Construct a DelayedCommand to come due at `at`, where `at` may be\n a datetime or timestamp."}
{"_id": "q_11409", "text": "Schedule a command to run at a specific time each day."}
{"_id": "q_11410", "text": "Signals the process timer.\n\n If more time than the display time has passed a message is emitted."}
{"_id": "q_11411", "text": "Direct link to the overview group"}
{"_id": "q_11412", "text": "Loads several items from an iterable\n\n Iterables are supposed to be of a format like `[(msg, item, args, kwarg),...]`\n If `args` and `kwargs` are not part of a tuple, they are taken from the\n current `args` and `kwargs` provided to this function."}
{"_id": "q_11413", "text": "Reads out the properties for storing new data into the hdf5 file\n\n :param traj:\n\n The trajectory"}
{"_id": "q_11414", "text": "Routine to close an hdf5 file\n\n The file is closed only when `closing=True`. `closing=True` means that\n the file was opened in the current highest recursion level. This prevents re-opening\n and closing of the file if `store` or `load` are called recursively."}
{"_id": "q_11415", "text": "Extracts file information from kwargs.\n\n Note that `kwargs` is not passed as `**kwargs` in order to also\n `pop` the elements on the level of the function calling `_srvc_extract_file_information`."}
{"_id": "q_11416", "text": "Prepares a trajectory for merging.\n\n This function will already store extended parameters.\n\n :param traj: Target of merge\n :param changed_parameters: List of extended parameters (i.e. their names)."}
{"_id": "q_11417", "text": "Loads meta information about the trajectory\n\n Checks that the version number does not differ from the current pypet version.\n Loads comment, timestamp, name, and version from disk in case the trajectory is not loaded\n as new. Updates the run information as well."}
{"_id": "q_11418", "text": "Checks for version mismatch\n\n Raises a VersionMismatchError if the version of the loaded trajectory and the current pypet\n version do not match. In case of `force=True` the error is not raised; only a warning is\n emitted."}
{"_id": "q_11419", "text": "Fills the `run` overview table with information.\n\n Will also update new information."}
{"_id": "q_11420", "text": "Recalls names of all explored parameters"}
{"_id": "q_11421", "text": "Creates the overview tables in overview group"}
{"_id": "q_11422", "text": "Stores a trajectory to an hdf5 file\n\n Stores all groups, parameters and results"}
{"_id": "q_11423", "text": "Stores a node to hdf5 and if desired stores recursively everything below it.\n\n :param parent_traj_node: The parental node\n :param name: Name of node to be stored\n :param store_data: How to store data\n :param with_links: If links should be stored\n :param recursive: Whether to store recursively the subtree\n :param max_depth: Maximum recursion depth in tree\n :param current_depth: Current depth\n :param parent_hdf5_group: Parent hdf5 group"}
{"_id": "q_11424", "text": "Stores a single row into an overview table\n\n :param instance: A parameter or result instance\n\n :param table: Table where row will be inserted\n\n :param flags:\n\n Flags how to insert into the table. Potential Flags are\n `ADD_ROW`, `REMOVE_ROW`, `MODIFY_ROW`\n\n :param additional_info:\n\n Dictionary containing information that cannot be extracted from\n `instance`, but needs to be inserted, too."}
{"_id": "q_11425", "text": "Creates a new table, or if the table already exists, returns it."}
{"_id": "q_11426", "text": "Returns an HDF5 node by the path specified in `name`"}
{"_id": "q_11427", "text": "Stores original data type to hdf5 node attributes for preserving the data type.\n\n :param data:\n\n Data to be stored\n\n :param ptitem:\n\n HDF5 node to store data types as attributes. Can also be just a PTItemMock.\n\n :param prefix:\n\n String prefix to label and name data in HDF5 attributes"}
{"_id": "q_11428", "text": "Checks if loaded data has the type it was stored in. If not, converts it.\n\n :param data: Data item to be checked and converted\n :param ptitem: HDF5 Node or Leaf from where data was loaded\n :param prefix: Prefix for recalling the data type from the hdf5 node attributes\n\n :return:\n\n Tuple, first item is the (converted) `data` item, second a boolean whether\n the item was converted or not."}
{"_id": "q_11429", "text": "Adds or changes a row in a pytable.\n\n :param item_name: Name of item, the row is about, only important for throwing errors.\n\n :param insert_dict:\n\n Dictionary of data that is about to be inserted into the pytables row.\n\n :param table:\n\n The table to insert or modify a row in\n\n :param index:\n\n Index of row to be modified. Instead of an index a search condition can be\n used as well, see below.\n\n :param condition:\n\n Condition to search for in the table\n\n :param condvars:\n\n Variables for the search condition\n\n :param flags:\n\n Flags whether to add, modify, or remove a row in the table"}
{"_id": "q_11430", "text": "Creates or returns a group"}
{"_id": "q_11431", "text": "Stores annotations into an hdf5 file."}
{"_id": "q_11432", "text": "Stores a group node.\n\n For group nodes only annotations and comments need to be stored."}
{"_id": "q_11433", "text": "Loads a group node and potentially everything recursively below"}
{"_id": "q_11434", "text": "Reloads skeleton data of a tree node"}
{"_id": "q_11435", "text": "Extracts storage flags for data in `data_dict`\n if they were not specified in `flags_dict`.\n\n See :const:`~pypet.storageservice.HDF5StorageService.TYPE_FLAG_MAPPING`\n for how to store different types of data per default."}
{"_id": "q_11436", "text": "Adds information to overview tables and meta information to\n the `instance`'s hdf5 `group`.\n\n :param instance: Instance to store meta info about\n :param group: HDF5 group of instance\n :param overwrite: If data should be explicitly overwritten"}
{"_id": "q_11437", "text": "Stores a `store_dict`"}
{"_id": "q_11438", "text": "Stores a parameter or result to hdf5.\n\n :param instance:\n\n The instance to be stored\n\n :param store_data:\n\n How to store data\n\n :param store_flags:\n\n Dictionary containing how to store individual data, usually empty.\n\n :param overwrite:\n\n Instructions how to overwrite data\n\n :param with_links:\n\n Placeholder because leaves have no links\n\n :param recursive:\n\n Placeholder, because leaves have no children\n\n :param _hdf5_group:\n\n The hdf5 group for storing the parameter or result\n\n :param _newly_created:\n\n If it should be created in a new form"}
{"_id": "q_11439", "text": "Creates an array that can be used with an HDF5 array object"}
{"_id": "q_11440", "text": "Creates a new empty table"}
{"_id": "q_11441", "text": "Stores a pandas DataFrame into hdf5.\n\n :param key:\n\n Name of data item to store\n\n :param data:\n\n Pandas data to store\n\n :param group:\n\n Group node where to store data in hdf5 file\n\n :param fullname:\n\n Full name of the `data_to_store`'s original container, only needed for throwing errors.\n\n :param flag:\n\n If it is a series, frame or panel"}
{"_id": "q_11442", "text": "Stores data as array.\n\n :param key:\n\n Name of data item to store\n\n :param data:\n\n Data to store\n\n :param group:\n\n Group node where to store data in hdf5 file\n\n :param fullname:\n\n Full name of the `data_to_store`'s original container, only needed for throwing errors.\n\n :param recall:\n\n If container type and data type for perfect recall should be stored"}
{"_id": "q_11443", "text": "Stores data as pytable.\n\n :param tablename:\n\n Name of the data table\n\n :param data:\n\n Data to store\n\n :param hdf5_group:\n\n Group node where to store data in hdf5 file\n\n :param fullname:\n\n Full name of the `data_to_store`'s original container, only needed for throwing errors."}
{"_id": "q_11444", "text": "Returns a description dictionary for pytables table creation"}
{"_id": "q_11445", "text": "Creates a pytables column instance.\n\n The type of column depends on the type of `column[0]`.\n Note that data in `column` must be homogeneous!"}
{"_id": "q_11446", "text": "Returns the longest string size for a string entry across data."}
{"_id": "q_11447", "text": "Loads into dictionary"}
{"_id": "q_11448", "text": "Loads data that was originally a dictionary when stored\n\n :param leaf:\n\n PyTables table containing the dictionary data\n\n :param full_name:\n\n Full name of the parameter or result whose data is to be loaded\n\n :return:\n\n Data to be loaded"}
{"_id": "q_11449", "text": "Reads shared data and constructs the appropraite class.\n\n :param shared_node:\n\n hdf5 node storing the pandas DataFrame\n\n :param full_name:\n\n Full name of the parameter or result whose data is to be loaded\n\n :return:\n\n Data to load"}
{"_id": "q_11450", "text": "Reads a non-nested PyTables table column by column and created a new ObjectTable for\n the loaded data.\n\n :param table_or_group:\n\n PyTables table to read from or a group containing subtables.\n\n :param full_name:\n\n Full name of the parameter or result whose data is to be loaded\n\n :return:\n\n Data to be loaded"}
{"_id": "q_11451", "text": "Reads data from an array or carray\n\n :param array:\n\n PyTables array or carray to read from\n\n :param full_name:\n\n Full name of the parameter or result whose data is to be loaded\n\n :return:\n\n Data to load"}
{"_id": "q_11452", "text": "Helper function that creates a novel trajectory and loads it from disk.\n\n For the parameters see :func:`~pypet.trajectory.Trajectory.f_load`.\n\n ``new_name`` and ``add_time`` are only used in case ``as_new`` is ``True``.\n Accordingly, they determine the new name of trajectory."}
{"_id": "q_11453", "text": "Creates a run set name based on ``idx``"}
{"_id": "q_11454", "text": "Sets properties like ``v_fast_access``.\n\n For example: ``traj.f_set_properties(v_fast_access=True, v_auto_load=False)``"}
{"_id": "q_11455", "text": "Adds classes or paths to classes to the trajectory to create custom parameters.\n\n :param dynamic_imports:\n\n If you've written custom parameter that needs to be loaded dynamically during runtime,\n this needs to be specified here as a list of classes or strings naming classes\n and there module paths. For example:\n `dynamic_imports = ['pypet.parameter.PickleParameter',MyCustomParameter]`\n\n If you only have a single class to import, you do not need the list brackets:\n `dynamic_imports = 'pypet.parameter.PickleParameter'`"}
{"_id": "q_11456", "text": "Makes the trajectory iterate over all runs.\n\n :param start: Start index of run\n\n :param stop: Stop index, leave ``None`` for length of trajectory\n\n :param step: Stepsize\n\n :param yields:\n\n What should be yielded: ``'name'`` of run, ``idx`` of run\n or ``'self'`` to simply return the trajectory container.\n\n You can also pick ``'copy'`` to get **shallow** copies (ie the tree is copied but\n no leave nodes except explored ones.) of your trajectory,\n might lead to some of overhead.\n\n Note that after a full iteration, the trajectory is set back to normal.\n\n Thus, the following code snippet\n\n ::\n\n for run_name in traj.f_iter_runs():\n\n # Do some stuff here...\n\n\n is equivalent to\n\n ::\n\n for run_name in traj.f_get_run_names(sort=True):\n traj.f_set_crun(run_name)\n\n # Do some stuff here...\n\n traj.f_set_crun(None)\n\n\n :return:\n\n Iterator over runs. The iterator itself will return the run names but modify\n the trajectory in each iteration and set it back do normal in the end."}
{"_id": "q_11457", "text": "Searches for all occurrences of `name` in each run.\n\n Generates an ordered dictionary with the run names or indices as keys and\n found items as values.\n\n Example:\n\n >>> traj.f_get_from_runs(self, 'deep.universal_answer', use_indices=True, fast_access=True)\n OrderedDict([(0, 42), (1, 42), (2, 'fortytwo), (3, 43)])\n\n\n :param name:\n\n String description of the item(s) to find.\n Cannot be full names but the part of the names that are below\n a `run_XXXXXXXXX` group.\n\n :param include_default_run:\n\n If results found under ``run_ALL`` should be accounted for every run or simply be\n ignored.\n\n :param use_indices:\n\n If `True` the keys of the resulting dictionary are the run indices\n (e.g. 0,1,2,3), otherwise the keys are run names (e.g. `run_00000000`,\n `run_000000001`)\n\n :param fast_access:\n\n Whether to return parameter or result instances or the values handled by these.\n\n :param with_links:\n\n If links should be considered\n\n :param shortcuts:\n\n If shortcuts are allowed and the trajectory can *hop* over nodes in the\n path.\n\n :param max_depth:\n\n Maximum depth (relative to start node) how search should progress in tree.\n `None` means no depth limit. Only relevant if `shortcuts` are allowed.\n\n :param auto_load:\n\n If data should be loaded from the storage service if it cannot be found in the\n current trajectory tree. Auto-loading will load group and leaf nodes currently\n not in memory and it will load data into empty leaves. Be aware that auto-loading\n does not work with shortcuts.\n\n :return:\n\n Ordered dictionary with run names or indices as keys and found items as values.\n Will only include runs where an item was actually found."}
{"_id": "q_11458", "text": "Private function such that it can still be called by the environment during\n a single run"}
{"_id": "q_11459", "text": "Called if trajectory is expanded, deletes all explored parameters from disk"}
{"_id": "q_11460", "text": "Pass a ``node`` to insert the full tree to the trajectory.\n\n Considers all links in the given node!\n Ignored nodes already found in the current trajectory.\n\n :param node: The node to insert\n\n :param copy_leaves:\n\n If leaves should be **shallow** copied or simply referred to by both trees.\n **Shallow** copying is established using the copy module.\n\n Accepts the setting ``'explored'`` to only copy explored parameters.\n Note that ``v_full_copy`` determines how these will be copied.\n\n :param overwrite:\n\n If existing elemenst should be overwritten. Requries ``__getstate__`` and\n ``__setstate__`` being implemented in the leaves.\n\n :param with_links: If links should be ignored or followed and copied as well\n\n :return: The corresponding (new) node in the tree."}
{"_id": "q_11461", "text": "Prepares the trajectory to explore the parameter space.\n\n\n To explore the parameter space you need to provide a dictionary with the names of the\n parameters to explore as keys and iterables specifying the exploration ranges as values.\n\n All iterables need to have the same length otherwise a ValueError is raised.\n A ValueError is also raised if the names from the dictionary map to groups or results\n and not parameters.\n\n If your trajectory is already explored but not stored yet and your parameters are\n not locked you can add new explored parameters to the current ones if their\n iterables match the current length of the trajectory.\n\n Raises an AttributeError if the names from the dictionary are not found at all in\n the trajectory and NotUniqueNodeError if the keys not unambiguously map\n to single parameters.\n\n Raises a TypeError if the trajectory has been stored already, please use\n :func:`~pypet.trajectory.Trajectory.f_expand` then instead.\n\n Example usage:\n\n >>> traj.f_explore({'groupA.param1' : [1,2,3,4,5], 'groupA.param2':['a','b','c','d','e']})\n\n Could also be called consecutively:\n\n >>> traj.f_explore({'groupA.param1' : [1,2,3,4,5]})\n >>> traj.f_explore({'groupA.param2':['a','b','c','d','e']})\n\n NOTE:\n\n Since parameters are very conservative regarding the data they accept\n (see :ref:`type_conservation`), you sometimes won't be able to use Numpy arrays\n for exploration as iterables.\n\n For instance, the following code snippet won't work:\n\n ::\n\n import numpy a np\n from pypet.trajectory import Trajectory\n traj = Trajectory()\n traj.f_add_parameter('my_float_parameter', 42.4,\n comment='My value is a standard python float')\n\n traj.f_explore( { 'my_float_parameter': np.arange(42.0, 44.876, 0.23) } )\n\n\n This will result in a `TypeError` because your exploration iterable\n `np.arange(42.0, 44.876, 0.23)` contains `numpy.float64` values\n whereas you parameter is supposed to use standard python 
floats.\n\n Yet, you can use Numpys `tolist()` function to overcome this problem:\n\n ::\n\n traj.f_explore( { 'my_float_parameter': np.arange(42.0, 44.876, 0.23).tolist() } )\n\n\n Or you could specify your parameter directly as a numpy float:\n\n ::\n\n traj.f_add_parameter('my_float_parameter', np.float64(42.4),\n comment='My value is a numpy 64 bit float')"}
{"_id": "q_11462", "text": "Overwrites the run information of a particular run"}
{"_id": "q_11463", "text": "Locks all non-empty parameters"}
{"_id": "q_11464", "text": "Loads the full skeleton from the storage service.\n\n This needs to be done after a successful exploration in order to update the\n trajectory tree with all results and derived parameters from the individual single runs.\n This will only add empty results and derived parameters (i.e. the skeleton)\n and load annotations."}
{"_id": "q_11465", "text": "Loads a trajectory via the storage service.\n\n\n If you want to load individual results or parameters manually, you can take\n a look at :func:`~pypet.trajectory.Trajectory.f_load_items`.\n To only load subtrees check out :func:`~pypet.naturalnaming.NNGroupNode.f_load_child`.\n\n\n For `f_load` you can pass the following arguments:\n\n :param name:\n\n Name of the trajectory to be loaded. If no name or index is specified\n the current name of the trajectory is used.\n\n :param index:\n\n If you don't specify a name you can specify an integer index instead.\n The corresponding trajectory in the hdf5 file at the index\n position is loaded (counting starts with 0). Negative indices are also allowed\n counting in reverse order. For instance, `-1` refers to the last trajectory in\n the file, `-2` to the second last, and so on.\n\n :param as_new:\n\n Whether you want to rerun the experiments. So the trajectory is loaded only\n with parameters. The current trajectory name is kept in this case, which should be\n different from the trajectory name specified in the input parameter `name`.\n If you load `as_new=True` all parameters are unlocked.\n If you load `as_new=False` the current trajectory is replaced by the one on disk,\n i.e. name, timestamp, formatted time etc. 
are all taken from disk.\n\n :param load_parameters: How parameters and config items are loaded\n\n :param load_derived_parameters: How derived parameters are loaded\n\n :param load_results: How results are loaded\n\n You can specify how to load the parameters, derived parameters and results\n as follows:\n\n * :const:`pypet.pypetconstants.LOAD_NOTHING`: (0)\n\n Nothing is loaded.\n\n * :const:`pypet.pypetconstants.LOAD_SKELETON`: (1)\n\n The skeleton is loaded including annotations (See :ref:`more-on-annotations`).\n This means that only empty\n *parameter* and *result* objects will\n be created and you can manually load the data into them afterwards.\n Note that :class:`pypet.annotations.Annotations` do not count as data and they\n will be loaded because they are assumed to be small.\n\n * :const:`pypet.pypetconstants.LOAD_DATA`: (2)\n\n The whole data is loaded. Note in case you have non-empty leaves\n already in RAM, these are left untouched.\n\n * :const:`pypet.pypetconstants.OVERWRITE_DATA`: (3)\n\n As before, but non-empty nodes are emptied and reloaded.\n\n Note that in all cases except :const:`pypet.pypetconstants.LOAD_NOTHING`,\n annotations will be reloaded if the corresponding instance\n is created or the annotations of an existing instance were emptied before.\n\n :param recursive:\n\n If data should be loaded recursively. If set to `None`, this is equivalent to\n set all data loading to `:const:`pypet.pypetconstants.LOAD_NOTHING`.\n\n :param load_data:\n\n As the above, per default set to `None`. If not `None` the setting of `load_data`\n will overwrite the settings of `load_parameters`, `load_derived_parameters`,\n `load_results`, and `load_other_data`. 
This is more or less or shortcut if all\n types should be loaded the same.\n\n :param max_depth:\n\n Maximum depth to load nodes (inclusive).\n\n :param force:\n\n *pypet* will refuse to load trajectories that have been created using *pypet* with a\n different version number or a different python version.\n To force the load of a trajectory from a previous version\n simply set ``force = True``. Note that it is not checked if other versions of packages\n differ from previous experiments, i.e. numpy, scipy, etc. But you can check\n for this manually. The versions of other packages can be found under\n ``'config.environment.name_of_environment.versions.package_name'``.\n\n :param dynamic_imports:\n\n If you've written a custom parameter that needs to be loaded dynamically\n during runtime, this needs to be specified here as a list of classes or\n strings naming classes and there module paths. For example:\n `dynamic_imports = ['pypet.parameter.PickleParameter',MyCustomParameter]`\n\n If you only have a single class to import, you do not need the list brackets:\n `dynamic_imports = 'pypet.parameter.PickleParameter'`\n\n The classes passed here are added for good and will be kept by the trajectory.\n Please add your dynamically imported classes only once.\n\n :param with_run_information:\n\n If information about the individual runs should be loaded. If you have many\n runs, like 1,000,000 or more you can spare time by setting\n `with_run_information=False`.\n Note that `f_get_run_information` and `f_idx_to_run` do not work in such a case.\n Moreover, setting `v_idx` does not work either. If you load the trajectory\n without this information, be careful, this is not recommended.\n\n :param wiht_meta_data:\n\n If meta data should be loaded.\n\n :param storage_service:\n\n Pass a storage service used by the trajectory. Alternatively pass a constructor\n and other ``**kwargs`` are passed onto the constructor. 
Leave `None` in combination\n with using no other kwargs, if you don't want to change the service\n the trajectory is currently using.\n\n :param kwargs:\n\n Other arguments passed to the storage service constructor. Don't pass any\n other kwargs and ``storage_service=None``,\n if you don't want to change the current service."}
{"_id": "q_11466", "text": "Backs up the trajectory with the given storage service.\n\n Arguments of ``kwargs`` are directly passed to the storage service,\n for the HDF5StorageService you can provide the following argument:\n\n :param backup_filename:\n\n Name of file where to store the backup.\n\n In case you use the standard HDF5 storage service and `backup_filename=None`,\n the file will be chosen automatically.\n The backup file will be in the same folder as your hdf5 file and\n named 'backup_XXXXX.hdf5' where 'XXXXX' is the name of your current trajectory."}
{"_id": "q_11467", "text": "Can be used to merge several `other_trajectories` into your current one.\n\n IMPORTANT `backup=True` only backs up the current trajectory not any of\n the `other_trajectories`. If you need a backup of these, do it manually.\n\n Parameters as for :func:`~pypet.trajectory.Trajectory.f_merge`."}
{"_id": "q_11468", "text": "Renames a full name based on the wildcards and a particular run"}
{"_id": "q_11469", "text": "Merges derived parameters that have the `run_ALL` in a name.\n\n Creates a new parameter with the name of the first new run and links to this\n parameter to avoid copying in all other runs."}
{"_id": "q_11470", "text": "Merges all links"}
{"_id": "q_11471", "text": "Merges meta data about previous merges, git commits, and environment settings\n of the other trajectory into the current one."}
{"_id": "q_11472", "text": "Merges all results.\n\n :param rename_dict:\n\n Dictionary that is filled with the names of results in the `other_trajectory`\n as keys and the corresponding new names in the current trajectory as values.\n Note for results kept under trajectory run branch there is actually no need to\n change the names. So we will simply keep the original name."}
{"_id": "q_11473", "text": "Can be called to rename and relocate the trajectory.\n\n :param new_name: New name of the trajectory, None if you do not want to change the name.\n\n :param in_store:\n\n Set this to True if the trajectory has been stored with the new name at the new\n file before and you just want to \"switch back\" to the location.\n If you migrate to a store used before and you do not set `in_store=True`,\n the storage service will throw a RuntimeError in case you store the Trajectory\n because it will assume that you try to store a new trajectory that accidentally has\n the very same name as another trajectory. If set to `True` and trajectory is not found\n in the file, the trajectory is simply stored to the file.\n\n :param new_storage_service:\n\n New service where you want to migrate to. Leave none if you want to keep the olde one.\n\n :param kwargs:\n\n Additional keyword arguments passed to the service.\n For instance, to change the file of the trajectory use ``filename='my_new_file.hdf5``."}
{"_id": "q_11474", "text": "Restores the default value in all explored parameters and sets the\n v_idx property back to -1 and v_crun to None."}
{"_id": "q_11475", "text": "Notifies the explored parameters what current point in the parameter space\n they should represent."}
{"_id": "q_11476", "text": "Returns a dictionary containing information about a single run.\n\n ONLY useful during a single run if ``v_full_copy` was set to ``True``.\n Otherwise only the current run is available.\n\n The information dictionaries have the following key, value pairings:\n\n * completed: Boolean, whether a run was completed\n\n * idx: Index of a run\n\n * timestamp: Timestamp of the run as a float\n\n * time: Formatted time string\n\n * finish_timestamp: Timestamp of the finishing of the run\n\n * runtime: Total runtime of the run in human readable format\n\n * name: Name of the run\n\n * parameter_summary:\n\n A string summary of the explored parameter settings for the particular run\n\n * short_environment_hexsha: The short version of the environment SHA-1 code\n\n\n If no name or idx is given then a nested dictionary with keys as run names and\n info dictionaries as values is returned.\n\n :param name_or_idx: str or int\n\n :param copy:\n\n Whether you want the dictionary used by the trajectory or a copy. Note if\n you want the real thing, please do not modify it, i.e. popping or adding stuff. This\n could mess up your whole trajectory.\n\n :return:\n\n A run information dictionary or a nested dictionary of information dictionaries\n with the run names as keys."}
{"_id": "q_11477", "text": "Finds a single run index given a particular condition on parameters.\n\n ONLY useful for a single run if ``v_full_copy` was set to ``True``.\n Otherwise a TypeError is thrown.\n\n :param name_list:\n\n A list of parameter names the predicate applies to, if you have only a single\n parameter name you can omit the list brackets.\n\n :param predicate:\n\n A lambda predicate for filtering that evaluates to either ``True`` or ``False``\n\n :return: A generator yielding the matching single run indices\n\n Example:\n\n >>> predicate = lambda param1, param2: param1==4 and param2 in [1.0, 2.0]\n >>> iterator = traj.f_find_idx(['groupA.param1', 'groupA.param2'], predicate)\n >>> [x for x in iterator]\n [0, 2, 17, 36]"}
{"_id": "q_11478", "text": "Can be used to manually allow running of an experiment without using an environment.\n\n :param run_name_or_idx:\n\n Can manually set a trajectory to a particular run. If `None` the current run\n the trajectory is set to is used.\n\n :param turn_into_run:\n\n Turns the trajectory into a run, i.e. reduces functionality but makes storing\n more efficient."}
{"_id": "q_11479", "text": "Sets the start timestamp and formatted time to the current time."}
{"_id": "q_11480", "text": "Returns a dictionary containing either all parameters, all explored parameters,\n all config, all derived parameters, or all results.\n\n :param param_dict: The dictionary which is about to be returned\n :param fast_access: Whether to use fast access\n :param copy: If the original dict should be returned or a shallow copy\n\n :return: The dictionary\n\n :raises: ValueError if `copy=False` and fast_access=True`"}
{"_id": "q_11481", "text": "Called by the environment after storing to perform some rollback operations.\n\n All results and derived parameters created in the current run are removed.\n\n Important for single processing to not blow up the parent trajectory with the results\n of all runs."}
{"_id": "q_11482", "text": "Stores individual items to disk.\n\n This function is useful if you calculated very large results (or large derived parameters)\n during runtime and you want to write these to disk immediately and empty them afterwards\n to free some memory.\n\n Instead of storing individual parameters or results you can also store whole subtrees with\n :func:`~pypet.naturalnaming.NNGroupNode.f_store_child`.\n\n\n You can pass the following arguments to `f_store_items`:\n\n :param iterator:\n\n An iterable containing the parameters or results to store, either their\n names or the instances. You can also pass group instances or names here\n to store the annotations of the groups.\n\n :param non_empties:\n\n Optional keyword argument (boolean),\n if `True` will only store the subset of provided items that are not empty.\n Empty parameters or results found in `iterator` are simply ignored.\n\n :param args: Additional arguments passed to the storage service\n\n :param kwargs:\n\n If you use the standard hdf5 storage service, you can pass the following additional\n keyword argument:\n\n :param overwrite:\n\n List names of parts of your item that should\n be erased and overwritten by the new data in your leaf.\n You can also set `overwrite=True`\n to overwrite all parts.\n\n For instance:\n\n >>> traj.f_add_result('mygroup.myresult', partA=42, partB=44, partC=46)\n >>> traj.f_store()\n >>> traj.mygroup.myresult.partA = 333\n >>> traj.mygroup.myresult.partB = 'I am going to change to a string'\n >>> traj.f_store_item('mygroup.myresult', overwrite=['partA', 'partB'])\n\n Will store `'mygroup.myresult'` to disk again and overwrite the parts\n `'partA'` and `'partB'` with the new values `333` and\n `'I am going to change to a string'`.\n The data stored as `partC` is not changed.\n\n Be aware that you need to specify the names of parts as they were stored\n to HDF5. 
Depending on how your leaf construction works, this may differ\n from the names the data might have in your leaf in the trajectory container.\n\n Note that massive overwriting will fragment and blow up your HDF5 file.\n Try to avoid changing data on disk whenever you can.\n\n :raises:\n\n TypeError:\n\n If the (parent) trajectory has never been stored to disk. In this case\n use :func:`pypet.trajectory.f_store` first.\n\n ValueError: If no item could be found to be stored.\n\n Note if you use the standard hdf5 storage service, there are no additional arguments\n or keyword arguments to pass!"}
{"_id": "q_11483", "text": "Loads parameters and results specified in `iterator`.\n\n You can directly list the Parameter objects or just their names.\n\n If names are given the `~pypet.naturalnaming.NNGroupNode.f_get` method is applied to find the\n parameters or results in the trajectory. Accordingly, the parameters and results\n you want to load must already exist in your trajectory (in RAM), probably they\n are just empty skeletons waiting desperately to handle data.\n If they do not exist in RAM yet, but have been stored to disk before,\n you can call :func:`~pypet.trajectory.Trajectory.f_load_skeleton` in order to\n bring your trajectory tree skeleton up to date. In case of a single run you can\n use the :func:`~pypet.naturalnaming.NNGroupNode.f_load_child` method to recursively\n load a subtree without any data.\n Then you can load the data of individual results or parameters one by one.\n\n If want to load the whole trajectory at once or ALL results and parameters that are\n still empty take a look at :func:`~pypet.trajectory.Trajectory.f_load`.\n As mentioned before, to load subtrees of your trajectory you might want to check out\n :func:`~pypet.naturalnaming.NNGroupNode.f_load_child`.\n\n To load a list of parameters or results with `f_load_items` you can pass\n the following arguments:\n\n :param iterator: A list with parameters or results to be loaded.\n\n :param only_empties:\n\n Optional keyword argument (boolean),\n if `True` only empty parameters or results are passed\n to the storage service to get loaded. 
Non-empty parameters or results found in\n `iterator` are simply ignored.\n\n :param args: Additional arguments directly passed to the storage service\n\n :param kwargs:\n\n Additional keyword arguments directly passed to the storage service\n (except the kwarg `only_empties`)\n\n If you use the standard hdf5 storage service, you can pass the following additional\n keyword arguments:\n\n :param load_only:\n\n If you load a result, you can partially load it and ignore the rest of data items.\n Just specify the name of the data you want to load. You can also provide a list,\n for example `load_only='spikes'`, `load_only=['spikes','membrane_potential']`.\n\n Be aware that you need to specify the names of parts as they were stored\n to HDF5. Depending on how your leaf construction works, this may differ\n from the names the data might have in your leaf in the trajectory container.\n\n A warning is issued if data specified in `load_only` cannot be found in the\n instances specified in `iterator`.\n\n :param load_except:\n\n Analogous to the above, but everything is loaded except names or parts\n specified in `load_except`.\n You cannot use `load_only` and `load_except` at the same time. If you do\n a ValueError is thrown.\n\n A warning is issued if names listed in `load_except` are not part of the\n items to load."}
{"_id": "q_11484", "text": "Removes parameters, results or groups from the trajectory.\n\n This function ONLY removes items from your current trajectory and does not delete\n data stored to disk. If you want to delete data from disk, take a look at\n :func:`~pypet.trajectory.Trajectory.f_delete_items`.\n\n This will also remove all links if items are linked.\n\n :param iterator:\n\n A sequence of items you want to remove. Either the instances themselves\n or strings with the names of the items.\n\n :param recursive:\n\n In case you want to remove group nodes, if the children should be removed, too."}
{"_id": "q_11485", "text": "Recursively removes all children of the trajectory\n\n :param recursive:\n\n Only here for consistency with signature of parent method. Cannot be set\n to `False` because the trajectory root node cannot be removed.\n\n :param predicate:\n\n Predicate which can evaluate for each node to ``True`` in order to remove the node or\n ``False`` if the node should be kept. Leave ``None`` if you want to remove all nodes."}
{"_id": "q_11486", "text": "Deletes items from storage on disk.\n\n Per default the item is NOT removed from the trajectory.\n\n Links are NOT deleted on the hard disk, please delete links manually before deleting\n data!\n\n :param iterator:\n\n A sequence of items you want to remove. Either the instances themselves\n or strings with the names of the items.\n\n :param remove_from_trajectory:\n\n If items should also be removed from trajectory. Default is `False`.\n\n\n :param args: Additional arguments passed to the storage service\n\n :param kwargs: Additional keyword arguments passed to the storage service\n\n If you use the standard hdf5 storage service, you can pass the following additional\n keyword argument:\n\n :param delete_only:\n\n You can partially delete leaf nodes. Specify a list of parts of the result node\n that should be deleted like `delete_only=['mystuff','otherstuff']`.\n This wil only delete the hdf5 sub parts `mystuff` and `otherstuff` from disk.\n BE CAREFUL,\n erasing data partly happens at your own risk. Depending on how complex the\n loading process of your result node is, you might not be able to reconstruct\n any data due to partially deleting some of it.\n\n Be aware that you need to specify the names of parts as they were stored\n to HDF5. Depending on how your leaf construction works, this may differ\n from the names the data might have in your leaf in the trajectory container.\n\n If the hdf5 nodes you specified in `delete_only` cannot be found a warning\n is issued.\n\n Note that massive deletion will fragment your HDF5 file.\n Try to avoid changing data on disk whenever you can.\n\n If you want to erase a full node, simply ignore this argument or set to `None`.\n\n :param remove_from_item:\n\n If data that you want to delete from storage should also be removed from\n the items in `iterator` if they contain these. 
Default is `False`.\n\n :param recursive:\n\n If you want to delete a group node and it has children you need to\n set `recursive` to `True. Default is `False`."}
{"_id": "q_11487", "text": "A class to replace the strftime in datetime package or time module.\n\tIdentical to strftime behavior in those modules except supports any\n\tyear.\n\tAlso supports datetime.datetime times.\n\tAlso supports milliseconds using %s\n\tAlso supports microseconds using %u"}
{"_id": "q_11488", "text": "A function to replace strptime in the time module. Should behave\n\tidentically to the strptime function except it returns a datetime.datetime\n\tobject instead of a time.struct_time object.\n\tAlso takes an optional tzinfo parameter which is a time zone info object."}
{"_id": "q_11489", "text": "Returns the nearest year to now inferred from a Julian date."}
{"_id": "q_11490", "text": "return the number of seconds in the specified period\n\n\t>>> get_period_seconds('day')\n\t86400\n\t>>> get_period_seconds(86400)\n\t86400\n\t>>> get_period_seconds(datetime.timedelta(hours=24))\n\t86400\n\t>>> get_period_seconds('day + os.system(\"rm -Rf *\")')\n\tTraceback (most recent call last):\n\t...\n\tValueError: period not in (second, minute, hour, day, month, year)"}
{"_id": "q_11491", "text": "Divide a timedelta by a float value\n\n\t>>> one_day = datetime.timedelta(days=1)\n\t>>> half_day = datetime.timedelta(days=.5)\n\t>>> divide_timedelta_float(one_day, 2.0) == half_day\n\tTrue\n\t>>> divide_timedelta_float(one_day, 2) == half_day\n\tTrue"}
{"_id": "q_11492", "text": "Take a string representing a span of time and parse it to a time delta.\n\tAccepts any string of comma-separated numbers each with a unit indicator.\n\n\t>>> parse_timedelta('1 day')\n\tdatetime.timedelta(days=1)\n\n\t>>> parse_timedelta('1 day, 30 seconds')\n\tdatetime.timedelta(days=1, seconds=30)\n\n\t>>> parse_timedelta('47.32 days, 20 minutes, 15.4 milliseconds')\n\tdatetime.timedelta(days=47, seconds=28848, microseconds=15400)\n\n\tSupports weeks, months, years\n\n\t>>> parse_timedelta('1 week')\n\tdatetime.timedelta(days=7)\n\n\t>>> parse_timedelta('1 year, 1 month')\n\tdatetime.timedelta(days=395, seconds=58685)\n\n\tNote that months and years strict intervals, not aligned\n\tto a calendar:\n\n\t>>> now = datetime.datetime.now()\n\t>>> later = now + parse_timedelta('1 year')\n\t>>> diff = later.replace(year=now.year) - now\n\t>>> diff.seconds\n\t20940"}
{"_id": "q_11493", "text": "Get the ratio of two timedeltas\n\n\t>>> one_day = datetime.timedelta(days=1)\n\t>>> one_hour = datetime.timedelta(hours=1)\n\t>>> divide_timedelta(one_hour, one_day) == 1 / 24\n\tTrue"}
{"_id": "q_11494", "text": "Construct a datetime.datetime from a number of different time\n\t\ttypes found in python and pythonwin"}
{"_id": "q_11495", "text": "Starts a pool single run and passes the storage service"}
{"_id": "q_11496", "text": "Single run wrapper for the frozen pool, makes a single run and passes kwargs"}
{"_id": "q_11497", "text": "Configures the pool and keeps the storage service"}
{"_id": "q_11498", "text": "Configures the frozen pool and keeps all kwargs"}
{"_id": "q_11499", "text": "Wrapper function that configures a frozen SCOOP set up.\n\n Deletes of data if necessary."}
{"_id": "q_11500", "text": "Starts running a queue handler and creates a log file for the queue."}
{"_id": "q_11501", "text": "String summary of the value handled by the parameter.\n\n Note that representing the parameter as a string accesses its value,\n but for simpler debugging, this does not lock the parameter or counts as usage!\n\n Calls `__repr__` of the contained value."}
{"_id": "q_11502", "text": "Explores the parameter according to the iterable.\n\n Raises ParameterLockedException if the parameter is locked.\n Raises TypeError if the parameter does not support the data,\n the types of the data in the iterable are not the same as the type of the default value,\n or the parameter has already an exploration range.\n\n Note that the parameter will iterate over the whole iterable once and store\n the individual data values into a tuple. Thus, the whole exploration range is\n explicitly stored in memory.\n\n :param explore_iterable: An iterable specifying the exploration range\n\n For example:\n\n >>> param._explore([3.0,2.0,1.0])\n >>> param.f_get_range()\n (3.0, 2.0, 1.0)\n\n :raises TypeError,ParameterLockedException"}
{"_id": "q_11503", "text": "Explores the parameter according to the iterable and appends to the exploration range.\n\n Raises ParameterLockedException if the parameter is locked.\n Raises TypeError if the parameter does not support the data,\n the types of the data in the iterable are not the same as the type of the default value,\n or the parameter did not have an array before.\n\n Note that the parameter will iterate over the whole iterable once and store\n the individual data values into a tuple. Thus, the whole exploration range is\n explicitly stored in memory.\n\n :param explore_iterable: An iterable specifying the exploration range\n\n For example:\n\n >>> param = Parameter('Im.an.example', data=33.33, comment='Wooohoo!')\n >>> param._explore([3.0,2.0,1.0])\n >>> param._expand([42.0, 43.42])\n >>> param.f_get_range()\n >>> [3.0, 2.0, 1.0, 42.0, 43.42]\n\n :raises TypeError, ParameterLockedException"}
{"_id": "q_11504", "text": "Loads the data and exploration range from the `load_dict`.\n\n The `load_dict` needs to be in the same format as the result of the\n :func:`~pypet.parameter.Parameter._store` method."}
{"_id": "q_11505", "text": "Reconstructs the data and exploration array.\n\n Checks if it can find the array identifier in the `load_dict`, i.e. '__rr__'.\n If not calls :class:`~pypet.parameter.Parameter._load` of the parent class.\n\n If the parameter is explored, the exploration range of arrays is reconstructed\n as it was stored in :func:`~pypet.parameter.ArrayParameter._store`."}
{"_id": "q_11506", "text": "Matrices are equal if they hash to the same value."}
{"_id": "q_11507", "text": "Extracts data from a sparse matrix to make it serializable in a human readable format.\n\n :return: Tuple with following elements:\n\n 1.\n\n A list containing data that is necessary to reconstruct the matrix.\n For csr, csc, and bsr matrices the following attributes are extracted:\n `format`, `data`, `indices`, `indptr`, `shape`.\n Where format is simply one of the strings 'csr', 'csc', or 'bsr'.\n\n For dia matrices the following attributes are extracted:\n `format`, `data`, `offsets`, `shape`.\n Where `format` is simply the string 'dia'.\n\n 2.\n\n A list containing the names of the extracted attributes.\n For csr, csc, and bsr:\n\n [`format`, `data`, `indices`, `indptr`, `shape`]\n\n For dia:\n\n [`format`, `data`, `offsets`, `shape`]\n\n 3.\n\n A tuple containing the hashable parts of (1) in order to use the tuple as\n a key for a dictionary. Accordingly, the numpy arrays of (1) are\n changed to read-only."}
{"_id": "q_11508", "text": "Reconstructs a matrix from a list containing sparse matrix extracted properties\n\n `data_list` needs to be formatted as the first result of\n :func:`~pypet.parameter.SparseParameter._serialize_matrix`"}
{"_id": "q_11509", "text": "Reconstructs the data and exploration array\n\n Checks if it can find the array identifier in the `load_dict`, i.e. '__spsp__'.\n If not, calls :class:`~pypet.parameter.ArrayParameter._load` of the parent class.\n\n If the parameter is explored, the exploration range of matrices is reconstructed\n as it was stored in :func:`~pypet.parameter.SparseParameter._store`."}
{"_id": "q_11510", "text": "Returns a dictionary for storage.\n\n Every element in the dictionary except for 'explored_data' is a pickle dump.\n\n Reusage of objects is identified over the object id, i.e. python's built-in id function.\n\n 'explored_data' contains the references to the objects to be able to recall the\n order of objects later on."}
{"_id": "q_11511", "text": "Reconstructs objects from the pickle dumps in `load_dict`.\n\n The 'explored_data' entry in `load_dict` is used to reconstruct\n the exploration range in the correct order.\n\n Sets the `v_protocol` property to the protocol used to store 'data'."}
{"_id": "q_11512", "text": "Translates integer indices into the appropriate names"}
{"_id": "q_11513", "text": "Returns all handled data as a dictionary.\n\n :param copy:\n\n Whether the original dictionary or a shallow copy is returned.\n\n :return: Data dictionary"}
{"_id": "q_11514", "text": "Method to put data into the result.\n\n :param args:\n\n The first positional argument is stored with the name of the result.\n Following arguments are stored with `name_X` where `X` is the position\n of the argument.\n\n :param kwargs: Arguments are stored with the key as name.\n\n :raises: TypeError if outer data structure is not understood.\n\n Example usage:\n\n >>> res = Result('supergroup.subgroup.myresult', comment='I am a neat example!')\n >>> res.f_set(333,42.0, mystring='String!')\n >>> res.f_get('myresult')\n 333\n >>> res.f_get('myresult_1')\n 42.0\n >>> res.f_get(1)\n 42.0\n >>> res.f_get('mystring')\n 'String!'"}
{"_id": "q_11515", "text": "Sets a single data item of the result.\n\n Raises TypeError if the type of the outer data structure is not understood.\n Note that the type check is shallow. For example, if the data item is a list,\n the individual list elements are NOT checked whether their types are appropriate.\n\n :param name: The name of the data item\n\n :param item: The data item\n\n :raises: TypeError\n\n Example usage:\n\n >>> res.f_set_single('answer', 42)\n >>> res.f_get('answer')\n 42"}
{"_id": "q_11516", "text": "Supports everything of parent class and csr, csc, bsr, and dia sparse matrices."}
{"_id": "q_11517", "text": "Returns a storage dictionary understood by the storage service.\n\n Sparse matrices are extracted similar to the :class:`~pypet.parameter.SparseParameter` and\n marked with the identifier `__spsp__`."}
{"_id": "q_11518", "text": "Loads data from `load_dict`\n\n Reconstruction of sparse matrices similar to the :class:`~pypet.parameter.SparseParameter`."}
{"_id": "q_11519", "text": "Simply merge all trajectories in the working directory"}
{"_id": "q_11520", "text": "Creates and returns a new SAGA session"}
{"_id": "q_11521", "text": "Starts all jobs and runs `the_task.py` in batches."}
{"_id": "q_11522", "text": "Sophisticated simulation of multiplication"}
{"_id": "q_11523", "text": "Runs a simulation of a model neuron.\n\n :param traj:\n\n Container with all parameters.\n\n :return:\n\n An estimate of the firing rate of the neuron"}
{"_id": "q_11524", "text": "Postprocessing, sorts computed firing rates into a table\n\n :param traj:\n\n Container for results and parameters\n\n :param result_list:\n\n List of tuples, where first entry is the run index and second is the actual\n result of the corresponding run.\n\n :return:"}
{"_id": "q_11525", "text": "Extracts subruns from the trajectory.\n\n :param traj: Trajectory container\n\n :param pre_run: Boolean whether current run is regular or a pre-run\n\n :raises: RuntimeError if orders are duplicates or even missing"}
{"_id": "q_11526", "text": "Generic `execute_network_run` function, handles experimental runs as well as pre-runs.\n\n See also :func:`~pypet.brian2.network.NetworkRunner.execute_network_run` and\n :func:`~pypet.brian2.network.NetworkRunner.execute_network_pre_run`."}
{"_id": "q_11527", "text": "Top-level simulation function, pass this to the environment\n\n Performs an individual network run during parameter exploration.\n\n `run_network` does not need to be called by the user. If this\n method (not this one of the NetworkManager)\n is passed to an :class:`~pypet.environment.Environment` with this NetworkManager,\n `run_network` and :func:`~pypet.brian2.network.NetworkManager.build`\n are automatically called for each individual experimental run.\n\n This function will create a new BRIAN2 network in case one was not pre-run.\n The execution of the network run is carried out by\n the :class:`~pypet.brian2.network.NetworkRunner` and it's\n :func:`~pypet.brian2.network.NetworkRunner.execute_network_run` (also take\n a look at this function's documentation to see the structure of a network run).\n\n :param traj: Trajectory container"}
{"_id": "q_11528", "text": "Starts a single run carried out by a NetworkRunner.\n\n Called from the public function :func:`~pypet.brian2.network.NetworkManger.run_network`.\n\n :param traj: Trajectory container"}
{"_id": "q_11529", "text": "Function to create generic filenames based on what has been explored"}
{"_id": "q_11530", "text": "Merges all files in a given folder.\n\n IMPORTANT: Does not check if there are more than 1 trajectory in a file. Always\n uses the last trajectory in file and ignores the other ones.\n\n Trajectories are merged according to the alphabetical order of the files,\n i.e. the resulting merged trajectory is found in the first file\n (according to lexicographic ordering).\n\n :param folder: folder (not recursive) where to look for files\n :param ext: only files with the given extension are used\n :param dynamic_imports: Dynamic imports for loading\n :param storage_service: storage service to use, leave `None` to use the default one\n :param force: If loading should be forced.\n :param delete_other_files: Deletes files of merged trajectories\n\n All other parameters as in `f_merge_many` of the trajectory.\n\n :return: The merged traj"}
{"_id": "q_11531", "text": "Handler of SIGINT\n\n Does nothing if SIGINT is encountered once but raises a KeyboardInterrupt in case it\n is encountered twice.\n immediatly."}
{"_id": "q_11532", "text": "Small configuration file management function"}
{"_id": "q_11533", "text": "Method to request API tokens from ecobee"}
{"_id": "q_11534", "text": "Write api tokens to a file"}
{"_id": "q_11535", "text": "The minimum time, in minutes, to run the fan each hour. Value from 1 to 60"}
{"_id": "q_11536", "text": "Set a climate hold - ie away, home, sleep"}
{"_id": "q_11537", "text": "Delete the vacation with name vacation"}
{"_id": "q_11538", "text": "Resume currently scheduled program"}
{"_id": "q_11539", "text": "Set humidity level"}
{"_id": "q_11540", "text": "Generate the time in seconds in which DHCPDISCOVER wil be retransmited.\n\n [:rfc:`2131#section-3.1`]::\n\n might retransmit the\n DHCPREQUEST message four times, for a total delay of 60 seconds\n\n [:rfc:`2131#section-4.1`]::\n\n For example, in a 10Mb/sec Ethernet\n internetwork, the delay before the first retransmission SHOULD be 4\n seconds randomized by the value of a uniform random number chosen\n from the range -1 to +1. Clients with clocks that provide resolution\n granularity of less than one second may choose a non-integer\n randomization value. The delay before the next retransmission SHOULD\n be 8 seconds randomized by the value of a uniform number chosen from\n the range -1 to +1. The retransmission delay SHOULD be doubled with\n subsequent retransmissions up to a maximum of 64 seconds."}
{"_id": "q_11541", "text": "Generate time in seconds to retransmit DHCPREQUEST.\n\n [:rfc:`2131#section-4..4.5`]::\n\n In both RENEWING and REBINDING states,\n if the client receives no response to its DHCPREQUEST\n message, the client SHOULD wait one-half of the remaining\n time until T2 (in RENEWING state) and one-half of the\n remaining lease time (in REBINDING state), down to a\n minimum of 60 seconds, before retransmitting the\n DHCPREQUEST message."}
{"_id": "q_11542", "text": "Return the self object attributes not inherited as dict."}
{"_id": "q_11543", "text": "Reset object attributes when state is INIT."}
{"_id": "q_11544", "text": "Workaround to get timeout in the ATMT.timeout class method."}
{"_id": "q_11545", "text": "Workaround to change timeout values in the ATMT.timeout class method.\n\n self.timeout format is::\n\n {'STATE': [\n (TIMEOUT0, <function foo>),\n (TIMEOUT1, <function bar>)),\n (None, None)\n ],\n }"}
{"_id": "q_11546", "text": "Select an offer from the offers received.\n\n [:rfc:`2131#section-4.2`]::\n\n DHCP clients are free to use any strategy in selecting a DHCP\n server among those from which the client receives a DHCPOFFER.\n\n [:rfc:`2131#section-4.4.1`]::\n\n The time\n over which the client collects messages and the mechanism used to\n select one DHCPOFFER are implementation dependent.\n\n Nor [:rfc:`7844`] nor [:rfc:`2131`] specify the algorithm.\n Here, currently the first offer is selected.\n\n .. todo::\n\n - Check other implementations algorithm to select offer."}
{"_id": "q_11547", "text": "Send request.\n\n [:rfc:`2131#section-3.1`]::\n\n a client retransmitting as described in section 4.1 might retransmit\n the DHCPREQUEST message four times, for a total delay of 60 seconds\n\n .. todo::\n\n - The maximum number of retransmitted REQUESTs is per state or in\n total?\n - Are the retransmitted REQUESTs independent to the retransmitted\n DISCOVERs?"}
{"_id": "q_11548", "text": "INIT state.\n\n [:rfc:`2131#section-4.4.1`]::\n\n The client SHOULD wait a random time between one and ten\n seconds to desynchronize the use of DHCP at startup\n\n .. todo::\n - The initial delay is implemented, but probably is not in other\n implementations. Check what other implementations do."}
{"_id": "q_11549", "text": "END state."}
{"_id": "q_11550", "text": "ERROR state."}
{"_id": "q_11551", "text": "Timeout of selecting on SELECTING state.\n\n Not specifiyed in [:rfc:`7844`].\n See comments in :func:`dhcpcapfsm.DHCPCAPFSM.timeout_request`."}
{"_id": "q_11552", "text": "Timeout requesting in REQUESTING state.\n\n Not specifiyed in [:rfc:`7844`]\n\n [:rfc:`2131#section-3.1`]::\n\n might retransmit the\n DHCPREQUEST message four times, for a total delay of 60 seconds"}
{"_id": "q_11553", "text": "Timeout of renewing on RENEWING state.\n\n Same comments as in\n :func:`dhcpcapfsm.DHCPCAPFSM.timeout_requesting`."}
{"_id": "q_11554", "text": "Timeout of request rebinding on REBINDING state.\n\n Same comments as in\n :func:`dhcpcapfsm.DHCPCAPFSM.timeout_requesting`."}
{"_id": "q_11555", "text": "Receive offer on SELECTING state."}
{"_id": "q_11556", "text": "Receive ACK in REQUESTING state."}
{"_id": "q_11557", "text": "Receive ACK in RENEWING state."}
{"_id": "q_11558", "text": "Receive NAK in RENEWING state."}
{"_id": "q_11559", "text": "Assign a value, remove if it's None"}
{"_id": "q_11560", "text": "Append a value to multiple value parameter."}
{"_id": "q_11561", "text": "Remove a value from multiple value parameter."}
{"_id": "q_11562", "text": "Pass in an Overpass query in Overpass QL."}
{"_id": "q_11563", "text": "This attempts to allocate v6 addresses as per RFC2462 and RFC3041.\n\n To accomodate this, we effectively treat all v6 assignment as a\n first time allocation utilizing the MAC address of the VIF. Because\n we recycle MACs, we will eventually attempt to recreate a previously\n generated v6 address. Instead of failing, we've opted to handle\n reallocating that address in this method.\n\n This should provide a performance boost over attempting to check\n each and every subnet in the existing reallocate logic, as we'd\n have to iterate over each and every subnet returned"}
{"_id": "q_11564", "text": "Associates the flip with ports and creates it with the flip driver\n\n :param context: neutron api request context.\n :param flip: quark.db.models.IPAddress object representing a floating IP\n :param port_fixed_ips: dictionary of the structure:\n {\"<id of port>\": {\"port\": <quark.db.models.Port>,\n \"fixed_ip\": \"<fixed ip address>\"}}\n :return: None"}
{"_id": "q_11565", "text": "Update an existing floating IP.\n\n :param context: neutron api request context.\n :param id: id of the floating ip\n :param content: dictionary with keys indicating fields to update.\n valid keys are those that have a value of True for 'allow_put'\n as listed in the RESOURCE_ATTRIBUTE_MAP object in\n neutron/api/v2/attributes.py.\n\n :returns: Dictionary containing details for the new floating IP. If values\n are declared in the fields parameter, then only those keys will be\n present."}
{"_id": "q_11566", "text": "deallocate a floating IP.\n\n :param context: neutron api request context.\n :param id: id of the floating ip"}
{"_id": "q_11567", "text": "Retrieve a list of floating ips.\n\n :param context: neutron api request context.\n :param filters: a dictionary with keys that are valid keys for\n a floating ip as listed in the RESOURCE_ATTRIBUTE_MAP object\n in neutron/api/v2/attributes.py. Values in this dictionary\n are an iterable containing values that will be used for an exact\n match comparison for that value. Each result returned by this\n function will have matched one of the values for each key in\n filters.\n :param fields: a list of strings that are valid keys in a\n floating IP dictionary as listed in the RESOURCE_ATTRIBUTE_MAP\n object in neutron/api/v2/attributes.py. Only these fields\n will be returned.\n\n :returns: List of floating IPs that are accessible to the tenant who\n submits the request (as indicated by the tenant id of the context)\n as well as any filters."}
{"_id": "q_11568", "text": "Return the number of floating IPs.\n\n :param context: neutron api request context\n :param filters: a dictionary with keys that are valid keys for\n a floating IP as listed in the RESOURCE_ATTRIBUTE_MAP object\n in neutron/api/v2/attributes.py. Values in this dictionary\n are an iterable containing values that will be used for an exact\n match comparison for that value. Each result returned by this\n function will have matched one of the values for each key in\n filters.\n\n :returns: The number of floating IPs that are accessible to the tenant who\n submits the request (as indicated by the tenant id of the context)\n as well as any filters.\n\n NOTE: this method is optional, as it was not part of the originally\n defined plugin API."}
{"_id": "q_11569", "text": "Update an existing scaling IP.\n\n :param context: neutron api request context.\n :param id: id of the scaling ip\n :param content: dictionary with keys indicating fields to update.\n valid keys are those that have a value of True for 'allow_put'\n as listed in the RESOURCE_ATTRIBUTE_MAP object in\n neutron/api/v2/attributes.py.\n\n :returns: Dictionary containing details for the new scaling IP. If values\n are declared in the fields parameter, then only those keys will be\n present."}
{"_id": "q_11570", "text": "Splits VIFs into three explicit categories and one implicit\n\n Added - Groups exist in Redis that have not been ack'd and the VIF\n is not tagged.\n Action: Tag the VIF and apply flows\n Updated - Groups exist in Redis that have not been ack'd and the VIF\n is already tagged\n Action: Do not tag the VIF, do apply flows\n Removed - Groups do NOT exist in Redis but the VIF is tagged\n Action: Untag the VIF, apply default flows\n Self-Heal - Groups are ack'd in Redis but the VIF is untagged. We treat\n this case as if it were an \"added\" group.\n Action: Tag the VIF and apply flows\n NOOP - The VIF is not tagged and there are no matching groups in Redis.\n This is our implicit category\n Action: Do nothing"}
{"_id": "q_11571", "text": "Fetches changes and applies them to VIFs periodically\n\n Process as of RM11449:\n * Get all groups from redis\n * Fetch ALL VIFs from Xen\n * Walk ALL VIFs and partition them into added, updated and removed\n * Walk the final \"modified\" VIFs list and apply flows to each"}
{"_id": "q_11572", "text": "Delete the quota entries for a given tenant_id.\n\n Atfer deletion, this tenant will use default quota values in conf."}
{"_id": "q_11573", "text": "Validate the CIDR for a subnet.\n\n Verifies the specified CIDR does not overlap with the ones defined\n for the other subnets specified for this network, or with any other\n CIDR if overlapping IPs are disabled."}
{"_id": "q_11574", "text": "Retrieve a subnet.\n\n : param context: neutron api request context\n : param id: UUID representing the subnet to fetch.\n : param fields: a list of strings that are valid keys in a\n subnet dictionary as listed in the RESOURCE_ATTRIBUTE_MAP\n object in neutron/api/v2/attributes.py. Only these fields\n will be returned."}
{"_id": "q_11575", "text": "Return the number of subnets.\n\n The result depends on the identity of the user making the request\n (as indicated by the context) as well as any filters.\n : param context: neutron api request context\n : param filters: a dictionary with keys that are valid keys for\n a network as listed in the RESOURCE_ATTRIBUTE_MAP object\n in neutron/api/v2/attributes.py. Values in this dictiontary\n are an iterable containing values that will be used for an exact\n match comparison for that value. Each result returned by this\n function will have matched one of the values for each key in\n filters.\n\n NOTE: this method is optional, as it was not part of the originally\n defined plugin API."}
{"_id": "q_11576", "text": "Updates a rule and updates the ports"}
{"_id": "q_11577", "text": "Returns the public net id"}
{"_id": "q_11578", "text": "A decorator to be used on another decorator\n\n This is done to allow separate handling on the basis of argument values"}
{"_id": "q_11579", "text": "Will add the tenant_id to the context from body.\n\n It is assumed that the body must have a tenant_id because neutron\n core could never have gotten here otherwise."}
{"_id": "q_11580", "text": "Validate IP allocation pools.\n\n Verify start and end address for each allocation pool are valid,\n ie: constituted by valid and appropriately ordered IP addresses.\n Also, verify pools do not overlap among themselves.\n Finally, verify that each range fall within the subnet's CIDR."}
{"_id": "q_11581", "text": "Creates a job with support for subjobs.\n\n If parent_id is not in the body:\n * the job is considered a parent job\n * it will have a NULL transaction id\n * its transaction id == its id\n * all subjobs will use its transaction id as theirs\n\n Else:\n * the job is a sub job\n * the parent id is the id passed in\n * the transaction id is the root of the job tree"}
{"_id": "q_11582", "text": "Deconfigure any additional default transport zone bindings."}
{"_id": "q_11583", "text": "Public interface for fetching lswitch ids for a given network.\n\n NOTE(morgabra) This is here because calling private methods\n from outside the class feels wrong, and we need to be able to\n fetch lswitch ids for use in other drivers."}
{"_id": "q_11584", "text": "Looks for modules with amtching entry points."}
{"_id": "q_11585", "text": "Chunks data into chunk with size<=chunk_size."}
{"_id": "q_11586", "text": "Find a deallocated network segment id and reallocate it.\n\n NOTE(morgabra) This locks the segment table, but only the rows\n in use by the segment, which is pretty handy if we ever have\n more than 1 segment or segment type."}
{"_id": "q_11587", "text": "Deletes locks for each IP address that is no longer null-routed."}
{"_id": "q_11588", "text": "Creates locks for each IP address that is null-routed.\n\n The function creates the IP address if it is not present in the database."}
{"_id": "q_11589", "text": "Return relevant IPAM strategy name.\n\n :param network_id: neutron network id.\n :param network_strategy: default strategy for the network.\n\n NOTE(morgabra) This feels like a hack but I can't think of a better\n idea. The root problem is we can now attach ports to networks with\n a different backend driver/ipam strategy than the network speficies.\n\n We handle the the backend driver part with allowing network_plugin to\n be specified for port objects. This works pretty well because nova or\n whatever knows when we are hooking up an Ironic node so it can pass\n along that key during port_create().\n\n IPAM is a little trickier, especially in Ironic's case, because we\n *must* use a specific IPAM for provider networks. There isn't really\n much of an option other than involve the backend driver when selecting\n the IPAM strategy."}
{"_id": "q_11590", "text": "Return a dict of extra network information.\n\n :param context: neutron request context.\n :param network_id: neturon network id.\n :param net_driver: network driver associated with network_id.\n :raises IronicException: Any unexpected data fetching failures will\n be logged and IronicException raised.\n\n This driver can attach to networks managed by other drivers. We may\n need some information from these drivers, or otherwise inform\n downstream about the type of network we are attaching to. We can\n make these decisions here."}
{"_id": "q_11591", "text": "Set tag on model object."}
{"_id": "q_11592", "text": "Get a matching valid tag off the model."}
{"_id": "q_11593", "text": "Pop all matching tags off the model and return them."}
{"_id": "q_11594", "text": "Get all known tags from a model.\n\n Returns a dict of {<tag_name>:<tag_value>}."}
{"_id": "q_11595", "text": "Gets security groups for interfaces from Redis\n\n Returns a dictionary of xapi.VIFs with values of the current\n acknowledged status in Redis.\n\n States not explicitly handled:\n * ack key, no rules - This is the same as just tagging the VIF,\n the instance will be inaccessible\n * rules key, no ack - Nothing will happen, the VIF will\n not be tagged."}
{"_id": "q_11596", "text": "Updates security groups by setting the ack field"}
{"_id": "q_11597", "text": "Run migrations in 'offline' mode.\n\n This configures the context with just a URL\n and not an Engine, though an Engine is acceptable\n here as well. By skipping the Engine creation\n we don't even need a DBAPI to be available.\n\n Calls to context.execute() here emit the given string to the\n script output."}
{"_id": "q_11598", "text": "Run migrations in 'online' mode.\n\n In this scenario we need to create an Engine\n and associate a connection with the context."}
{"_id": "q_11599", "text": "Generic Notifier.\n\n Parameters:\n - `context`: session context\n - `event_type`: the event type to report, i.e. ip.usage\n - `payload`: dict containing the payload to send"}
{"_id": "q_11600", "text": "Method to send notifications.\n\n We must send USAGE when a public IPv4 address is deallocated or a FLIP is\n associated.\n Parameters:\n - `context`: the context for notifier\n - `event_type`: the event type for IP allocate, deallocate, associate,\n disassociate\n - `ipaddress`: the ipaddress object to notify about\n Returns:\n nothing\n Notes: this may live in the billing module"}
{"_id": "q_11601", "text": "Returns a tuple of start_period and end_period.\n\n Assumes that the period is 24-hrs.\n Parameters:\n - `hour`: the hour from 0 to 23 when the period ends\n - `minute`: the minute from 0 to 59 when the period ends\n This method will calculate the end of the period as the closest hour/minute\n going backwards.\n It will also calculate the start of the period as the passed hour/minute\n but 24 hrs ago.\n Example, if we pass 0, 0 - we will get the events from 0:00 midnight of the\n day before yesterday until today's midnight.\n If we pass 2,0 - we will get the start time as 2am of the previous morning\n till 2am of today's morning.\n By default it's midnight."}
{"_id": "q_11602", "text": "Creates the view for a job while calculating progress.\n\n Since a root job does not have a transaction id (TID) it will return its\n id as the TID."}
{"_id": "q_11603", "text": "Retrieve a mac_address_range.\n\n : param context: neutron api request context\n : param id: UUID representing the network to fetch.\n : param fields: a list of strings that are valid keys in a\n network dictionary as listed in the RESOURCE_ATTRIBUTE_MAP\n object in neutron/api/v2/attributes.py. Only these fields\n will be returned."}
{"_id": "q_11604", "text": "Delete a segment_allocation_range.\n\n : param context: neutron api request context\n : param id: UUID representing the segment_allocation_range to delete."}
{"_id": "q_11605", "text": "Returns a WSGI filter app for use with paste.deploy."}
{"_id": "q_11606", "text": "Returns dictionary with keys segment_id and value used IPs count.\n\n Used IP address count is determined by:\n - allocated IPs\n - deallocated IPs whose `deallocated_at` is within the `reuse_after`\n window compared to the present time, excluding IPs that are accounted for\n in the current IP policy (because IP policy is mutable and deallocated IPs\n are not checked nor deleted on IP policy creation, thus deallocated IPs\n that don't fit the current IP policy can exist in the neutron database)."}
{"_id": "q_11607", "text": "Returns dictionary with key segment_id, and value unused IPs count.\n\n Unused IP address count is determined by:\n - adding subnet's cidr's size\n - subtracting IP policy exclusions on subnet\n - subtracting used ips per segment"}
{"_id": "q_11608", "text": "Handles changes to interfaces' security groups\n\n Calls refresh_interfaces on argument VIFs. Set security groups on\n added_sg's VIFs. Unsets security groups on removed_sg's VIFs."}
{"_id": "q_11609", "text": "Retrieve a network.\n\n : param context: neutron api request context\n : param id: UUID representing the network to fetch.\n : param fields: a list of strings that are valid keys in a\n network dictionary as listed in the RESOURCE_ATTRIBUTE_MAP\n object in neutron/api/v2/attributes.py. Only these fields\n will be returned."}
{"_id": "q_11610", "text": "Return the number of networks.\n\n The result depends on the identity of the user making the request\n (as indicated by the context) as well as any filters.\n : param context: neutron api request context\n : param filters: a dictionary with keys that are valid keys for\n a network as listed in the RESOURCE_ATTRIBUTE_MAP object\n in neutron/api/v2/attributes.py. Values in this dictiontary\n are an iterable containing values that will be used for an exact\n match comparison for that value. Each result returned by this\n function will have matched one of the values for each key in\n filters.\n\n NOTE: this method is optional, as it was not part of the originally\n defined plugin API."}
{"_id": "q_11611", "text": "Delete a network.\n\n : param context: neutron api request context\n : param id: UUID representing the network to delete."}
{"_id": "q_11612", "text": "This is a helper method for testing.\n\n When run with the current context, it will create a case 2 entries\n in the database. See top of file for what case 2 is."}
{"_id": "q_11613", "text": "Runs billing report. Optionally sends notifications to billing"}
{"_id": "q_11614", "text": "Provides an admin context for workers."}
{"_id": "q_11615", "text": "Begins the async update process."}
{"_id": "q_11616", "text": "Updates the ports through redis."}
{"_id": "q_11617", "text": "Gather all ports associated to security group.\n\n Returns:\n * list, or None"}
{"_id": "q_11618", "text": "Updates a security group rule.\n\n NOTE(alexm) this is non-standard functionality."}
{"_id": "q_11619", "text": "Sends the given command to Niko Home Control and returns the output of\n the system.\n\n Aliases: write, put, sendall, send_all"}
{"_id": "q_11620", "text": "Implements the 'if' operator with support for multiple elseif-s."}
{"_id": "q_11621", "text": "Implements the '==' operator, which does JS-style type coercion."}
{"_id": "q_11622", "text": "Implements the '<' operator with JS-style type coercion."}
{"_id": "q_11623", "text": "Also, converts either to ints or to floats."}
{"_id": "q_11624", "text": "Implements the 'merge' operator for merging lists."}
{"_id": "q_11625", "text": "Gets variable value from data dictionary."}
{"_id": "q_11626", "text": "Implements the missing operator for finding missing variables."}
{"_id": "q_11627", "text": "Implements the missing_some operator for finding missing variables."}
{"_id": "q_11628", "text": "Executes the json-logic with given data."}
{"_id": "q_11629", "text": "Performs an indentation"}
{"_id": "q_11630", "text": "Performs an un-indentation"}
{"_id": "q_11631", "text": "Handle indent between symbols such as parenthesis, braces,..."}
{"_id": "q_11632", "text": "Extends mouseMoveEvent to display a pointing hand cursor when the\n mouse cursor is over a file location"}
{"_id": "q_11633", "text": "Emits open_file_requested if the press event occurred over\n a file location string."}
{"_id": "q_11634", "text": "Connects slots to signals"}
{"_id": "q_11635", "text": "Setup the python editor, run the server and connect a few signals.\n\n :param editor: editor to setup."}
{"_id": "q_11636", "text": "Creates a new GenericCodeEdit, opens the requested file and adds it\n to the tab widget.\n\n :param path: Path of the file to open\n\n :return The opened editor if open succeeded."}
{"_id": "q_11637", "text": "Add a new empty code editor to the tab widget"}
{"_id": "q_11638", "text": "Save the current editor document as."}
{"_id": "q_11639", "text": "Update action states when the current tab changed."}
{"_id": "q_11640", "text": "Run the current script"}
{"_id": "q_11641", "text": "Worker that returns a list of calltips.\n\n A calltip is a tuple made of the following parts:\n - module_name: name of the module of the function invoked\n - call_name: name of the function that is being called\n - params: the list of parameter names.\n - index: index of the current parameter\n - bracket_start\n\n :returns tuple(module_name, call_name, params)"}
{"_id": "q_11642", "text": "Go to assignments worker."}
{"_id": "q_11643", "text": "Worker that returns the documentation of the symbol under cursor."}
{"_id": "q_11644", "text": "Worker that runs the pep8 tool on the current editor text.\n\n :returns a list of tuples (msg, msg_type, line_number)"}
{"_id": "q_11645", "text": "Completes python code using `jedi`_.\n\n :returns: a list of completions."}
{"_id": "q_11646", "text": "Request a go to assignment.\n\n :param tc: Text cursor which contains the text that we must look for\n its assignment. Can be None to go to the text that is under\n the text cursor.\n :type tc: QtGui.QTextCursor"}
{"_id": "q_11647", "text": "Not performant but works."}
{"_id": "q_11648", "text": "Create variants metadata file.\n\n Variants metadata file helps speed up subsequent reads of the associated\n bgen file.\n\n Parameters\n ----------\n bgen_filepath : str\n Bgen file path.\n metafile_file : str\n Metafile file path.\n verbose : bool\n ``True`` to show progress; ``False`` otherwise.\n\n Examples\n --------\n .. doctest::\n\n >>> import os\n >>> from bgen_reader import create_metafile, example_files\n >>>\n >>> with example_files(\"example.32bits.bgen\") as filepath:\n ... folder = os.path.dirname(filepath)\n ... metafile_filepath = os.path.join(folder, filepath + \".metadata\")\n ...\n ... try:\n ... create_metafile(filepath, metafile_filepath, verbose=False)\n ... finally:\n ... if os.path.exists(metafile_filepath):\n ... os.remove(metafile_filepath)"}
{"_id": "q_11649", "text": "Search through lines for match.\n Raise an Exception if a match"}
{"_id": "q_11650", "text": "Return true if ``instance`` is an instance of any of the Directive\n types in ``typeList``"}
{"_id": "q_11651", "text": "Get programs statuses.\n\n :param options: parsed commandline arguments.\n :type options: optparse.Values.\n :return: supervisord XML-RPC call result.\n :rtype: dict."}
{"_id": "q_11652", "text": "Create Nagios and human readable supervisord statuses.\n\n :param data: supervisord XML-RPC call result.\n :type data: dict.\n :param options: parsed commandline arguments.\n :type options: optparse.Values.\n :return: Nagios and human readable supervisord statuses and exit code.\n :rtype: (str, int)."}
{"_id": "q_11653", "text": "Compute allele frequency from its expectation.\n\n Parameters\n ----------\n expec : array_like\n Allele expectations encoded as a samples-by-alleles matrix.\n\n Returns\n -------\n :class:`numpy.ndarray`\n Allele frequencies encoded as a variants-by-alleles matrix.\n\n Examples\n --------\n .. doctest::\n\n >>> from bgen_reader import read_bgen, example_files\n >>> from bgen_reader import allele_expectation, allele_frequency\n >>>\n >>> # Download an example\n >>> example = example_files(\"example.32bits.bgen\")\n >>> filepath = example.filepath\n >>>\n >>> bgen = read_bgen(filepath, verbose=False)\n >>>\n >>> variants = bgen[\"variants\"]\n >>> samples = bgen[\"samples\"]\n >>> genotype = bgen[\"genotype\"]\n >>>\n >>> variant = variants[variants[\"rsid\"] == \"RSID_6\"].compute()\n >>> variant_idx = variant.index.item()\n >>>\n >>> p = genotype[variant_idx].compute()[\"probs\"]\n >>> # For unphased genotypes only.\n >>> e = allele_expectation(bgen, variant_idx)\n >>> f = allele_frequency(e)\n >>>\n >>> alleles = variant[\"allele_ids\"].item().split(\",\")\n >>> print(alleles[0] + \": {}\".format(f[0]))\n A: 229.23103218810434\n >>> print(alleles[1] + \": {}\".format(f[1]))\n G: 270.7689678118956\n >>> print(variant)\n id rsid chrom pos nalleles allele_ids vaddr\n 4 SNPID_6 RSID_6 01 6000 2 A,G 19377\n >>>\n >>> # Clean-up the example\n >>> example.close()"}
{"_id": "q_11654", "text": "Fit distance-based AD.\n\n Parameters\n ----------\n X : array-like or sparse matrix, shape (n_samples, n_features)\n The input samples. Use ``dtype=np.float32`` for maximum\n efficiency.\n\n Returns\n -------\n self : object\n Returns self."}
{"_id": "q_11655", "text": "Learning is to find the inverse matrix for X and calculate the threshold.\n\n Parameters\n ----------\n X : array-like or sparse matrix, shape (n_samples, n_features)\n The input samples. Use ``dtype=np.float32`` for maximum\n efficiency.\n y : array-like, shape = [n_samples] or [n_samples, n_outputs]\n The target values (real numbers in regression).\n\n Returns\n -------\n self : object"}
{"_id": "q_11656", "text": "Predict the distances for X to center of the training set.\n\n Parameters\n ----------\n X : array-like or sparse matrix, shape (n_samples, n_features)\n The input samples. Internally, it will be converted to\n ``dtype=np.float32`` and if a sparse matrix is provided\n to a sparse ``csr_matrix``.\n\n Returns\n -------\n leverages: array of shape = [n_samples]\n The objects distances to center of the training set."}
{"_id": "q_11657", "text": "Find min and max values of every feature.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}, shape (n_samples, n_features)\n The training input samples.\n y : Ignored\n not used, present for API consistency by convention.\n\n Returns\n -------\n self : object"}
{"_id": "q_11658", "text": "Do nothing and return the estimator unchanged\n\n This method is just there to implement the usual API and hence work in pipelines."}
{"_id": "q_11659", "text": "finalize partial fitting procedure"}
{"_id": "q_11660", "text": "Compute the header."}
{"_id": "q_11661", "text": "Return whether this model has a self ref FK, and the name for the field"}
{"_id": "q_11662", "text": "All concrete fields of the ``SyncableModel`` subclass, except for those specifically blacklisted, are returned in a dict."}
{"_id": "q_11663", "text": "Returns an unsaved class object based on the valid properties passed in."}
{"_id": "q_11664", "text": "Returns the default value for this field."}
{"_id": "q_11665", "text": "Should return a 32-digit hex string for a UUID that is calculated as a function of a set of fields from the model."}
{"_id": "q_11666", "text": "Whenever a model is deleted, we record its ID in a separate model for tracking purposes. During serialization, we will mark\n the model as deleted in the store."}
{"_id": "q_11667", "text": "Static method for error handling.\n\n :param resp - API response\n :param error - Error thrown\n :param mode - Error mode\n :param response_format - XML or json"}
{"_id": "q_11668", "text": "Checks the condition in poll response to determine if it is complete\n and no subsequent poll requests should be done."}
{"_id": "q_11669", "text": "We set the lower counter between two same instance ids.\n If an instance_id exists in one fsic but not the other we want to give that counter a value of 0.\n\n :param fsic1: dictionary containing (instance_id, counter) pairs\n :param fsic2: dictionary containing (instance_id, counter) pairs\n :return ``dict`` of fsics to be used in queueing the correct records to the buffer"}
{"_id": "q_11670", "text": "Takes data from the buffers and merges into the store and record max counters."}
{"_id": "q_11671", "text": "SQLite has a limit on the max number of variables allowed for parameter substitution. This limit is usually 999, but\n can be compiled to a different number. This function calculates what the max is for the sqlite version running on the device.\n We use the calculated value to chunk our SQL bulk insert statements when deserializing from the store to the app layer."}
{"_id": "q_11672", "text": "Authenticate the userargs and password against Django auth backends.\n The \"userargs\" string may be just the username, or a querystring-encoded set of params."}
{"_id": "q_11673", "text": "Per profile, adds each model to a dictionary mapping the morango model name to its model class.\n We sort by ForeignKey dependencies to safely sync data."}
{"_id": "q_11674", "text": "Generic request method designed to handle any morango endpoint.\n\n :param endpoint: constant representing which morango endpoint we are querying\n :param method: HTTP verb/method for request\n :param lookup: the pk value for the specific object we are querying\n :param data: dict that will be form-encoded in request\n :param params: dict to be sent as part of URL's query string\n :param userargs: Authorization credentials\n :param password:\n :return: ``Response`` object from request"}
{"_id": "q_11675", "text": "Validate a decoded SNS message.\n\n Parameters:\n message:\n Decoded SNS message.\n\n get_certificate:\n Function that receives a URL, and returns the certificate from that\n URL as a string. The default doesn't implement caching.\n\n certificate_url_regex:\n Regex that validates the signing certificate URL. Default value\n checks it's hosted on an AWS-controlled domain, in the format\n \"https://sns.<data-center>.amazonaws.com/\"\n\n max_age:\n Maximum age of an SNS message before it fails validation, expressed\n as a `datetime.timedelta`. Defaults to one hour, the max. lifetime\n of an SNS message."}
{"_id": "q_11676", "text": "Creates an access token.\n\n TODO: check valid in hours\n TODO: maybe specify how often a token can be used"}
{"_id": "q_11677", "text": "Stores an OWS service in mongodb."}
{"_id": "q_11678", "text": "Lists all services in mongodb storage."}
{"_id": "q_11679", "text": "Gets service for given ``name`` from mongodb storage."}
{"_id": "q_11680", "text": "Gets service for given ``url`` from mongodb storage."}
{"_id": "q_11681", "text": "Delegates owsproxy request to external twitcher service."}
{"_id": "q_11682", "text": "A tween factory which produces a tween which raises an exception\n if access to OWS service is not allowed."}
{"_id": "q_11683", "text": "Read tdms file and return channel names and data"}
{"_id": "q_11684", "text": "From circularity, compute the deformation\n\n This method is useful for RT-DC data sets that contain\n the circularity but not the deformation."}
{"_id": "q_11685", "text": "Creates an fcs file for a given tdms file"}
{"_id": "q_11686", "text": "Store an OWS service in database."}
{"_id": "q_11687", "text": "Lists all services in memory storage."}
{"_id": "q_11688", "text": "Get service for given ``name`` from memory storage."}
{"_id": "q_11689", "text": "Generates a new private key and certificate request, submits the request to be\n signed by the SLCS CA and returns the certificate."}
{"_id": "q_11690", "text": "Get parameter in GET request."}
{"_id": "q_11691", "text": "Find requested request type in POST request."}
{"_id": "q_11692", "text": "Provide a timezone-aware object for a given datetime and timezone name"}
{"_id": "q_11693", "text": "return baseurl of given url"}
{"_id": "q_11694", "text": "Call 'setup egg_info' and return the parsed meta-data."}
{"_id": "q_11695", "text": "Distribute the project."}
{"_id": "q_11696", "text": "Prepare for a release."}
{"_id": "q_11697", "text": "Perform source code checks via pylint."}
{"_id": "q_11698", "text": "Check for uncommitted changes, return `True` if everything is clean.\n\n Inspired by http://stackoverflow.com/questions/3878624/."}
{"_id": "q_11699", "text": "Dump project metadata for Jenkins Description Setter Plugin."}
{"_id": "q_11700", "text": "Run a command and return its stripped captured output."}
{"_id": "q_11701", "text": "Run a command and flush its output."}
{"_id": "q_11702", "text": "Adds a new patch with patchname to the queue\n\n The new patch will be added as the topmost applied patch."}
{"_id": "q_11703", "text": "Factory for the correct SCM provider in `workdir`."}
{"_id": "q_11704", "text": "Delete specified patch from the series\n If remove is True the patch file will also be removed. If remove and\n backup are True a copy of the deleted patch file will be made."}
{"_id": "q_11705", "text": "Checks if a backup file of the filename in the current patch\n exists"}
{"_id": "q_11706", "text": "Creates a backup of file"}
{"_id": "q_11707", "text": "Run command as a subprocess and wait until it is finished.\n\n The command should be given as a list of strings to avoid problems\n with shell quoting. If the command exits with a return code other\n than 0, a SubprocessError is raised."}
{"_id": "q_11708", "text": "Creates the directory and all its parent directories if it does not\n exist yet"}
{"_id": "q_11709", "text": "Create hard link as link to this file"}
{"_id": "q_11710", "text": "Copy file to destination"}
{"_id": "q_11711", "text": "Returns the directory where the file is placed in or None if the\n path to the file doesn't contain a directory"}
{"_id": "q_11712", "text": "Provide a zipped stream of the docs tree."}
{"_id": "q_11713", "text": "Upload to PyPI."}
{"_id": "q_11714", "text": "Upload to WebDAV store."}
{"_id": "q_11715", "text": "A context that enters a given directory and restores the old state on exit.\n\n The original directory is returned as the context variable."}
{"_id": "q_11716", "text": "Backup file in dest_dir Directory.\n The return value is a File object pointing to the copied file in the\n destination directory or None if no file is copied.\n\n If file exists and it is not empty it is copied to dest_dir.\n If file exists and it is empty the file is copied only if copy_empty is\n True.\n If file does not exist and copy_empty is True a new file in dest_dir\n will be created.\n In all other cases no file will be copied and None is returned."}
{"_id": "q_11717", "text": "Refresh patch with patch_name or applied top patch if patch_name is\n None"}
{"_id": "q_11718", "text": "Unapply top patch"}
{"_id": "q_11719", "text": "Unapply all patches"}
{"_id": "q_11720", "text": "Emit a normal message."}
{"_id": "q_11721", "text": "Emit an error message to stderr."}
{"_id": "q_11722", "text": "Get currently used 'devpi' base URL."}
{"_id": "q_11723", "text": "Apply all patches up to patch_name"}
{"_id": "q_11724", "text": "Apply next patch in series file"}
{"_id": "q_11725", "text": "Apply all patches in series file"}
{"_id": "q_11726", "text": "Determine location of `tasks.py`."}
{"_id": "q_11727", "text": "Load and return configuration as a ``Bunch``.\n\n Values are based on ``DEFAULTS``, and metadata from ``setup.py``."}
{"_id": "q_11728", "text": "Convert a path part to regex syntax."}
{"_id": "q_11729", "text": "Generate parts of regex transformed from glob pattern."}
{"_id": "q_11730", "text": "Convert the given glob `spec` to a compiled regex."}
{"_id": "q_11731", "text": "Check patterns in order, last match that includes or excludes `path` wins. Return `None` on undecided."}
{"_id": "q_11732", "text": "Build a DEB package."}
{"_id": "q_11733", "text": "Reads all patches from the series file"}
{"_id": "q_11734", "text": "Saves current patches list in the series file"}
{"_id": "q_11735", "text": "Add a patch to the patches list"}
{"_id": "q_11736", "text": "Add a list of patches to the patches list"}
{"_id": "q_11737", "text": "Returns a list of patches before patch from the patches list\n including the provided patch"}
{"_id": "q_11738", "text": "Replace old_patch with new_patch\n The method only replaces the patch and doesn't change any comments."}
{"_id": "q_11739", "text": "Creates the dirname and inserts a .version file"}
{"_id": "q_11740", "text": "Checks if the .version file in dirname has the correct supported\n version number"}
{"_id": "q_11741", "text": "Build the project."}
{"_id": "q_11742", "text": "Freeze currently installed requirements."}
{"_id": "q_11743", "text": "Adds the argument to an argparse.ArgumentParser instance\n\n @param parser An argparse.ArgumentParser instance"}
{"_id": "q_11744", "text": "Adds this SubParser to the subparsers created by\n argparse.ArgumentParser.add_subparsers method.\n\n @param subparsers Normally a _SubParsersAction instance created by\n argparse.ArgumentParser.add_subparsers method"}
{"_id": "q_11745", "text": "Checks if a backup file of the filename in the current patch\n exists and raises a QuiltError if not."}
{"_id": "q_11746", "text": "Revert not added changes of filename.\n If patch_name is None or empty the topmost patch will be used."}
{"_id": "q_11747", "text": "Return current or given time formatted according to ISO-8601."}
{"_id": "q_11748", "text": "Windows allow application paths to be registered in the registry."}
{"_id": "q_11749", "text": "Return a generator of full paths to the given command.\n\n \"command\" is the name of the executable to search for.\n \"path\" is an optional alternate path list to search. The default is\n to use the PATH environment variable.\n \"verbose\", if true, will cause a 2-tuple to be returned for each\n match. The second element is a textual description of where the\n match was found.\n \"exts\" optionally allows one to specify a list of extensions to use\n instead of the standard list for this system. This can\n effectively be used as an optimization to, for example, avoid\n stat's of \"foo.vbs\" when searching for \"foo\" and you know it is\n not a VisualBasic script but \".vbs\" is on PATHEXT. This option\n is only supported on Windows.\n\n This method returns a generator which yields either full paths to\n the given command or, if verbose, tuples of the form (<path to\n command>, <where path found>)."}
{"_id": "q_11750", "text": "Return the full path to the first match of the given command on\n the path.\n\n \"command\" is the name of the executable to search for.\n \"path\" is an optional alternate path list to search. The default is\n to use the PATH environment variable.\n \"verbose\", if true, will cause a 2-tuple to be returned. The second\n element is a textual description of where the match was found.\n \"exts\" optionally allows one to specify a list of extensions to use\n instead of the standard list for this system. This can\n effectively be used as an optimization to, for example, avoid\n stat's of \"foo.vbs\" when searching for \"foo\" and you know it is\n not a VisualBasic script but \".vbs\" is on PATHEXT. This option\n is only supported on Windows.\n\n If no match is found for the command, a WhichError is raised."}
{"_id": "q_11751", "text": "Import patch into the patch queue\n The patch is inserted as the next unapplied patch."}
{"_id": "q_11752", "text": "Import several patches into the patch queue"}
{"_id": "q_11753", "text": "Process each way."}
{"_id": "q_11754", "text": "Get a list of nodes not found in OSM data."}
{"_id": "q_11755", "text": "Process each node."}
{"_id": "q_11756", "text": "Extract information of one route."}
{"_id": "q_11757", "text": "Create a meaningful route name."}
{"_id": "q_11758", "text": "Process the files and collect necessary data."}
{"_id": "q_11759", "text": "Process each relation."}
{"_id": "q_11760", "text": "Fill the fields that are necessary for passing transitfeed checks."}
{"_id": "q_11761", "text": "Create station stop times for each trip."}
{"_id": "q_11762", "text": "Write the GTFS feed in the given file."}
{"_id": "q_11763", "text": "Write GTFS text files in the given path."}
{"_id": "q_11764", "text": "Extract stops in a relation."}
{"_id": "q_11765", "text": "Gets a list of supported U2F versions from the device."}
{"_id": "q_11766", "text": "Sends an APDU to the device, and waits for a response."}
{"_id": "q_11767", "text": "Register a U2F device\n\n data = {\n \"version\": \"U2F_V2\",\n \"challenge\": string, //b64 encoded challenge\n \"appId\": string, //app_id\n }"}
{"_id": "q_11768", "text": "Perform a ratchet step, replacing one of the internally managed chains with a new\n one.\n\n :param key: A bytes-like object encoding the key to initialize the replacement\n chain with.\n :param chain: The chain to replace. This parameter must be one of the two strings\n \"sending\" and \"receiving\"."}
{"_id": "q_11769", "text": "Derive a new set of internal and output data from given input data and the data\n stored internally.\n\n Use the key derivation function to derive new data. The kdf gets supplied with the\n current key and the data passed to this method.\n\n :param data: A bytes-like object encoding the data to pass to the key derivation\n function.\n :returns: A bytes-like object encoding the output material."}
{"_id": "q_11770", "text": "Create a connection to an other mesh.\n\n .. warning:: Both meshes need to be disconnected and one needs to be\n a consumed and the other a produced mesh. You can check if a\n connection is possible using :meth:`can_connect_to`.\n\n .. seealso:: :meth:`is_consumed`, :meth:`is_produced`,\n :meth:`can_connect_to`"}
{"_id": "q_11771", "text": "Saves the dump in a file-like object in text mode.\n\n :param file: :obj:`None` or a file-like object.\n :return: a file-like object\n\n If :paramref:`file` is :obj:`None`, a new :class:`io.StringIO`\n is returned.\n If :paramref:`file` is not :obj:`None` it should be a file-like object.\n\n The content is written to the file. After writing, the file's\n read/write position points behind the dumped content."}
{"_id": "q_11772", "text": "Saves the dump in a file named `path`."}
{"_id": "q_11773", "text": "Saves the dump in a temporary file and returns its path.\n\n .. warning:: The user of this method is responsible for deleting this\n file to save space on the hard drive.\n If you only need a file object for a short period of time\n you can use the method :meth:`temporary_file`.\n\n :param str extension: the ending of the file name e.g. ``\".png\"``\n :return: a path to the temporary file\n :rtype: str"}
{"_id": "q_11774", "text": "set the pixel but convert the color before."}
{"_id": "q_11775", "text": "set the color of the pixel.\n\n :param color: must be a valid color in the form of \"#RRGGBB\".\n If you need to convert color, use `_set_pixel_and_convert_color()`."}
{"_id": "q_11776", "text": "The id that identifies the instruction in this cache.\n\n :param instruction_or_id: an :class:`instruction\n <knittingpattern.Instruction.Instruction>` or an instruction id\n :return: a :func:`hashable <hash>` object\n :rtype: tuple"}
{"_id": "q_11777", "text": "Return the SVG for an instruction.\n\n :param instruction_or_id: either an\n :class:`~knittingpattern.Instruction.Instruction` or an id\n returned by :meth:`get_instruction_id`\n :param bool i_promise_not_to_change_the_result:\n\n - :obj:`False`: the result is copied, you can alter it.\n - :obj:`True`: the result is directly from the cache. If you change\n the result, other calls of this function get the changed result.\n\n :return: an SVGDumper\n :rtype: knittingpattern.Dumper.SVGDumper"}
{"_id": "q_11778", "text": "Return the SVG dict for the SVGBuilder.\n\n :param instruction_or_id: the instruction or id, see\n :meth:`get_instruction_id`\n :param bool copy_result: whether to copy the result\n :rtype: dict\n\n The result is cached."}
{"_id": "q_11779", "text": "The last produced mesh.\n\n :return: the last produced mesh\n :rtype: knittingpattern.Mesh.Mesh\n :raises IndexError: if no mesh is produced\n\n .. seealso:: :attr:`number_of_produced_meshes`"}
{"_id": "q_11780", "text": "The last consumed mesh.\n\n :return: the last consumed mesh\n :rtype: knittingpattern.Mesh.Mesh\n :raises IndexError: if no mesh is consumed\n\n .. seealso:: :attr:`number_of_consumed_meshes`"}
{"_id": "q_11781", "text": "The first produced mesh.\n\n :return: the first produced mesh\n :rtype: knittingpattern.Mesh.Mesh\n :raises IndexError: if no mesh is produced\n\n .. seealso:: :attr:`number_of_produced_meshes`"}
{"_id": "q_11782", "text": "The first consumed mesh.\n\n :return: the first consumed mesh\n :rtype: knittingpattern.Mesh.Mesh\n :raises IndexError: if no mesh is consumed\n\n .. seealso:: :attr:`number_of_consumed_meshes`"}
{"_id": "q_11783", "text": "The rows that produce meshes for this row.\n\n :rtype: list\n :return: a list of rows that produce meshes for this row. Each row\n occurs only once. They are sorted by the first occurrence in the\n instructions."}
{"_id": "q_11784", "text": "The rows that consume meshes from this row.\n\n :rtype: list\n :return: a list of rows that consume meshes from this row. Each row\n occurs only once. They are sorted by the first occurrence in the\n instructions."}
{"_id": "q_11785", "text": "Load all files from a folder recursively.\n\n Depending on :meth:`chooses_path` some paths may not be loaded.\n Every loaded path is processed and returned part of the returned list.\n\n :param str folder: the folder to load the files from\n :rtype: list\n :return: a list of the results of the processing steps of the loaded\n files"}
{"_id": "q_11786", "text": "Load a folder located relative to a module and return the processed\n result.\n\n :param str module: can be\n\n - a path to a folder\n - a path to a file\n - a module name\n\n :param str folder: the path of a folder relative to :paramref:`module`\n :return: a list of the results of the processing\n :rtype: list\n\n Depending on :meth:`chooses_path` some paths may not be loaded.\n Every loaded path is processed and returned part of the returned list.\n You can use :meth:`choose_paths` to find out which paths are chosen to\n load."}
{"_id": "q_11787", "text": "Load a file relative to a module.\n\n :param str module: can be\n\n - a path to a folder\n - a path to a file\n - a module name\n\n :param str folder: the path of a folder relative to :paramref:`module`\n :return: the result of the processing"}
{"_id": "q_11788", "text": "Load an example from the knitting pattern examples.\n\n :param str relative_path: the path to load\n :return: the result of the processing\n\n You can use :meth:`knittingpattern.Loader.PathLoader.examples`\n to find out the paths of all examples."}
{"_id": "q_11789", "text": "load and process the content behind a url\n\n :return: the processed result of the :paramref:`url's <url>` content\n :param str url: the url to retrieve the content from\n :param str encoding: the encoding of the retrieved content.\n The default encoding is UTF-8."}
{"_id": "q_11790", "text": "dump a knitting pattern to a file."}
{"_id": "q_11791", "text": "Create a definition for the instruction.\n\n :return: the id of a symbol in the defs for the specified\n :paramref:`instruction`\n :rtype: str\n\n If no symbol yet exists in the defs for the :paramref:`instruction` a\n symbol is created and saved using :meth:`_make_symbol`."}
{"_id": "q_11792", "text": "Compute the scale of an instruction svg.\n\n Compute the scale using the bounding box stored in the\n :paramref:`svg_dict`. The scale is saved in a dictionary using\n :paramref:`instruction_id` as key.\n\n :param str instruction_id: id identifying a symbol in the defs\n :param dict svg_dict: dictionary containing the SVG for the\n instruction currently processed"}
{"_id": "q_11793", "text": "Add a new, empty knitting pattern to the set.\n\n :param id_: the id of the pattern\n :param name: the name of the pattern to add or if :obj:`None`, the\n :paramref:`id_` is used\n :return: a new, empty knitting pattern\n :rtype: knittingpattern.KnittingPattern.KnittingPattern"}
{"_id": "q_11794", "text": "Index of the instruction in the instructions of the row or None.\n\n :return: index in the :attr:`row`'s instructions or None, if the\n instruction is not in the row\n :rtype: int\n\n .. seealso:: :attr:`row_instructions`, :attr:`index_in_row`,\n :meth:`is_in_row`"}
{"_id": "q_11795", "text": "The instruction after this one or None.\n\n :return: the instruction in :attr:`row_instructions` after this or\n :obj:`None` if this is the last\n :rtype: knittingpattern.Instruction.InstructionInRow\n\n This can be used to traverse the instructions.\n\n .. seealso:: :attr:`previous_instruction_in_row`"}
{"_id": "q_11796", "text": "Index of the first produced mesh in the row that consumes it.\n\n :return: an index of the first produced mesh of rows produced meshes\n :rtype: int\n\n .. note:: If the instruction :meth:`produces meshes\n <Instruction.produces_meshes>`, this is the index of the first\n mesh the instruction produces in all the meshes of the row.\n If the instruction does not produce meshes, the index of the mesh is\n returned as if the instruction had produced a mesh.\n\n .. code::\n\n if instruction.produces_meshes():\n index = instruction.index_of_first_produced_mesh_in_row"}
{"_id": "q_11797", "text": "The index of the first consumed mesh of this instruction in its row.\n\n Same as :attr:`index_of_first_produced_mesh_in_row`\n but for consumed meshes."}
{"_id": "q_11798", "text": "Parse a knitting pattern set.\n\n :param dict value: the specification of the knitting pattern set\n :rtype: knittingpattern.KnittingPatternSet.KnittingPatternSet\n :raises knittingpattern.KnittingPatternSet.ParsingError: if\n :paramref:`value` does not fulfill the :ref:`specification\n <FileFormatSpecification>`."}
{"_id": "q_11799", "text": "Fill a pattern collection."}
{"_id": "q_11800", "text": "Parse a pattern."}
{"_id": "q_11801", "text": "Parse a collection of rows."}
{"_id": "q_11802", "text": "Connect the parsed rows."}
{"_id": "q_11803", "text": "Create a new pattern set."}
{"_id": "q_11804", "text": "Write bytes to the file."}
{"_id": "q_11805", "text": "Write a string to the file."}
{"_id": "q_11806", "text": "An SVG object.\n\n :return: an SVG object\n :rtype: kivy.graphics.svg.Svg\n :raises ImportError: if the module was not found"}
{"_id": "q_11807", "text": "Adds the defs to the SVG structure.\n\n :param defs: a list of SVG dictionaries, which contain the defs,\n which should be added to the SVG structure."}
{"_id": "q_11808", "text": "For ``self.width``."}
{"_id": "q_11809", "text": "Walk through the knitting pattern by expanding an row."}
{"_id": "q_11810", "text": "expand the consumed meshes"}
{"_id": "q_11811", "text": "expand the produced meshes"}
{"_id": "q_11812", "text": "place the instruction on a grid"}
{"_id": "q_11813", "text": "Loop through all the instructions that are `_todo`."}
{"_id": "q_11814", "text": "Returns an `InstructionInGrid` object for the `instruction`"}
{"_id": "q_11815", "text": "Iterate over instructions.\n\n :return: an iterator over :class:`instructions in grid\n <InstructionInGrid>`\n :param mapping: funcion to map the result\n\n .. code:: python\n\n for pos, c in layout.walk_instructions(lambda i: (i.xy, i.color)):\n print(\"color {} at {}\".format(c, pos))"}
{"_id": "q_11816", "text": "Iterate over rows.\n\n :return: an iterator over :class:`rows <RowsInGrid>`\n :param mapping: funcion to map the result, see\n :meth:`walk_instructions` for an example usage"}
{"_id": "q_11817", "text": "The minimum and maximum bounds of this layout.\n\n :return: ``(min_x, min_y, max_x, max_y)`` the bounding box\n of this layout\n :rtype: tuple"}
{"_id": "q_11818", "text": "dump to the file"}
{"_id": "q_11819", "text": "Add an instruction specification\n\n :param specification: a specification with a key\n :data:`knittingpattern.Instruction.TYPE`\n\n .. seealso:: :meth:`as_instruction`"}
{"_id": "q_11820", "text": "Listen to parameters change.\n\n Parameters\n ----------\n func : callable\n Function to be called when a parameter changes."}
{"_id": "q_11821", "text": "Gradient of K.\n\n Returns\n -------\n C0 : ndarray\n Derivative of C\u2080 over its parameters.\n C1 : ndarray\n Derivative of C\u2081 over its parameters."}
{"_id": "q_11822", "text": "Poisson likelihood sampling.\n\n Parameters\n ----------\n random_state : random_state\n Set the initial random state.\n\n Example\n -------\n\n .. doctest::\n\n >>> from glimix_core.random import poisson_sample\n >>> from numpy.random import RandomState\n >>> offset = -0.5\n >>> G = [[0.5, -1], [2, 1]]\n >>> poisson_sample(offset, G, random_state=RandomState(0))\n array([0, 6])"}
{"_id": "q_11823", "text": "Helper function for retrieving a particular entry from the prefix trees"}
{"_id": "q_11824", "text": "This is not a general purpose converter. Only converts this readme"}
{"_id": "q_11825", "text": "This method starts the server. There are two processes, one is an HTTP server that shows\n and admin interface and the second is a Thrift server that the client code calls.\n\n Arguments:\n `conf_path` - The path to your flawless.cfg file\n `storage_factory` - You can pass in your own storage class that implements StorageInterface. You must implement\n storage_cls if you want Flawless to be horizontally scalable, since by default it will just\n store everything on the local disk."}
{"_id": "q_11826", "text": "r\"\"\" Mean of the estimated posteriori.\n\n This is also the maximum a posteriori estimation of the latent variable."}
{"_id": "q_11827", "text": "r\"\"\" Covariance of the estimated posteriori."}
{"_id": "q_11828", "text": "Eigen decomposition of a zero matrix."}
{"_id": "q_11829", "text": "r\"\"\"Gradient of the log of the marginal likelihood.\n\n Returns\n -------\n dict\n Map between variables to their gradient values."}
{"_id": "q_11830", "text": "Derivative of the covariance matrix over the lower triangular, flat part of L.\n\n It is equal to\n\n \u2202K/\u2202L\u1d62\u2c7c = AL\u1d40 + LA\u1d40,\n\n where A\u1d62\u2c7c is an n\u00d7m matrix of zeros except at [A\u1d62\u2c7c]\u1d62\u2c7c=1.\n\n Returns\n -------\n Lu : ndarray\n Derivative of K over the lower-triangular, flat part of L."}
{"_id": "q_11831", "text": "Fixed-effect sizes.\n\n Returns\n -------\n effect-sizes : numpy.ndarray\n Optimal fixed-effect sizes.\n\n Notes\n -----\n Setting the derivative of log(p(\ud835\udc32)) over effect sizes equal\n to zero leads to solutions \ud835\udf37 from equation ::\n\n (Q\u1d40X)\u1d40D\u207b\u00b9(Q\u1d40X)\ud835\udf37 = (Q\u1d40X)\u1d40D\u207b\u00b9(Q\u1d40\ud835\udc32)."}
{"_id": "q_11832", "text": "Estimates the covariance-matrix of the optimal beta.\n\n Returns\n -------\n beta-covariance : ndarray\n (X\u1d40(s((1-\ud835\udeff)K + \ud835\udeffI))\u207b\u00b9X)\u207b\u00b9.\n\n References\n ----------\n .. Rencher, A. C., & Schaalje, G. B. (2008). Linear models in statistics. John\n Wiley & Sons."}
{"_id": "q_11833", "text": "Disable parameter optimization.\n\n Parameters\n ----------\n param : str\n Possible values are ``\"delta\"``, ``\"beta\"``, and ``\"scale\"``."}
{"_id": "q_11834", "text": "Maximise the marginal likelihood.\n\n Parameters\n ----------\n verbose : bool, optional\n ``True`` for progress output; ``False`` otherwise.\n Defaults to ``True``."}
{"_id": "q_11835", "text": "Internal use only."}
{"_id": "q_11836", "text": "Variance ratio between ``K`` and ``I``."}
{"_id": "q_11837", "text": "Log of the marginal likelihood for optimal scale.\n\n Implementation for unrestricted LML::\n\n Returns\n -------\n lml : float\n Log of the marginal likelihood."}
{"_id": "q_11838", "text": "Degrees of freedom."}
{"_id": "q_11839", "text": "r\"\"\"Log of the marginal likelihood.\n\n Formally,\n\n .. math::\n\n - \\frac{n}{2}\\log{2\\pi} - \\frac{1}{2} \\log{\\left|\n v_0 \\mathrm K + v_1 \\mathrm I + \\tilde{\\Sigma} \\right|}\n - \\frac{1}{2}\n \\left(\\tilde{\\boldsymbol\\mu} -\n \\mathrm X\\boldsymbol\\beta\\right)^{\\intercal}\n \\left( v_0 \\mathrm K + v_1 \\mathrm I +\n \\tilde{\\Sigma} \\right)^{-1}\n \\left(\\tilde{\\boldsymbol\\mu} -\n \\mathrm X\\boldsymbol\\beta\\right)\n\n Returns\n -------\n float\n :math:`\\log{p(\\tilde{\\boldsymbol\\mu})}`"}
{"_id": "q_11840", "text": "r\"\"\"Initialize the mean and covariance of the posterior.\n\n Given that :math:`\\tilde{\\mathrm T}` is a matrix of zeros right before\n the first EP iteration, we have\n\n .. math::\n\n \\boldsymbol\\mu = \\mathrm K^{-1} \\mathbf m ~\\text{ and }~\n \\Sigma = \\mathrm K\n\n as the initial posterior mean and covariance."}
{"_id": "q_11841", "text": "Fetch an image from url and convert it into a Pillow Image object"}
{"_id": "q_11842", "text": "Check that the image width is superior to `width`"}
{"_id": "q_11843", "text": "Build an engine and a session.\n\n :param str connection: An RFC-1738 database connection string\n :param bool echo: Turn on echoing SQL\n :param Optional[bool] autoflush: Defaults to True if not specified in kwargs or configuration.\n :param Optional[bool] autocommit: Defaults to False if not specified in kwargs or configuration.\n :param Optional[bool] expire_on_commit: Defaults to False if not specified in kwargs or configuration.\n :param scopefunc: Scoped function to pass to :func:`sqlalchemy.orm.scoped_session`\n :rtype: tuple[Engine,Session]\n\n From the Flask-SQLAlchemy documentation:\n\n An extra key ``'scopefunc'`` can be set on the ``options`` dict to\n specify a custom scope function. If it's not provided, Flask's app\n context stack identity is used. This will ensure that sessions are\n created and removed with the request/response cycle, and should be fine\n in most cases."}
{"_id": "q_11844", "text": "Converts the text category to a tasks.Category instance."}
{"_id": "q_11845", "text": "a helper method that composes and sends an email with attachments\n requires a pre-configured smtplib.SMTP instance"}
{"_id": "q_11846", "text": "Make a function that downloads the data for you, or uses a cached version at the given path.\n\n :param url: The URL of some data\n :param path: The path of the cached data, or where data is cached if it does not already exist\n :return: A function that downloads the data and returns the path of the data"}
{"_id": "q_11847", "text": "Build a function that handles downloading tabular data and parsing it into a pandas DataFrame.\n\n :param data_url: The URL of the data\n :param data_path: The path where the data should get stored\n :param kwargs: Any other arguments to pass to :func:`pandas.read_csv`"}
{"_id": "q_11848", "text": "Decode timestamp using bespoke decoder.\n Cannot use simple strptime since the ness panel contains a bug\n that P199E zone and state updates emitted on the hour cause a minute\n value of `60` to be sent, causing strptime to fail. This decoder handles\n this edge case."}
{"_id": "q_11849", "text": "Launch a query across all or a selection of providers.\n\n .. code-block:: python\n\n # Find anything that has a label of church in any provider.\n registry.find({'label': 'church'})\n\n # Find anything that has a label of church with the BUILDINGS provider.\n # Attention, this syntax was deprecated in version 0.3.0\n registry.find({'label': 'church'}, providers=['BUILDINGS'])\n\n # Find anything that has a label of church with the BUILDINGS provider.\n registry.find({'label': 'church'}, providers={'ids': ['BUILDINGS']})\n\n # Find anything that has a label of church with a provider\n # marked with the subject 'architecture'.\n registry.find({'label': 'church'}, providers={'subject': 'architecture'})\n\n # Find anything that has a label of church in any provider.\n # If possible, display the results with a Dutch label.\n registry.find({'label': 'church'}, language='nl')\n\n :param dict query: The query parameters that will be passed on to each\n :meth:`~skosprovider.providers.VocabularyProvider.find` method of\n the selected.\n :class:`providers <skosprovider.providers.VocabularyProvider>`.\n :param dict providers: Optional. If present, it should be a dictionary.\n This dictionary can contain any of the keyword arguments available\n to the :meth:`get_providers` method. The query will then only\n be passed to the providers confirming to these arguments.\n :param string language: Optional. If present, it should be a\n :term:`language-tag`. This language-tag is passed on to the\n underlying providers and used when selecting the label to display\n for each concept.\n :returns: a list of :class:`dict`.\n Each dict has two keys: id and concepts."}
{"_id": "q_11850", "text": "Get all concepts from all providers.\n\n .. code-block:: python\n\n # get all concepts in all providers.\n registry.get_all()\n\n # get all concepts in all providers.\n # If possible, display the results with a Dutch label.\n registry.get_all(language='nl')\n\n :param string language: Optional. If present, it should be a\n :term:`language-tag`. This language-tag is passed on to the\n underlying providers and used when selecting the label to display\n for each concept.\n\n :returns: a list of :class:`dict`.\n Each dict has two keys: id and concepts."}
{"_id": "q_11851", "text": "Build the backend and upload it to the remote server at the given index"}
{"_id": "q_11852", "text": "Install the backend from the given devpi index at the given version on the target host and restart the service.\n\n If version is None, it defaults to the latest version\n\n Optionally, build and upload the application first from local sources. This requires a\n full backend development environment on the machine running this command (pyramid etc.)"}
{"_id": "q_11853", "text": "Returns a sorted version of a list of concepts. Will leave the original\n list unsorted.\n\n :param list concepts: A list of concepts and collections.\n :param string sort: What to sort on: `id`, `label` or `sortlabel`\n :param string language: Language to use when sorting on `label` or\n `sortlabel`.\n :param boolean reverse: Reverse the sort order?\n :rtype: list"}
{"_id": "q_11854", "text": "Force update of alarm status and zones"}
{"_id": "q_11855", "text": "Schedule a state update to keep the connection alive"}
{"_id": "q_11856", "text": "Return an iterator over the models to be converted to the namespace."}
{"_id": "q_11857", "text": "Get the reference BEL namespace if it exists."}
{"_id": "q_11858", "text": "Make a namespace."}
{"_id": "q_11859", "text": "Convert a PyBEL generalized namespace entries to a set.\n\n Default to using the identifier, but can be overridden to use the name instead.\n\n >>> {term.identifier for term in namespace.entries}"}
{"_id": "q_11860", "text": "Upload the namespace to the PyBEL database.\n\n :param update: Should the namespace be updated first?"}
{"_id": "q_11861", "text": "Write as a BEL namespace file."}
{"_id": "q_11862", "text": "Write a BEL namespace mapping file."}
{"_id": "q_11863", "text": "Write a BEL namespace for identifiers, names, name hash, and mappings to the given directory."}
{"_id": "q_11864", "text": "Get the namespace hash.\n\n Defaults to MD5."}
{"_id": "q_11865", "text": "Iterator of the list of items in the XML source."}
{"_id": "q_11866", "text": "expects the id of an existing dropbox and returns its instance"}
{"_id": "q_11867", "text": "Saves an error in the error list."}
{"_id": "q_11868", "text": "Receives an item and returns a dictionary of field values."}
{"_id": "q_11869", "text": "Get an item from the database or an empty one if not found."}
{"_id": "q_11870", "text": "preserve the file ending, but replace the name with a random token"}
{"_id": "q_11871", "text": "ensures that no data leaks from drop after processing by\n removing all data except the status file"}
{"_id": "q_11872", "text": "creates a zip file from the drop and encrypts it to the editors.\n the encrypted archive is created inside fs_target_dir"}
{"_id": "q_11873", "text": "returns the number of bytes that the cleansed attachments take up on disk"}
{"_id": "q_11874", "text": "returns a list of strings"}
{"_id": "q_11875", "text": "returns the user submitted text"}
{"_id": "q_11876", "text": "returns a list of absolute paths to the attachements"}
{"_id": "q_11877", "text": "destroys all cleanser slaves and their rollback snapshots, as well as the initial master\n snapshot - this allows re-running the jailhost deployment to recreate fresh cleansers."}
{"_id": "q_11878", "text": "Add a Flask Admin interface to an application.\n\n :param flask.Flask app: A Flask application\n :param kwargs: Keyword arguments are passed through to :class:`flask_admin.Admin`\n :rtype: flask_admin.Admin"}
{"_id": "q_11879", "text": "generates a dropbox uid and renders the submission form with a signed version of that id"}
{"_id": "q_11880", "text": "handles the form submission, redirects to the dropbox's status page."}
{"_id": "q_11881", "text": "Write as a BEL namespace."}
{"_id": "q_11882", "text": "Help store an action."}
{"_id": "q_11883", "text": "Make a session."}
{"_id": "q_11884", "text": "Create the tables for Bio2BEL."}
{"_id": "q_11885", "text": "Store a \"populate\" event.\n\n :param resource: The normalized name of the resource to store\n\n Example:\n\n >>> from bio2bel.models import Action\n >>> Action.store_populate('hgnc')"}
{"_id": "q_11886", "text": "Store a \"drop\" event.\n\n :param resource: The normalized name of the resource to store\n\n Example:\n\n >>> from bio2bel.models import Action\n >>> Action.store_drop('hgnc')"}
{"_id": "q_11887", "text": "Get all actions."}
{"_id": "q_11888", "text": "Count all actions."}
{"_id": "q_11889", "text": "Opens the source file."}
{"_id": "q_11890", "text": "Iterator to read the rows of the CSV file."}
{"_id": "q_11891", "text": "Build a module configuration class."}
{"_id": "q_11892", "text": "Return the SQLAlchemy connection string if it is set.\n\n Order of operations:\n\n 1. Return the connection if given as a parameter\n 2. Check the environment for BIO2BEL_{module_name}_CONNECTION\n 3. Look in the bio2bel config file for module-specific connection. Create if doesn't exist. Check the\n module-specific section for ``connection``\n 4. Look in the bio2bel module folder for a config file. Don't create if doesn't exist. Check the default section\n for ``connection``\n 5. Check the environment for BIO2BEL_CONNECTION\n 6. Check the bio2bel config file for default\n 7. Fall back to standard default cache connection\n\n :param module_name: The name of the module to get the configuration for\n :param connection: get the SQLAlchemy connection string\n :return: The SQLAlchemy connection string based on the configuration"}
{"_id": "q_11893", "text": "Clear all downloaded files."}
{"_id": "q_11894", "text": "Drop all tables from the database.\n\n :param bool check_first: Defaults to True, only issue DROPs for tables confirmed to be\n present in the target database. Defers to :meth:`sqlalchemy.sql.schema.MetaData.drop_all`"}
{"_id": "q_11895", "text": "Find the best label for a certain labeltype.\n\n :param list labels: A list of :class:`Label`.\n :param str language: An IANA language string, eg. `nl` or `nl-BE`.\n :param str labeltype: Type of label to look for, eg. `prefLabel`."}
{"_id": "q_11896", "text": "Filter a list of labels, leaving only labels of a certain language.\n\n :param list labels: A list of :class:`Label`.\n :param str language: An IANA language string, eg. `nl` or `nl-BE`.\n :param boolean broader: When true, will also match `nl-BE` when filtering\n on `nl`. When false, only exact matches are considered."}
{"_id": "q_11897", "text": "Provide a single sortkey for this conceptscheme.\n\n :param string key: Either `uri`, `label` or `sortlabel`.\n :param string language: The preferred language to receive the label in\n if key is `label` or `sortlabel`. This should be a valid IANA language tag.\n :rtype: :class:`str`"}
{"_id": "q_11898", "text": "Iterate over instantiated managers."}
{"_id": "q_11899", "text": "Drop all."}
{"_id": "q_11900", "text": "Clear all caches."}
{"_id": "q_11901", "text": "Generate a summary sheet."}
{"_id": "q_11902", "text": "Run a combine web interface."}
{"_id": "q_11903", "text": "List all actions."}
{"_id": "q_11904", "text": "Count the number of BEL relations generated."}
{"_id": "q_11905", "text": "convert from 'list' or 'tuple' object to pgmagick.CoordinateList.\n\n :type input_obj: list or tuple"}
{"_id": "q_11906", "text": "convert from 'list' or 'tuple' object to pgmagick.VPathList.\n\n :type input_obj: list or tuple"}
{"_id": "q_11907", "text": "return exif-tag dict"}
{"_id": "q_11908", "text": "Draw a Bezier-curve.\n\n :param points: ex.) ((5, 5), (6, 6), (7, 7))\n :type points: list"}
{"_id": "q_11909", "text": "set to stroke linejoin.\n\n :param linejoin: 'undefined', 'miter', 'round', 'bevel'\n :type linejoin: str"}
{"_id": "q_11910", "text": "Runs a command inside the sandbox and returns the results.\n\n :param args: A list of strings that specify which command should\n be run inside the sandbox.\n\n :param max_num_processes: The maximum number of processes the\n command is allowed to spawn.\n\n :param max_stack_size: The maximum stack size, in bytes, allowed\n for the command.\n\n :param max_virtual_memory: The maximum amount of memory, in\n bytes, allowed for the command.\n\n :param as_root: Whether to run the command as a root user.\n\n :param stdin: A file object to be redirected as input to the\n command's stdin. If this is None, /dev/null is sent to the\n command's stdin.\n\n :param timeout: The time limit for the command.\n\n :param check: Causes CalledProcessError to be raised if the\n command exits nonzero or times out.\n\n :param truncate_stdout: When not None, stdout from the command\n will be truncated after this many bytes.\n\n :param truncate_stderr: When not None, stderr from the command\n will be truncated after this many bytes."}
{"_id": "q_11911", "text": "Copies the specified files into the working directory of this\n sandbox.\n The filenames specified can be absolute paths or relative paths\n to the current working directory.\n\n :param owner: The name of a user who should be granted ownership of\n the newly added files.\n Must be either autograder_sandbox.SANDBOX_USERNAME or 'root',\n otherwise ValueError will be raised.\n :param read_only: If true, the new files' permissions will be set to\n read-only."}
{"_id": "q_11912", "text": "Submission to remove a license acceptance request."}
{"_id": "q_11913", "text": "Submission to remove a role acceptance request."}
{"_id": "q_11914", "text": "Submission to remove an ACL."}
{"_id": "q_11915", "text": "Churns over PostgreSQL notifications on configured channels.\n This requires the application be setup and the registry be available.\n This function uses the database connection string and a list of\n pre configured channels."}
{"_id": "q_11916", "text": "Given a dbapi cursor, lookup all the api keys and their information."}
{"_id": "q_11917", "text": "A function task decorator used in place of ``@celery_app.task``."}
{"_id": "q_11918", "text": "Declare all routes."}
{"_id": "q_11919", "text": "Given a `Binder` as `binder`, bake the contents and\n persist those changes alongside the published content."}
{"_id": "q_11920", "text": "Function to supply a database connection object."}
{"_id": "q_11921", "text": "Decorator that supplies a cursor to the function.\n This passes in a psycopg2 Cursor as the argument 'cursor'.\n It also accepts a cursor if one is given."}
{"_id": "q_11922", "text": "Given a model's ``metadata``, iterate over the roles.\n Return values are the role identifier and role type as a tuple."}
{"_id": "q_11923", "text": "Obtain the licenses in a dictionary form, keyed by url."}
{"_id": "q_11924", "text": "Given the model, check the license is one valid for publication."}
{"_id": "q_11925", "text": "Lookup a document by id and version."}
{"_id": "q_11926", "text": "Given a tree, parse to a set of models"}
{"_id": "q_11927", "text": "Return the list of publications that need moderation."}
{"_id": "q_11928", "text": "Configures the session manager"}
{"_id": "q_11929", "text": "Returns a dictionary of all unique print_styles, and their latest tag,\n revision, and recipe_type."}
{"_id": "q_11930", "text": "Return the list of API keys."}
{"_id": "q_11931", "text": "Insert a module with the given ``metadata``."}
{"_id": "q_11932", "text": "Return the SHA1 hash of the given a file-like object as ``file``.\n This will seek the file back to 0 when it's finished."}
{"_id": "q_11933", "text": "Upsert the ``file`` and ``media_type`` into the files table.\n Returns the ``fileid`` and ``sha1`` of the upserted file."}
{"_id": "q_11934", "text": "Lookup publication state"}
{"_id": "q_11935", "text": "Configures the caching manager"}
{"_id": "q_11936", "text": "Gets the value from the key.\n If the key doesn't exist, the default value is returned, otherwise None.\n\n :param key: The key\n :param default: The default value\n :return: The value"}
{"_id": "q_11937", "text": "Iterate reversal points in the series.\n\n A reversal point is a point in the series at which the first derivative\n changes sign. Reversal is undefined at the first (last) point because the\n derivative before (after) this point is undefined. The first and the last\n points may be treated as reversals by setting the optional parameters\n `left` and `right` to True.\n\n Parameters\n ----------\n series : iterable sequence of numbers\n left: bool, optional\n If True, yield the first point in the series (treat it as a reversal).\n right: bool, optional\n If True, yield the last point in the series (treat it as a reversal).\n\n Yields\n ------\n float\n Reversal points."}
{"_id": "q_11938", "text": "Decorator for extract_cycles"}
{"_id": "q_11939", "text": "Iterate cycles in the series.\n\n Parameters\n ----------\n series : iterable sequence of numbers\n left: bool, optional\n If True, treat the first point in the series as a reversal.\n right: bool, optional\n If True, treat the last point in the series as a reversal.\n\n Yields\n ------\n cycle : tuple\n Each tuple contains three floats (low, high, mult), where low and high\n define cycle amplitude and mult equals to 1.0 for full cycles and 0.5\n for half cycles."}
{"_id": "q_11940", "text": "Count cycles in the series.\n\n Parameters\n ----------\n series : iterable sequence of numbers\n ndigits : int, optional\n Round cycle magnitudes to the given number of digits before counting.\n left: bool, optional\n If True, treat the first point in the series as a reversal.\n right: bool, optional\n If True, treat the last point in the series as a reversal.\n\n Returns\n -------\n A sorted list containing pairs of cycle magnitude and count.\n One-half cycles are counted as 0.5, so the returned counts may not be\n whole numbers."}
{"_id": "q_11941", "text": "Recipe to render a given FST node.\n\n The FST is composed of branch nodes which are either lists or dicts\n and of leaf nodes which are strings. Branch nodes can have other\n list, dict or leaf nodes as childs.\n\n To render a string, simply output it. To render a list, render each\n of its elements in order. To render a dict, you must follow the\n node's entry in the nodes_rendering_order dictionary and its\n dependents constraints.\n\n This function hides all this algorithmic complexity by returning\n a structured rendering recipe, whatever the type of node. But even\n better, you should subclass the RenderWalker which simplifies\n drastically working with the rendered FST.\n\n The recipe is a list of steps, each step correspond to a child and is actually a 3-uple composed of the following fields:\n\n - `key_type` is a string determining the type of the child in the second field (`item`) of the tuple. It can be one of:\n\n - 'constant': the child is a string\n - 'node': the child is a dict\n - 'key': the child is an element of a dict\n - 'list': the child is a list\n - 'formatting': the child is a list specialized in formatting\n\n - `item` is the child itself: either a string, a dict or a list.\n - `render_key` gives the key used to access this child from the parent node. It's a string if the node is a dict or a number if its a list.\n\n Please note that \"bool\" `key_types` are never rendered, that's why\n they are not shown here."}
{"_id": "q_11942", "text": "FST node located at the given path"}
{"_id": "q_11943", "text": "Return a list of all enrollments for the passed course_id.\n\n https://canvas.instructure.com/doc/api/enrollments.html#method.enrollments_api.index"}
{"_id": "q_11944", "text": "Enroll a user into a course.\n\n https://canvas.instructure.com/doc/api/enrollments.html#method.enrollments_api.create"}
{"_id": "q_11945", "text": "Get information about a single role, for the passed Canvas account ID.\n\n https://canvas.instructure.com/doc/api/roles.html#method.role_overrides.show"}
{"_id": "q_11946", "text": "Return course resource for given canvas course id.\n\n https://canvas.instructure.com/doc/api/courses.html#method.courses.show"}
{"_id": "q_11947", "text": "Returns a list of courses for the passed account ID.\n\n https://canvas.instructure.com/doc/api/accounts.html#method.accounts.courses_api"}
{"_id": "q_11948", "text": "Return a list of courses for the passed account SIS ID."}
{"_id": "q_11949", "text": "Return a list of published courses for the passed account SIS ID."}
{"_id": "q_11950", "text": "Return a list of courses for the passed regid.\n\n https://canvas.instructure.com/doc/api/courses.html#method.courses.index"}
{"_id": "q_11951", "text": "Updates the SIS ID for the course identified by the passed course ID.\n\n https://canvas.instructure.com/doc/api/courses.html#method.courses.update"}
{"_id": "q_11952", "text": "Returns statistics for the given account_id and term_id.\n\n https://canvas.instructure.com/doc/api/analytics.html#method.analytics_api.department_statistics"}
{"_id": "q_11953", "text": "Returns per-student data for the given course_id.\n\n https://canvas.instructure.com/doc/api/analytics.html#method.analytics_api.course_student_summaries"}
{"_id": "q_11954", "text": "Returns student activity data for the given user_id and course_id.\n\n https://canvas.instructure.com/doc/api/analytics.html#method.analytics_api.student_in_course_participation"}
{"_id": "q_11955", "text": "Returns student messaging data for the given user_id and course_id.\n\n https://canvas.instructure.com/doc/api/analytics.html#method.analytics_api.student_in_course_messaging"}
{"_id": "q_11956", "text": "Adds multicodec prefix to the given bytes input\n\n :param str multicodec: multicodec to use for prefixing\n :param bytes bytes_: data to prefix\n :return: prefixed byte data\n :rtype: bytes"}
{"_id": "q_11957", "text": "Gets the codec used for prefix the multicodec prefixed data\n\n :param bytes bytes_: multicodec prefixed data bytes\n :return: name of the multicodec used to prefix\n :rtype: str"}
{"_id": "q_11958", "text": "Return external tools for the passed canvas account id.\n\n https://canvas.instructure.com/doc/api/external_tools.html#method.external_tools.index"}
{"_id": "q_11959", "text": "Return external tools for the passed canvas course id.\n\n https://canvas.instructure.com/doc/api/external_tools.html#method.external_tools.index"}
{"_id": "q_11960", "text": "Create an external tool using the passed json_data.\n\n context is either COURSES_API or ACCOUNTS_API.\n context_id is the Canvas course_id or account_id, depending on context.\n\n https://canvas.instructure.com/doc/api/external_tools.html#method.external_tools.create"}
{"_id": "q_11961", "text": "Check if a parameter is available on an object\n\n :param obj: Object\n :param required_parameters: list of parameters\n :return:"}
{"_id": "q_11962", "text": "Returns user profile data.\n\n https://canvas.instructure.com/doc/api/users.html#method.profile.settings"}
{"_id": "q_11963", "text": "Returns a list of users for the given sis course id."}
{"_id": "q_11964", "text": "Create and return a new user and pseudonym for an account.\n\n https://canvas.instructure.com/doc/api/users.html#method.users.create"}
{"_id": "q_11965", "text": "Return a user's logins for the given user_id.\n\n https://canvas.instructure.com/doc/api/logins.html#method.pseudonyms.index"}
{"_id": "q_11966", "text": "Update an existing login for a user in the given account.\n\n https://canvas.instructure.com/doc/api/logins.html#method.pseudonyms.update"}
{"_id": "q_11967", "text": "return url path to next page of paginated data"}
{"_id": "q_11968", "text": "Canvas GET method on a full url. Return representation of the\n requested resource, chasing pagination links to coalesce resources\n if indicated."}
{"_id": "q_11969", "text": "Canvas GET method. Return representation of the requested paged\n resource, either the requested page, or chase pagination links to\n coalesce resources."}
{"_id": "q_11970", "text": "Canvas GET method. Return representation of the requested resource."}
{"_id": "q_11971", "text": "Canvas PUT method."}
{"_id": "q_11972", "text": "Canvas POST method."}
{"_id": "q_11973", "text": "Canvas DELETE method."}
{"_id": "q_11974", "text": "Return a list of the admins in the account.\n\n https://canvas.instructure.com/doc/api/admins.html#method.admins.index"}
{"_id": "q_11975", "text": "Flag an existing user as an admin within the account.\n\n https://canvas.instructure.com/doc/api/admins.html#method.admins.create"}
{"_id": "q_11976", "text": "Flag an existing user as an admin within the account sis id."}
{"_id": "q_11977", "text": "Remove an account admin role from a user.\n\n https://canvas.instructure.com/doc/api/admins.html#method.admins.destroy"}
{"_id": "q_11978", "text": "Remove an account admin role from a user for the account sis id."}
{"_id": "q_11979", "text": "Create a new grading standard for the passed course.\n\n https://canvas.instructure.com/doc/api/grading_standards.html#method.grading_standards_api.create"}
{"_id": "q_11980", "text": "Return section resource for given canvas section id.\n\n https://canvas.instructure.com/doc/api/sections.html#method.sections.show"}
{"_id": "q_11981", "text": "Return list of sections for the passed course ID.\n\n https://canvas.instructure.com/doc/api/sections.html#method.sections.index"}
{"_id": "q_11982", "text": "Return list of sections including students for the passed course ID."}
{"_id": "q_11983", "text": "Return list of sections including students for the passed sis ID."}
{"_id": "q_11984", "text": "Create a canvas section in the given course id.\n\n https://canvas.instructure.com/doc/api/sections.html#method.sections.create"}
{"_id": "q_11985", "text": "Update a canvas section with the given section id.\n\n https://canvas.instructure.com/doc/api/sections.html#method.sections.update"}
{"_id": "q_11986", "text": "List quizzes for a given course\n\n https://canvas.instructure.com/doc/api/quizzes.html#method.quizzes_api.index"}
{"_id": "q_11987", "text": "Update the passed account. Returns the updated account.\n\n https://canvas.instructure.com/doc/api/accounts.html#method.accounts.update"}
{"_id": "q_11988", "text": "Return the authentication settings for the passed account_id.\n\n https://canvas.instructure.com/doc/api/authentication_providers.html#method.account_authorization_configs.show_sso_settings"}
{"_id": "q_11989", "text": "Update the authentication settings for the passed account_id.\n\n https://canvas.instructure.com/doc/api/authentication_providers.html#method.account_authorization_configs.update_sso_settings"}
{"_id": "q_11990", "text": "Imports a CSV string.\n\n https://canvas.instructure.com/doc/api/sis_imports.html#method.sis_imports_api.create"}
{"_id": "q_11991", "text": "Imports a directory of CSV files.\n\n https://canvas.instructure.com/doc/api/sis_imports.html#method.sis_imports_api.create"}
{"_id": "q_11992", "text": "Get the status of an already created SIS import.\n\n https://canvas.instructure.com/doc/api/sis_imports.html#method.sis_imports_api.show"}
{"_id": "q_11993", "text": "List assignments for a given course\n\n https://canvas.instructure.com/doc/api/assignments.html#method.assignments_api.index"}
{"_id": "q_11994", "text": "Modify an existing assignment.\n\n https://canvas.instructure.com/doc/api/assignments.html#method.assignments_api.update"}
{"_id": "q_11995", "text": "Shows all reports of the passed report_type that have been run\n for the canvas account id.\n\n https://canvas.instructure.com/doc/api/account_reports.html#method.account_reports.index"}
{"_id": "q_11996", "text": "Convenience method for create_report, for creating a course\n provisioning report."}
{"_id": "q_11997", "text": "Convenience method for create_report, for creating an unused courses\n report."}
{"_id": "q_11998", "text": "Returns a completed report as a list of csv strings."}
{"_id": "q_11999", "text": "Returns the status of a report.\n\n https://canvas.instructure.com/doc/api/account_reports.html#method.account_reports.show"}
{"_id": "q_12000", "text": "Deletes a generated report instance.\n\n https://canvas.instructure.com/doc/api/account_reports.html#method.account_reports.destroy"}
{"_id": "q_12001", "text": "Horizontally flip detections according to an image flip.\n\n :param label: The label dict containing all detection lists.\n :param w: The width of the image as a number.\n :return:"}
{"_id": "q_12002", "text": "Archives the provided URL using archive.is\n\n Returns the URL where the capture is stored."}
{"_id": "q_12003", "text": "Archives the provided URL using archive.is."}
{"_id": "q_12004", "text": "Get the logo for a channel"}
{"_id": "q_12005", "text": "Edit to get the dict even when the object is a GenericRelatedObjectManager.\n Added the try except."}
{"_id": "q_12006", "text": "Get the text to display when the field is empty."}
{"_id": "q_12007", "text": "Parse uniformly args and kwargs from a templatetag\n\n Usage::\n\n For parsing a template like this:\n\n {% footag my_contents,height=10,zoom=20 as myvar %}\n\n You simply do this:\n\n @register.tag\n def footag(parser, token):\n args, kwargs = parse_args_kwargs(parser, token)"}
{"_id": "q_12008", "text": "Create and register metrics from a list of MetricConfigs."}
{"_id": "q_12009", "text": "Create Prometheus metrics from a list of MetricConfigs."}
{"_id": "q_12010", "text": "Return a metric, optionally configured with labels."}
{"_id": "q_12011", "text": "Home page request handler."}
{"_id": "q_12012", "text": "Handler for metrics."}
{"_id": "q_12013", "text": "r'~\"|~\\"}
{"_id": "q_12014", "text": "r'[^\\'@]+"}
{"_id": "q_12015", "text": "Lex file."}
{"_id": "q_12016", "text": "Inner wrapper to search for mixins by name."}
{"_id": "q_12017", "text": "Inner wrapper to search for blocks by name."}
{"_id": "q_12018", "text": "Round half-way away from zero.\n\n Python2's round() method."}
{"_id": "q_12019", "text": "Convergent rounding.\n\n Round to neareas even, similar to Python3's round() method."}
{"_id": "q_12020", "text": "Return successive r length permutations of elements in the iterable.\n\n Similar to itertools.permutation but withouth repeated values filtering."}
{"_id": "q_12021", "text": "Query Wolfram|Alpha using the v2.0 API\n\n Allows for arbitrary parameters to be passed in\n the query. For example, to pass assumptions:\n\n client.query(input='pi', assumption='*C.pi-_*NamedConstant-')\n\n To pass multiple assumptions, pass multiple items\n as params:\n\n params = (\n ('assumption', '*C.pi-_*NamedConstant-'),\n ('assumption', 'DateOrder_**Day.Month.Year--'),\n )\n client.query(input='pi', params=params)\n\n For more details on Assumptions, see\n https://products.wolframalpha.com/api/documentation.html#6"}
{"_id": "q_12022", "text": "The pods, assumptions, and warnings of this result."}
{"_id": "q_12023", "text": "Add request content data to request body, set Content-type header.\n\n Should be overridden by subclasses if not using JSON encoding.\n\n Args:\n request (HTTPRequest): The request object.\n data (dict, None): Data to be encoded.\n\n Returns:\n HTTPRequest: The request object."}
{"_id": "q_12024", "text": "Call the API with a DELETE request.\n\n Args:\n url (str): Resource location relative to the base URL.\n params (dict or None): Query-string parameters.\n\n Returns:\n ResultParser or ErrorParser."}
{"_id": "q_12025", "text": "Call the API with a POST request.\n\n Args:\n url (str): Resource location relative to the base URL.\n params (dict or None): Query-string parameters.\n data (dict or None): Request body contents.\n files (dict or None: Files to be passed to the request.\n\n Returns:\n An instance of ResultParser or ErrorParser."}
{"_id": "q_12026", "text": "Process query recursively, if the text is too long,\n it is split and processed bit a bit.\n\n Args:\n query (sdict): Text to be processed.\n prepared (bool): True when the query is ready to be submitted via\n POST request.\n Returns:\n str: Body ready to be submitted to the API."}
{"_id": "q_12027", "text": "Recognise the language of the text in input\n\n Args:\n id (str): The text whose the language needs to be recognised\n\n Returns:\n dict, int: A dict containing the recognised language and the\n confidence score."}
{"_id": "q_12028", "text": "Fetch the concept from the Knowledge base\n\n Args:\n id (str): The concept id to be fetched, it can be Wikipedia\n page id or Wikiedata id.\n\n Returns:\n dict, int: A dict containing the concept information; an integer\n representing the response code."}
{"_id": "q_12029", "text": "Constructs the MDR ensemble from the provided training data\n\n Parameters\n ----------\n features: array-like {n_samples, n_features}\n Feature matrix\n classes: array-like {n_samples}\n List of class labels for prediction\n\n Returns\n -------\n None"}
{"_id": "q_12030", "text": "Constructs the MDR feature map from the provided training data.\n\n Parameters\n ----------\n features: array-like {n_samples, n_features}\n Feature matrix\n class_labels: array-like {n_samples}\n List of true class labels\n\n Returns\n -------\n self: A copy of the fitted model"}
{"_id": "q_12031", "text": "Convenience function that fits the provided data then constructs predictions from the provided features.\n\n Parameters\n ----------\n features: array-like {n_samples, n_features}\n Feature matrix\n class_labels: array-like {n_samples}\n List of true class labels\n\n Returns\n ----------\n array-like: {n_samples}\n Constructed features from the provided feature matrix"}
{"_id": "q_12032", "text": "Uses the Continuous MDR feature map to construct a new feature from the provided features.\n\n Parameters\n ----------\n features: array-like {n_samples, n_features}\n Feature matrix to transform\n\n Returns\n ----------\n array-like: {n_samples}\n Constructed feature from the provided feature matrix\n The constructed feature will be a binary variable, taking the values 0 and 1"}
{"_id": "q_12033", "text": "Estimates the quality of the ContinuousMDR model using a t-statistic.\n\n Parameters\n ----------\n features: array-like {n_samples, n_features}\n Feature matrix to predict from\n targets: array-like {n_samples}\n List of true target values\n\n Returns\n -------\n quality_score: float\n The estimated quality of the Continuous MDR model"}
{"_id": "q_12034", "text": "Fits a MDR model to variables X and Y with the given labels, then returns the resulting predictions\n\n This is a convenience method that should only be used internally.\n\n Parameters\n ----------\n X: array-like (# samples)\n An array of values corresponding to one feature in the MDR model\n Y: array-like (# samples)\n An array of values corresponding to one feature in the MDR model\n labels: array-like (# samples)\n The class labels corresponding to features X and Y\n\n Returns\n ----------\n predictions: array-like (# samples)\n The predictions from the fitted MDR model"}
{"_id": "q_12035", "text": "Fits a MDR model to all n-way combinations of the features in X.\n\n Note that this function performs an exhaustive search through all feature combinations and can be computationally expensive.\n\n Parameters\n ----------\n mdr_instance: object\n An instance of the MDR type to use.\n X: array-like (# rows, # features)\n NumPy matrix containing the features\n y: array-like (# rows, 1)\n NumPy matrix containing the target values\n n: list (default: [2])\n The maximum size(s) of the MDR model to generate.\n e.g., if n == [3], all 3-way models will be generated.\n feature_names: list (default: None)\n The corresponding names of the features in X.\n If None, then the features will be named according to their order.\n\n Returns\n ----------\n (fitted_model, fitted_model_score, fitted_model_features): tuple of (list, list, list)\n fitted_model contains the MDR model fitted to the data.\n fitted_model_score contains the training scores corresponding to the fitted MDR model.\n fitted_model_features contains a list of the names of the features that were used in the corresponding model."}
{"_id": "q_12036", "text": "Checks if a GitHub call returned multiple pages of data.\n\n :param gh: GitHub() instance\n :rtype: int\n :return: number of next page or 0 if no next page"}
{"_id": "q_12037", "text": "Fetch all pull requests. We need them to detect \"merged_at\" parameter\n\n :rtype: list\n :return: all pull requests"}
{"_id": "q_12038", "text": "Get the creation date of the repository from GitHub.\n\n :rtype: str, str\n :return: special tag name, creation date as ISO date string"}
{"_id": "q_12039", "text": "Fetch events for all issues and add them to self.events\n\n :param list issues: all issues\n :param str tag_name: name of the tag to fetch events for\n :returns: Nothing"}
{"_id": "q_12040", "text": "Fetch time for tag from repository.\n\n :param dict tag: dictionary with tag information\n :rtype: str\n :return: time of specified tag as ISO date string"}
{"_id": "q_12041", "text": "Daemonize this process\n\n Do everything that is needed to become a Unix daemon.\n\n :return: None\n :raise: DaemonError"}
{"_id": "q_12042", "text": "Detects user and project from git."}
{"_id": "q_12043", "text": "Convert an ISO formated date and time string to a datetime object.\n\n :param str timestring: String with date and time in ISO format.\n :rtype: datetime\n :return: datetime object"}
{"_id": "q_12044", "text": "Fetch event for issues and pull requests\n\n @return [Array] array of fetched issues"}
{"_id": "q_12045", "text": "Fill \"actual_date\" parameter of specified issue by closed date of\n the commit, if it was closed by commit.\n\n :param dict issue: issue to edit"}
{"_id": "q_12046", "text": "Set closed date from this issue.\n\n :param dict event: event data\n :param dict issue: issue data"}
{"_id": "q_12047", "text": "Generate formated list of issues for changelog.\n\n :param list issues: Issues to put in sub-section.\n :param str prefix: Title of sub-section.\n :rtype: str\n :return: Generated ready-to-add sub-section."}
{"_id": "q_12048", "text": "Generate log between 2 specified tags.\n\n :param dict older_tag: All issues before this tag's date will be\n excluded. May be special value, if new tag is\n the first tag. (Means **older_tag** is when\n the repo was created.)\n :param dict newer_tag: All issues after this tag's date will be\n excluded. May be title of unreleased section.\n :rtype: str\n :return: Generated ready-to-add tag section for newer tag."}
{"_id": "q_12049", "text": "The full cycle of generation for whole project.\n\n :rtype: str\n :return: The complete change log for released tags."}
{"_id": "q_12050", "text": "Generate log for unreleased closed issues.\n\n :rtype: str\n :return: Generated ready-to-add unreleased section."}
{"_id": "q_12051", "text": "If option author is enabled, a link to the profile of the author\n of the pull reqest will be added to the issue line.\n\n :param str line: String containing a markdown-formatted single issue.\n :param dict issue: Fetched issue from GitHub.\n :rtype: str\n :return: Issue line with added author link."}
{"_id": "q_12052", "text": "Generates log for tag section with header and body.\n\n :param list(dict) pull_requests: List of PR's in this tag section.\n :param list(dict) issues: List of issues in this tag section.\n :param dict newer_tag: Github data of tag for this section.\n :param str older_tag_name: Older tag, used for the links.\n May be special value, if **newer tag** is\n the first tag. (Means **older_tag** is when\n the repo was created.)\n :rtype: str\n :return: Ready-to-add and parsed tag section."}
{"_id": "q_12053", "text": "Generate ready-to-paste log from list of issues and pull requests.\n\n :param list(dict) issues: List of issues in this tag section.\n :param list(dict) pull_requests: List of PR's in this tag section.\n :rtype: str\n :return: Generated log for issues and pull requests."}
{"_id": "q_12054", "text": "Add all issues, that should be in that tag, according to milestone.\n\n :param list(dict) all_issues: All issues.\n :param str tag_name: Name (title) of tag.\n :rtype: List[dict]\n :return: Issues filtered by milestone."}
{"_id": "q_12055", "text": "Filter issues that belong to specified tag range.\n\n :param list(dict) issues: Issues to filter.\n :param dict older_tag: All issues before this tag's date will be\n excluded. May be special value, if **newer_tag**\n is the first tag. (Means **older_tag** is when\n the repo was created.)\n :param dict newer_tag: All issues after this tag's date will be\n excluded. May be title of unreleased section.\n :rtype: list(dict)\n :return: Filtered issues."}
{"_id": "q_12056", "text": "Include issues with labels, specified in self.options.include_labels.\n\n :param list(dict) all_issues: All issues.\n :rtype: list(dict)\n :return: Filtered issues."}
{"_id": "q_12057", "text": "Filter issues to include only issues with labels\n specified in include_labels.\n\n :param list(dict) issues: Pre-filtered issues.\n :rtype: list(dict)\n :return: Filtered issues."}
{"_id": "q_12058", "text": "This method filter only merged PR and fetch missing required\n attributes for pull requests. Using merged date is more correct\n than closed date.\n\n :param list(dict) pull_requests: Pre-filtered pull requests.\n :rtype: list(dict)\n :return:"}
{"_id": "q_12059", "text": "Fetch and filter tags, fetch dates and sort them in time order."}
{"_id": "q_12060", "text": "Sort all tags by date.\n\n :param list(dict) tags: All tags.\n :rtype: list(dict)\n :return: Sorted list of tags."}
{"_id": "q_12061", "text": "Get date and time for tag, fetching it if not already cached.\n\n :param dict tag: Tag to get the datetime for.\n :rtype: datetime\n :return: datetime for specified tag."}
{"_id": "q_12062", "text": "Detect link, name and time for specified tag.\n\n :param dict tag: Tag data.\n :rtype: str, str, datetime\n :return: Link, name and time of the tag."}
{"_id": "q_12063", "text": "Try to detect the newest tag from self.options.base, otherwise\n return a special value indicating the creation of the repo.\n\n :rtype: str\n :return: Tag name to use as 'oldest' tag. May be special value,\n indicating the creation of the repo."}
{"_id": "q_12064", "text": "If not already cached, fetch the creation date of the repo, cache it\n and return the special value indicating the creation of the repo.\n\n :rtype: str\n :return: value indicating the creation"}
{"_id": "q_12065", "text": "Filter tags according since_tag option.\n\n :param list(dict) all_tags: All tags.\n :rtype: list(dict)\n :return: Filtered tags."}
{"_id": "q_12066", "text": "Filter tags according exclude_tags and exclude_tags_regex option.\n\n :param list(dict) all_tags: Pre-filtered tags.\n :rtype: list(dict)\n :return: Filtered tags."}
{"_id": "q_12067", "text": "Filter tags according exclude_tags option.\n\n :param list(dict) all_tags: Pre-filtered tags.\n :rtype: list(dict)\n :return: Filtered tags."}
{"_id": "q_12068", "text": "Parses an APRS packet and returns a dict with decoded data\n\n - All attributes are in metric units"}
{"_id": "q_12069", "text": "Takes a decimal and returns base91 char string.\n With optional parameter for fix with output"}
{"_id": "q_12070", "text": "Takes a CALLSIGN and returns passcode"}
{"_id": "q_12071", "text": "Parses the header part of packet\n Returns a dict"}
{"_id": "q_12072", "text": "Set callsign and password"}
{"_id": "q_12073", "text": "Closes the socket\n Called internally when Exceptions are raised"}
{"_id": "q_12074", "text": "Send a line, or multiple lines sperapted by '\\\\r\\\\n'"}
{"_id": "q_12075", "text": "Attemps connection to the server"}
{"_id": "q_12076", "text": "Sends login string to server"}
{"_id": "q_12077", "text": "Generator for complete lines, received from the server"}
{"_id": "q_12078", "text": "Convert binary blob to UUID instance"}
{"_id": "q_12079", "text": "Convert the python value for storage in the database."}
{"_id": "q_12080", "text": "Convert the database value to a pythonic value."}
{"_id": "q_12081", "text": "Find matching database router"}
{"_id": "q_12082", "text": "Returns dict of values to uniquely reference this item"}
{"_id": "q_12083", "text": "List items from query"}
{"_id": "q_12084", "text": "Conveniently get the security configuration for the specified\n application without the annoying 'SECURITY_' prefix.\n\n :param app: The application to inspect"}
{"_id": "q_12085", "text": "Get a Flask-Security configuration value.\n\n :param key: The configuration key without the prefix `SECURITY_`\n :param app: An optional specific application to inspect. Defaults to\n Flask's `current_app`\n :param default: An optional default value if the value is not set"}
{"_id": "q_12086", "text": "Creates a new vector from members."}
{"_id": "q_12087", "text": "Evaluate a file with the given name into a Python module AST node."}
{"_id": "q_12088", "text": "Evaluate the forms in a string into a Python module AST node."}
{"_id": "q_12089", "text": "Swap the methods atom to include method with key."}
{"_id": "q_12090", "text": "Add a new method to this function which will respond for\n key returned from the dispatch function."}
{"_id": "q_12091", "text": "Swap the methods atom to remove method with key."}
{"_id": "q_12092", "text": "Return True if the Var holds a macro function."}
{"_id": "q_12093", "text": "Attach any available location information from the input form to\n the node environment returned from the parsing function."}
{"_id": "q_12094", "text": "Assert that `recur` forms only appear in the tail position of this\n or child AST nodes.\n\n `recur` forms may only appear in `do` nodes (both literal and synthetic\n `do` nodes) and in either the :then or :else expression of an `if` node."}
{"_id": "q_12095", "text": "Resolve a non-namespaced symbol into a Python name or a local\n Basilisp Var."}
{"_id": "q_12096", "text": "Take a Lisp form as an argument and produce a Basilisp syntax\n tree matching the clojure.tools.analyzer AST spec."}
{"_id": "q_12097", "text": "If True, warn when a def'ed Var name is shadowed in an inner scope.\n\n Implied by warn_on_shadowed_name. The value of warn_on_shadowed_name\n supersedes the value of this flag."}
{"_id": "q_12098", "text": "Add a new symbol to the symbol table.\n\n This function allows individual warnings to be disabled for one run\n by supplying keyword arguments temporarily disabling those warnings.\n In certain cases, we do not want to issue warnings again for a\n previously checked case, so this is a simple way of disabling these\n warnings for those cases.\n\n If WARN_ON_SHADOWED_NAME compiler option is active and the\n warn_on_shadowed_name keyword argument is True, then a warning will be\n emitted if a local name is shadowed by another local name. Note that\n WARN_ON_SHADOWED_NAME implies WARN_ON_SHADOWED_VAR.\n\n If WARN_ON_SHADOWED_VAR compiler option is active and the\n warn_on_shadowed_var keyword argument is True, then a warning will be\n emitted if a named var is shadowed by a local name."}
{"_id": "q_12099", "text": "Produce a Lisp representation of an associative collection, bookended\n with the start and end string supplied. The entries argument must be a\n callable which will produce tuples of key-value pairs.\n\n The keyword arguments will be passed along to lrepr for the sequence\n elements."}
{"_id": "q_12100", "text": "Return a string representation of a Lisp object.\n\n Permissible keyword arguments are:\n - human_readable: if logical True, print strings without quotations or\n escape sequences (default: false)\n - print_dup: if logical true, print objects in a way that preserves their\n types (default: false)\n - print_length: the number of items in a collection which will be printed,\n or no limit if bound to a logical falsey value (default: 50)\n - print_level: the depth of the object graph to print, starting with 0, or\n no limit if bound to a logical falsey value (default: nil)\n - print_meta: if logical true, print objects meta in a way that can be\n read back by the reader (default: false)\n - print_readably: if logical false, print strings and characters with\n non-alphanumeric characters converted to escape sequences\n (default: true)\n\n Note that this function is not capable of capturing the values bound at\n runtime to the basilisp.core dynamic variables which correspond to each\n of the keyword arguments to this function. To use a version of lrepr\n which does capture those values, call basilisp.lang.runtime.lrepr directly."}
{"_id": "q_12101", "text": "Fallback function for lrepr for subclasses of standard types.\n\n The singledispatch used for standard lrepr dispatches using an exact\n type match on the first argument, so we will only hit this function\n for subclasses of common Python types like strings or lists."}
{"_id": "q_12102", "text": "Return a transformed copy of this node with location in this node's\n environment updated to match the `start_loc` if given, or using its\n existing location otherwise. All child nodes will be recursively\n transformed and replaced. Child nodes will use their parent node\n location if they do not have one."}
{"_id": "q_12103", "text": "Compile and execute the given form. This function will be most useful\n for the REPL and testing purposes. Returns the result of the executed expression.\n\n Callers may override the wrapped function name, which is used by the\n REPL to evaluate the result of an expression and print it back out."}
{"_id": "q_12104", "text": "Incrementally compile a stream of AST nodes in module mod.\n\n The source_filename will be passed to Python's native compile.\n\n Incremental compilation is an integral part of generating a Python module\n during the same process as macro-expansion."}
{"_id": "q_12105", "text": "Compile an entire Basilisp module into Python bytecode which can be\n executed as a Python module.\n\n This function is designed to generate bytecode which can be used for the\n Basilisp import machinery, to allow callers to import Basilisp modules from\n Python code."}
{"_id": "q_12106", "text": "Compile cached bytecode into the given module.\n\n The Basilisp import hook attempts to cache bytecode while compiling Basilisp\n namespaces. When the cached bytecode is reloaded from disk, it needs to be\n compiled within a bootstrapped module. This function bootstraps the module\n and then proceeds to compile a collection of bytecodes into the module."}
{"_id": "q_12107", "text": "Create a Fraction from a numerator and denominator."}
{"_id": "q_12108", "text": "Creates a new map."}
{"_id": "q_12109", "text": "Partition coll into groups of size n."}
{"_id": "q_12110", "text": "Read a namespaced token from the input stream."}
{"_id": "q_12111", "text": "Read a collection from the input stream and create the\n collection using f."}
{"_id": "q_12112", "text": "Read a vector element from the input stream."}
{"_id": "q_12113", "text": "Return a set from the input stream."}
{"_id": "q_12114", "text": "Return a string from the input stream.\n\n If allow_arbitrary_escapes is True, do not throw a SyntaxError if an\n unknown escape sequence is encountered."}
{"_id": "q_12115", "text": "Return a keyword from the input stream."}
{"_id": "q_12116", "text": "Expand syntax quoted forms to handle unquoting and unquote-splicing.\n\n The unquoted form (unquote x) becomes:\n (list x)\n\n The unquote-spliced form (unquote-splicing x) becomes\n x\n\n All other forms are recursively processed as by _process_syntax_quoted_form\n and are returned as:\n (list form)"}
{"_id": "q_12117", "text": "Post-process syntax quoted forms to generate forms that can be assembled\n into the correct types at runtime.\n\n Lists are turned into:\n (basilisp.core/seq\n (basilisp.core/concat [& rest]))\n\n Vectors are turned into:\n (basilisp.core/apply\n basilisp.core/vector\n (basilisp.core/concat [& rest]))\n\n Sets are turned into:\n (basilisp.core/apply\n basilisp.core/hash-set\n (basilisp.core/concat [& rest]))\n\n Maps are turned into:\n (basilisp.core/apply\n basilisp.core/hash-map\n (basilisp.core/concat [& rest]))\n\n The child forms (called rest above) are processed by _expand_syntax_quote.\n\n All other forms are passed through without modification."}
{"_id": "q_12118", "text": "Read an unquoted form and handle any special logic of unquoting.\n\n Unquoted forms can take two, well... forms:\n\n `~form` is read as `(unquote form)` and any nested forms are read\n literally and passed along to the compiler untouched.\n\n `~@form` is read as `(unquote-splicing form)` which tells the compiler\n to splice in the contents of a sequential form such as a list or\n vector into the final compiled form. This helps macro writers create\n longer forms such as function calls, function bodies, or data structures\n with the contents of another collection they have."}
{"_id": "q_12119", "text": "Read a derefed form from the input stream."}
{"_id": "q_12120", "text": "Read a regex reader macro from the input stream."}
{"_id": "q_12121", "text": "Read the next full form from the input stream, consuming any\n reader comments completely."}
{"_id": "q_12122", "text": "Read the next full form from the input stream."}
{"_id": "q_12123", "text": "Read the contents of a string as a Lisp expression.\n\n Keyword arguments to this function have the same meanings as those of\n basilisp.lang.reader.read."}
{"_id": "q_12124", "text": "Push one character back onto the stream, allowing it to be\n read again."}
{"_id": "q_12125", "text": "Advance the stream forward by one character and return the\n next token in the stream."}
{"_id": "q_12126", "text": "Return the bytes for a Basilisp bytecode cache file."}
{"_id": "q_12127", "text": "Unmarshal the bytes from a Basilisp bytecode cache file, validating the\n file header prior to returning. If the file header does not match, throw\n an exception."}
{"_id": "q_12128", "text": "Return the path to the cached file for the given path. The original path\n does not have to exist."}
{"_id": "q_12129", "text": "Hook into Python's import machinery with a custom Basilisp code\n importer.\n\n Once this is called, Basilisp code may be called from within Python code\n using standard `import module.submodule` syntax."}
{"_id": "q_12130", "text": "Find the ModuleSpec for the specified Basilisp module.\n\n Returns None if the module is not a Basilisp module to allow import processing to continue."}
{"_id": "q_12131", "text": "Load and execute a cached Basilisp module."}
{"_id": "q_12132", "text": "Load and execute a non-cached Basilisp module."}
{"_id": "q_12133", "text": "Compile the Basilisp module into Python code.\n\n Basilisp is fundamentally a form-at-a-time compilation, meaning that\n each form in a module may require code compiled from an earlier form, so\n we incrementally compile a Python module by evaluating a single top-level\n form at a time and inserting the resulting AST nodes into the Pyton module."}
{"_id": "q_12134", "text": "Create a new symbol."}
{"_id": "q_12135", "text": "Return an iterable of possible completions for the given text."}
{"_id": "q_12136", "text": "Create a new keyword."}
{"_id": "q_12137", "text": "Chain a sequence of generated Python ASTs into a tuple of dependency nodes"}
{"_id": "q_12138", "text": "Generate recursive Python Attribute AST nodes for resolving nested\n names."}
{"_id": "q_12139", "text": "Wrap a generator function in a decorator to supply line and column\n information to the returned Python AST node. Dependency nodes will not\n be hydrated, functions whose returns need dependency nodes to be\n hydrated should use `_with_ast_loc_deps` below."}
{"_id": "q_12140", "text": "Wrap a generator function in a decorator to supply line and column\n information to the returned Python AST node and dependency nodes.\n\n Dependency nodes should likely only be included if they are new nodes\n created in the same function wrapped by this function. Otherwise, dependencies\n returned from e.g. calling `gen_py_ast` should be assumed to already have\n their location information hydrated."}
{"_id": "q_12141", "text": "Return True if the Var holds a value which should be compiled to a dynamic\n Var access."}
{"_id": "q_12142", "text": "Return True if the Var can be redefined."}
{"_id": "q_12143", "text": "Transform non-statements into ast.Expr nodes so they can\n stand alone as statements."}
{"_id": "q_12144", "text": "Return True if the compiler should emit a warning about this name being redefined."}
{"_id": "q_12145", "text": "Generate a list of Python AST nodes from function method parameters."}
{"_id": "q_12146", "text": "Return a Python AST node for a function with multiple arities."}
{"_id": "q_12147", "text": "Generate custom `if` nodes to handle `recur` bodies.\n\n Recur nodes can appear in the then and else expressions of `if` forms.\n Recur nodes generate Python `continue` statements, which we would otherwise\n attempt to insert directly into an expression. Python will complain if\n it finds a statement in an expression AST slot, so we special case the\n recur handling here."}
{"_id": "q_12148", "text": "Return a Python AST Node for a Basilisp function invocation."}
{"_id": "q_12149", "text": "Return a Python AST Node for a `recur` expression.\n\n Note that `recur` nodes can only legally appear in two AST locations:\n (1) in :then or :else expressions in :if nodes, and\n (2) in :ret expressions in :do nodes\n\n As such, both of these handlers special case the recur construct, as it\n is the only case in which the code generator emits a statement rather than\n an expression."}
{"_id": "q_12150", "text": "Return a Python AST Node for a `try` expression."}
{"_id": "q_12151", "text": "Generate a Python AST node for accessing a locally defined Python variable."}
{"_id": "q_12152", "text": "Generate Var.find calls for the named symbol."}
{"_id": "q_12153", "text": "Generate a Python AST node for accessing a Var.\n\n If the Var is marked as :dynamic or :redef or the compiler option\n USE_VAR_INDIRECTION is active, do not compile to a direct access.\n If the corresponding function name is not defined in a Python module,\n no direct variable access is possible and Var.find indirection must be\n used."}
{"_id": "q_12154", "text": "Generate a Python AST node for accessing a potential Python module\n variable name."}
{"_id": "q_12155", "text": "Generate a Python AST node for accessing a potential Python module\n variable name with a namespace."}
{"_id": "q_12156", "text": "Generate Python AST nodes for constant Lisp forms.\n\n Nested values in collections for :const nodes are not parsed, so recursive\n structures need to call into this function to generate Python AST nodes for\n nested elements. For top-level :const Lisp AST nodes, see\n `_const_node_to_py_ast`."}
{"_id": "q_12157", "text": "Turn a quoted collection literal of Lisp forms into Python AST nodes.\n\n This function can only handle constant values. It does not call back into\n the generic AST generators, so only constant values will be generated down\n this path."}
{"_id": "q_12158", "text": "Take a Lisp AST node as an argument and produce zero or more Python\n AST nodes.\n\n This is the primary entrypoint for generating AST nodes from Lisp\n syntax. It may be called recursively to compile child forms."}
{"_id": "q_12159", "text": "Generate the Python Import AST node for importing all required\n language support modules."}
{"_id": "q_12160", "text": "Generate the Python From ... Import AST node for importing\n language support modules."}
{"_id": "q_12161", "text": "Assign a Python variable named `ns_var` to the value of the current\n namespace."}
{"_id": "q_12162", "text": "Creates a new set."}
{"_id": "q_12163", "text": "Eliminate dead code from except handler bodies."}
{"_id": "q_12164", "text": "Eliminate no-op constant expressions which are in the tree\n as standalone statements."}
{"_id": "q_12165", "text": "Eliminate dead code from function bodies."}
{"_id": "q_12166", "text": "Eliminate dead code from while bodies."}
{"_id": "q_12167", "text": "Eliminate dead code from except try bodies."}
{"_id": "q_12168", "text": "If o is an ISeq, return the first element from o. If o is None, return\n None. Otherwise, coerces o to a Seq and returns the first."}
{"_id": "q_12169", "text": "If o is an ISeq, return the elements after the first in o. If o is None,\n returns an empty seq. Otherwise, coerces o to a seq and returns the rest."}
{"_id": "q_12170", "text": "Returns the nth next sequence of coll."}
{"_id": "q_12171", "text": "Coerce the argument o to an ISeq. If o is None, return None."}
{"_id": "q_12172", "text": "Associate keys to values in associative data structure m. If m is None,\n returns a new Map with key-values kvs."}
{"_id": "q_12173", "text": "Conjoin xs to collection. New elements may be added in different positions\n depending on the type of coll. conj returns the same type as coll. If coll\n is None, return a list with xs conjoined."}
{"_id": "q_12174", "text": "Dereference a Deref object and return its contents.\n\n If o is an object implementing IBlockingDeref and timeout_s and\n timeout_val are supplied, deref will wait at most timeout_s seconds,\n returning timeout_val if timeout_s seconds elapse and o has not\n returned."}
{"_id": "q_12175", "text": "Compare two objects by value. Unlike the standard Python equality operator,\n this function does not consider 1 == True or 0 == False. All other equality\n operations are the same and performed using Python's equality operator."}
{"_id": "q_12176", "text": "Division reducer. If both arguments are integers, return a Fraction.\n Otherwise, return the true division of x and y."}
{"_id": "q_12177", "text": "Return a sorted sequence of the elements in coll. If a comparator\n function f is provided, compare elements in coll using f."}
{"_id": "q_12178", "text": "Return true if o contains the key k."}
{"_id": "q_12179", "text": "Return the value of k in m. Return default if k not found in m."}
{"_id": "q_12180", "text": "Recursively convert Python collections into Lisp collections."}
{"_id": "q_12181", "text": "Recursively convert Lisp collections into Python collections."}
{"_id": "q_12182", "text": "Produce a string representation of an object. If human_readable is False,\n the string representation of Lisp objects is something that can be read back\n in by the reader as the same object."}
{"_id": "q_12183", "text": "Trampoline a function repeatedly until it is finished recurring to help\n avoid stack growth."}
{"_id": "q_12184", "text": "Decorator to set attributes on a function. Returns the original\n function after setting the attributes named by the keyword arguments."}
{"_id": "q_12185", "text": "Create a Basilisp function, setting meta and supplying a with_meta\n method implementation."}
{"_id": "q_12186", "text": "Resolve the aliased symbol to a Var from the specified\n namespace, or the current namespace if none is specified."}
{"_id": "q_12187", "text": "Add generated Python code to a dynamic variable in which_ns."}
{"_id": "q_12188", "text": "Bootstrap the environment with functions that are difficult to\n express with the very minimal lisp environment."}
{"_id": "q_12189", "text": "Intern the value bound to the symbol `name` in namespace `ns`."}
{"_id": "q_12190", "text": "Create a new unbound `Var` instance to the symbol `name` in namespace `ns`."}
{"_id": "q_12191", "text": "Return the value currently bound to the name in the namespace specified\n by `ns_qualified_sym`."}
{"_id": "q_12192", "text": "Return the Var currently bound to the name in the namespace specified\n by `ns_qualified_sym`. If no Var is bound to that name, raise an exception.\n\n This is a utility method to return useful debugging information when code\n refers to an invalid symbol at runtime."}
{"_id": "q_12193", "text": "Add a gated default import to the default imports.\n\n In particular, we need to avoid importing 'basilisp.core' before we have\n finished macro-expanding."}
{"_id": "q_12194", "text": "Add a Symbol alias for the given Namespace."}
{"_id": "q_12195", "text": "Intern the Var given in this namespace mapped by the given Symbol.\n If the Symbol already maps to a Var, this method _will not overwrite_\n the existing Var mapping unless the force keyword argument is given\n and is True."}
{"_id": "q_12196", "text": "Swap function used by intern to atomically intern a new variable in\n the symbol mapping for this Namespace."}
{"_id": "q_12197", "text": "Add the Symbol as an imported Symbol in this Namespace. If aliases are given,\n the aliases will be applied to the"}
{"_id": "q_12198", "text": "Return the module if a module named by sym has been imported into\n this Namespace, None otherwise.\n\n First try to resolve a module directly with the given name. If no module\n can be resolved, attempt to resolve the module using import aliases."}
{"_id": "q_12199", "text": "Refer var in this namespace under the name sym."}
{"_id": "q_12200", "text": "Get the Var referred by Symbol or None if it does not exist."}
{"_id": "q_12201", "text": "Refer all _public_ interns from another namespace."}
{"_id": "q_12202", "text": "Get the namespace bound to the symbol `name` in the global namespace\n cache, creating it if it does not exist.\n Return the namespace."}
{"_id": "q_12203", "text": "Get the namespace bound to the symbol `name` in the global namespace\n cache. Return the namespace if it exists or None otherwise."}
{"_id": "q_12204", "text": "Remove the namespace bound to the symbol `name` in the global\n namespace cache and return that namespace.\n Return None if the namespace did not exist in the cache."}
{"_id": "q_12205", "text": "Return a function which matches any symbol keys from map entries\n against the given text."}
{"_id": "q_12206", "text": "Return an iterable of possible completions matching the given\n prefix from the list of imports and aliased imports. If name_in_module\n is given, further attempt to refine the list to matching names in that\n namespace."}
{"_id": "q_12207", "text": "Return an iterable of possible completions for the given text in\n this namespace."}
{"_id": "q_12208", "text": "Return the arguments for a trampolined function. If the function\n that is being trampolined has varargs, unroll the final argument if\n it is a sequence."}
{"_id": "q_12209", "text": "Creates a new list."}
{"_id": "q_12210", "text": "Creates a new list from members."}
{"_id": "q_12211", "text": "Regenerate the signing key for this instance. Store the new key in\n signing_key property.\n\n Take scope elements of the new key from the equivalent properties\n (region, service, date) of the current AWS4Auth instance. Scope\n elements can be overridden for the new key by supplying arguments to\n this function. If overrides are supplied update the current AWS4Auth\n instance's equivalent properties to match the new values.\n\n If secret_key is not specified use the value of the secret_key property\n of the current AWS4Auth instance's signing key. If the existing signing\n key is not storing its secret key (i.e. store_secret_key was set to\n False at instantiation) then raise a NoSecretKeyError and do not\n regenerate the key. In order to regenerate a key which is not storing\n its secret key, secret_key must be supplied to this function.\n\n Use the value of the existing key's store_secret_key property when\n generating the new key. If there is no existing key, then default\n to setting store_secret_key to True for new key."}
{"_id": "q_12212", "text": "Try to pull a date from the request by looking first at the\n x-amz-date header, and if that's not present then the Date header.\n\n Return a datetime.date object, or None if neither date header\n is found or is in a recognisable format.\n\n req -- a requests PreparedRequest object"}
{"_id": "q_12213", "text": "Check if date_str is in a recognised format and return an ISO\n yyyy-mm-dd format version if so. Raise DateFormatError if not.\n\n Recognised formats are:\n * RFC 7231 (e.g. Mon, 09 Sep 2011 23:36:00 GMT)\n * RFC 850 (e.g. Sunday, 06-Nov-94 08:49:37 GMT)\n * C time (e.g. Wed Dec 4 00:00:00 2002)\n * Amz-Date format (e.g. 20090325T010101Z)\n * ISO 8601 / RFC 3339 (e.g. 2009-03-25T10:11:12.13-01:00)\n\n date_str -- Str containing a date and optional time"}
{"_id": "q_12214", "text": "Create the AWS authentication Canonical Request string.\n\n req -- Requests PreparedRequest object. Should already\n include an x-amz-content-sha256 header\n cano_headers -- Canonical Headers section of Canonical Request, as\n returned by get_canonical_headers()\n signed_headers -- Signed Headers, as returned by\n get_canonical_headers()"}
{"_id": "q_12215", "text": "Generate the Canonical Headers section of the Canonical Request.\n\n Return the Canonical Headers and the Signed Headers strs as a tuple\n (canonical_headers, signed_headers).\n\n req -- Requests PreparedRequest object\n include -- List of headers to include in the canonical and signed\n headers. It's primarily included to allow testing against\n specific examples from Amazon. If omitted or None it\n includes host, content-type and any header starting 'x-amz-'\n except for x-amz-client-context, which appears to break\n mobile analytics auth if included. Except for the\n x-amz-client-context exclusion these defaults are per the\n AWS documentation."}
{"_id": "q_12216", "text": "Generate the AWS4 auth string to sign for the request.\n\n req -- Requests PreparedRequest object. This should already\n include an x-amz-date header.\n cano_req -- The Canonical Request, as returned by\n get_canonical_request()"}
{"_id": "q_12217", "text": "Generate the canonical path as per AWS4 auth requirements.\n\n Not documented anywhere, determined from aws4_testsuite examples,\n problem reports and testing against the live services.\n\n path -- request path"}
{"_id": "q_12218", "text": "Generate the signing key string as bytes.\n\n If intermediate is set to True, returns a 4-tuple containing the key\n and the intermediate keys:\n\n ( signing_key, date_key, region_key, service_key )\n\n The intermediate keys can be used for testing against examples from\n Amazon."}
{"_id": "q_12219", "text": "Generate an SHA256 HMAC, encoding msg to UTF-8 if not\n already encoded.\n\n key -- signing key. bytes.\n msg -- message to sign. unicode or bytes."}
{"_id": "q_12220", "text": "If maybe_dttm is a datetime instance, convert to a STIX-compliant\n string representation. Otherwise return the value unchanged."}
{"_id": "q_12221", "text": "Factors out some JSON parse code with error handling, to hopefully improve\n error messages.\n\n :param resp: A \"requests\" library response\n :return: Parsed JSON.\n :raises: InvalidJSONError If JSON parsing failed."}
{"_id": "q_12222", "text": "Updates Status information"}
{"_id": "q_12223", "text": "Update the properties of this API Root.\n\n This invokes the ``Get API Root Information`` endpoint."}
{"_id": "q_12224", "text": "Update the list of Collections contained by this API Root.\n\n This invokes the ``Get Collections`` endpoint."}
{"_id": "q_12225", "text": "Validates server information. Raises errors for required properties."}
{"_id": "q_12226", "text": "Update the Server information and list of API Roots"}
{"_id": "q_12227", "text": "Returns the amount of memory available for use.\n\n The memory is obtained from the MemTotal entry in /proc/meminfo.\n \n Notes\n =====\n This function is not very useful and not very portable."}
{"_id": "q_12228", "text": "Returns the default number of slave processes to be spawned.\n\n The default value is the number of physical cpu cores seen by python.\n :code:`OMP_NUM_THREADS` environment variable overrides it.\n\n On PBS/torque systems if OMP_NUM_THREADS is empty, we try to\n use the value of :code:`PBS_NUM_PPN` variable.\n\n Notes\n -----\n On some machines the physical number of cores does not equal\n the number of cpus that shall be used. PSC Blacklight for example."}
{"_id": "q_12229", "text": "Create a shared memory array from the shape of array."}
{"_id": "q_12230", "text": "Create a shared memory array with the same shape and type as a given array, filled with `value`."}
{"_id": "q_12231", "text": "Create a shared memory array of given shape and type, filled with `value`."}
{"_id": "q_12232", "text": "Copy an array to the shared memory. \n\n Notes\n -----\n copy is not always necessary because the private memory is always copy-on-write.\n\n Use :code:`a = copy(a)` to immediately dereference the old 'a' on private memory"}
{"_id": "q_12233", "text": "Wait and join the child process. \n The return value of the function call is returned.\n If any exception occurred it is wrapped and raised."}
{"_id": "q_12234", "text": "Map-reduce with multiple processes.\n\n Apply func to each item on the sequence, in parallel. \n As the results are collected, reduce is called on the result.\n The reduced result is returned as a list.\n \n Parameters\n ----------\n func : callable\n The function to call. It must accept the same number of\n arguments as the length of an item in the sequence.\n\n .. warning::\n\n func is not supposed to use exceptions for flow control.\n In non-debug mode all exceptions will be wrapped into\n a :py:class:`SlaveException`.\n\n sequence : list or array_like\n The sequence of arguments to be applied to func.\n\n reduce : callable, optional\n Apply a reduction operation on the \n return values of func. If func returns a tuple, they\n are treated as positional arguments of reduce.\n\n star : boolean\n if True, the items in sequence are treated as positional\n arguments of reduce.\n\n minlength: integer\n Minimal length of `sequence` to start parallel processing.\n if len(sequence) < minlength, fall back to sequential\n processing. This can be used to avoid the overhead of starting\n the worker processes when there is little work.\n \n Returns\n -------\n results : list\n The list of reduced results from the map operation, in\n the order of the arguments of sequence.\n \n Raises\n ------\n SlaveException\n If any of the slave processes encounters\n an exception. Inspect :py:attr:`SlaveException.reason` for the underlying exception."}
{"_id": "q_12235", "text": "Known issues: delimiter and newline are not respected.\n String quotation with space is broken."}
{"_id": "q_12236", "text": "Unpack a structured data-type."}
{"_id": "q_12237", "text": "meta class for Ordered construct."}
{"_id": "q_12238", "text": "kill all slaves and reap the monitor"}
{"_id": "q_12239", "text": "ensure the master exits from the Barrier"}
{"_id": "q_12240", "text": "axis is the axis to chop off.\n If self.altreduce is set, the results will\n be reduced with altreduce and returned;\n otherwise they will be saved to out, then out is returned."}
{"_id": "q_12241", "text": "adapt source to a packarray according to the layout of template"}
{"_id": "q_12242", "text": "This function is used to format the key value as a multi-line string maintaining the line breaks"}
{"_id": "q_12243", "text": "Return a random year."}
{"_id": "q_12244", "text": "Return a random `dt.date` object. Delta args are days."}
{"_id": "q_12245", "text": "Return a random credit card number.\n\n :param type: credit card type. Defaults to a random selection.\n :param length: length of the credit card number.\n Defaults to the length for the selected card type.\n :param prefixes: allowed prefixes for the card number.\n Defaults to prefixes for the selected card type.\n :return: credit card randomly generated number (int)"}
{"_id": "q_12246", "text": "Return a random job title."}
{"_id": "q_12247", "text": "Return a random email text."}
{"_id": "q_12248", "text": "Return a str of decimal with two digits after a decimal mark."}
{"_id": "q_12249", "text": "Return random words."}
{"_id": "q_12250", "text": "Return a random paragraph."}
{"_id": "q_12251", "text": "Return a lowercased string with non alphabetic chars removed.\n\n White spaces are not to be removed."}
{"_id": "q_12252", "text": "Return random characters."}
{"_id": "q_12253", "text": "An aggregator for all above defined public methods."}
{"_id": "q_12254", "text": "Return a random user name.\n\n Basically it's lowercased result of\n :py:func:`~forgery_py.forgery.name.first_name()` with a number appended\n if `with_num`."}
{"_id": "q_12255", "text": "Return a random domain name.\n\n Lowercased result of :py:func:`~forgery_py.forgery.name.company_name()`\n plus :py:func:`~top_level_domain()`."}
{"_id": "q_12256", "text": "Return random e-mail address in a hopefully imaginary domain.\n\n If `user` is ``None`` :py:func:`~user_name()` will be used. Otherwise it\n will be lowercased and will have spaces replaced with ``_``.\n\n Domain name is created using :py:func:`~domain_name()`."}
{"_id": "q_12257", "text": "Return a random bank identification number."}
{"_id": "q_12258", "text": "Return a random government registration ID for a company."}
{"_id": "q_12259", "text": "Return a random string for use as a password."}
{"_id": "q_12260", "text": "Custom json dump using the custom encoder above."}
{"_id": "q_12261", "text": "Handles decoding of nested date strings."}
{"_id": "q_12262", "text": "Override of the default decode method that also uses decode_date."}
{"_id": "q_12263", "text": "Generate changelog."}
{"_id": "q_12264", "text": "Find the strongly connected components in a graph using Tarjan's algorithm.\n\t\n\tThe `graph` argument should be a dictionary mapping node names to sequences of successor nodes."}
{"_id": "q_12265", "text": "Identify strongly connected components then perform a topological sort of those components."}
{"_id": "q_12266", "text": "Add an ``Operator`` to the ``Expression``.\n\n The ``Operator`` may result in a new ``Expression`` if an ``Operator``\n already exists and is of a different precedence.\n\n There are three possibilities when adding an ``Operator`` to an\n ``Expression`` depending on whether or not an ``Operator`` already\n exists:\n\n - No ``Operator`` on the working ``Expression``; Simply set the\n ``Operator`` and return ``self``.\n - ``Operator`` already exists and is higher in precedence; The\n ``Operator`` and last ``Constraint`` belong in a sub-expression of\n the working ``Expression``.\n - ``Operator`` already exists and is lower in precedence; The\n ``Operator`` belongs to the parent of the working ``Expression``\n whether one currently exists or not. To remain in the context of\n the top ``Expression``, this method will return the parent here\n rather than ``self``.\n\n Args:\n operator (Operator): What we are adding.\n\n Returns:\n Expression: ``self`` or related ``Expression``.\n\n Raises:\n FiqlObjectException: Operator is not a valid ``Operator``."}
{"_id": "q_12267", "text": "Add an element of type ``Operator``, ``Constraint``, or\n ``Expression`` to the ``Expression``.\n\n Args:\n element: ``Constraint``, ``Expression``, or ``Operator``.\n\n Returns:\n Expression: ``self``\n\n Raises:\n FiqlObjectException: Element is not a valid type."}
{"_id": "q_12268", "text": "Update the ``Expression`` by joining the specified additional\n ``elements`` using an \"AND\" ``Operator``\n\n Args:\n *elements (BaseExpression): The ``Expression`` and/or\n ``Constraint`` elements which the \"AND\" ``Operator`` applies\n to.\n\n Returns:\n Expression: ``self`` or related ``Expression``."}
{"_id": "q_12269", "text": "Update the ``Expression`` by joining the specified additional\n ``elements`` using an \"OR\" ``Operator``\n\n Args:\n *elements (BaseExpression): The ``Expression`` and/or\n ``Constraint`` elements which the \"OR\" ``Operator`` applies\n to.\n\n Returns:\n Expression: ``self`` or related ``Expression``."}
{"_id": "q_12270", "text": "Decorate passed in function and log message to module logger."}
{"_id": "q_12271", "text": "Parse received response.\n\n Parameters\n ----------\n incomming : bytes string\n Incoming bytes from socket server.\n\n Returns\n -------\n list of OrderedDict\n Received message as a list of OrderedDict."}
{"_id": "q_12272", "text": "Translate a list of tuples to OrderedDict with key and val as strings.\n\n Parameters\n ----------\n _list : list of tuples\n\n Returns\n -------\n collections.OrderedDict\n\n Example\n -------\n ::\n\n >>> tuples_as_dict([('cmd', 'val'), ('cmd2', 'val2')])\n OrderedDict([('cmd', 'val'), ('cmd2', 'val2')])"}
{"_id": "q_12273", "text": "Check if specific message is present.\n\n Parameters\n ----------\n cmd : string\n Command to check for in bytestring from microscope CAM interface. If\n ``value`` is falsey, value of received command does not matter.\n value : string\n Check if ``cmd:value`` is received.\n\n Returns\n -------\n collections.OrderedDict\n Correct message or None if no correct message is found."}
{"_id": "q_12274", "text": "Prepare message to be sent.\n\n Parameters\n ----------\n commands : list of tuples or bytes string\n Commands as a list of tuples or a bytes string. cam.prefix is\n always prepended before sending.\n\n Returns\n -------\n string\n Message to be sent."}
{"_id": "q_12275", "text": "Save scanning template to filename."}
{"_id": "q_12276", "text": "Load scanning template from filename.\n\n Template needs to exist in database, otherwise it will not load.\n\n Parameters\n ----------\n filename : str\n Filename to template to load. Filename may contain path also, in\n such case, the basename will be used. '.xml' will be stripped\n from the filename if it exists because of a bug; LASAF implicit\n add '.xml'. If '{ScanningTemplate}' is omitted, it will be added.\n\n Returns\n -------\n collections.OrderedDict\n Response from LASAF in an ordered dict.\n\n Example\n -------\n ::\n\n >>> # load {ScanningTemplate}leicacam.xml\n >>> cam.load_template('leicacam')\n\n >>> # load {ScanningTemplate}leicacam.xml\n >>> cam.load_template('{ScanningTemplate}leicacam')\n\n >>> # load {ScanningTemplate}leicacam.xml\n >>> cam.load_template('/path/to/{ScanningTemplate}leicacam.xml')"}
{"_id": "q_12277", "text": "Get information about given keyword. Defaults to stage."}
{"_id": "q_12278", "text": "r\"\"\"\n Include a Python source file in a docstring formatted in reStructuredText.\n\n :param fname: File name, relative to environment variable\n :bash:`${TRACER_DIR}`\n :type fname: string\n\n :param fpointer: Output function pointer. Normally is :code:`cog.out` but\n :code:`print` or other functions can be used for\n debugging\n :type fpointer: function object\n\n :param lrange: Line range to include, similar to Sphinx\n `literalinclude <http://sphinx-doc.org/markup/code.html\n #directive-literalinclude>`_ directive\n :type lrange: string\n\n :param sdir: Source file directory. If None the :bash:`${TRACER_DIR}`\n environment variable is used if it is defined, otherwise\n the directory where the :code:`docs.support.incfile` module\n is located is used\n :type sdir: string\n\n For example:\n\n .. code-block:: python\n\n def func():\n \\\"\\\"\\\"\n This is a docstring. This file shows how to use it:\n\n .. =[=cog\n .. import docs.support.incfile\n .. docs.support.incfile.incfile('func_example.py', cog.out)\n .. =]=\n .. code-block:: python\n\n # func_example.py\n if __name__ == '__main__':\n func()\n\n .. =[=end=]=\n \\\"\\\"\\\"\n return 'This is func output'"}
{"_id": "q_12279", "text": "Find and return the location of package.json."}
{"_id": "q_12280", "text": "Extract the JSPM configuration from package.json."}
{"_id": "q_12281", "text": "Validate response from YOURLS server."}
{"_id": "q_12282", "text": "Generate combined independent variable vector.\n\n The combination is from two waveforms and the (possibly interpolated)\n dependent variable vectors of these two waveforms"}
{"_id": "q_12283", "text": "Create new dependent variable vector."}
{"_id": "q_12284", "text": "Create new independent variable vector."}
{"_id": "q_12285", "text": "Verify that two waveforms can be combined with various mathematical functions."}
{"_id": "q_12286", "text": "Load the existing systemjs manifest and remove any entries that no longer\n exist on the storage."}
{"_id": "q_12287", "text": "Define trace parameters."}
{"_id": "q_12288", "text": "Run module tracing."}
{"_id": "q_12289", "text": "Shorten URL with optional keyword and title.\n\n Parameters:\n url: URL to shorten.\n keyword: Optionally choose keyword for short URL, otherwise automatic.\n title: Optionally choose title, otherwise taken from web page.\n\n Returns:\n ShortenedURL: Shortened URL and associated data.\n\n Raises:\n ~yourls.exceptions.YOURLSKeywordExistsError: The passed keyword\n already exists.\n\n .. note::\n\n This exception has a ``keyword`` attribute.\n\n ~yourls.exceptions.YOURLSURLExistsError: The URL has already been\n shortened.\n\n .. note::\n\n This exception has a ``url`` attribute, which is an instance\n of :py:class:`ShortenedURL` for the existing short URL.\n\n ~yourls.exceptions.YOURLSNoURLError: URL missing.\n\n ~yourls.exceptions.YOURLSNoLoopError: Cannot shorten a shortened URL.\n\n ~yourls.exceptions.YOURLSAPIError: Unhandled API error.\n\n ~yourls.exceptions.YOURLSHTTPError: HTTP error with response from\n YOURLS API.\n\n requests.exceptions.HTTPError: Generic HTTP error."}
{"_id": "q_12290", "text": "Expand short URL or keyword to long URL.\n\n Parameters:\n short: Short URL (``http://example.com/abc``) or keyword (abc).\n\n :return: Expanded/long URL, e.g.\n ``https://www.youtube.com/watch?v=dQw4w9WgXcQ``\n\n Raises:\n ~yourls.exceptions.YOURLSHTTPError: HTTP error with response from\n YOURLS API.\n requests.exceptions.HTTPError: Generic HTTP error."}
{"_id": "q_12291", "text": "Get stats for short URL or keyword.\n\n Parameters:\n short: Short URL (http://example.com/abc) or keyword (abc).\n\n Returns:\n ShortenedURL: Shortened URL and associated data.\n\n Raises:\n ~yourls.exceptions.YOURLSHTTPError: HTTP error with response from\n YOURLS API.\n requests.exceptions.HTTPError: Generic HTTP error."}
{"_id": "q_12292", "text": "Get stats about links.\n\n Parameters:\n filter: 'top', 'bottom', 'rand', or 'last'.\n limit: Number of links to return from filter.\n start: Optional start number.\n\n Returns:\n Tuple containing list of ShortenedURLs and DBStats.\n\n Example:\n\n .. code-block:: python\n\n links, stats = yourls.stats(filter='top', limit=10)\n\n Raises:\n ValueError: Incorrect value for filter parameter.\n requests.exceptions.HTTPError: Generic HTTP Error"}
{"_id": "q_12293", "text": "r\"\"\"\n Echo terminal output.\n\n Print STDOUT resulting from a given Bash shell command (relative to the\n package :code:`pypkg` directory) formatted in reStructuredText\n\n :param command: Bash shell command, relative to\n :bash:`${PMISC_DIR}/pypkg`\n :type command: string\n\n :param nindent: Indentation level\n :type nindent: integer\n\n :param mdir: Module directory\n :type mdir: string\n\n :param fpointer: Output function pointer. Normally is :code:`cog.out` but\n :code:`print` or other functions can be used for\n debugging\n :type fpointer: function object\n\n For example::\n\n .. This is a reStructuredText file snippet\n .. [[[cog\n .. import os, sys\n .. from docs.support.term_echo import term_echo\n .. file_name = sys.modules['docs.support.term_echo'].__file__\n .. mdir = os.path.realpath(\n .. os.path.dirname(\n .. os.path.dirname(os.path.dirname(file_name))\n .. )\n .. )\n .. [[[cog ste('build_docs.py -h', 0, mdir, cog.out) ]]]\n\n .. code-block:: bash\n\n $ ${PMISC_DIR}/pypkg/build_docs.py -h\n usage: build_docs.py [-h] [-d DIRECTORY] [-n NUM_CPUS]\n ...\n\n .. ]]]"}
{"_id": "q_12294", "text": "Small log helper"}
{"_id": "q_12295", "text": "break an iterable into chunks and yield those chunks as lists\n until there's nothing left to yield."}
{"_id": "q_12296", "text": "recursively flatten nested objects"}
{"_id": "q_12297", "text": "add a handler for SIGINT that optionally prints a given message.\n For stopping scripts without having to see the stacktrace."}
{"_id": "q_12298", "text": "Make a placeholder object that uses its own name for its repr"}
{"_id": "q_12299", "text": "attempt to parse a size in bytes from a human-readable string."}
{"_id": "q_12300", "text": "Trace eng wave module exceptions."}
{"_id": "q_12301", "text": "Define Sphinx requirements links."}
{"_id": "q_12302", "text": "Generate Python interpreter version entries for 2.x or 3.x series."}
{"_id": "q_12303", "text": "Generate Python interpreter version entries."}
{"_id": "q_12304", "text": "Translate requirement specification to words."}
{"_id": "q_12305", "text": "Chunk input noise data into valid Touchstone file rows."}
{"_id": "q_12306", "text": "r\"\"\"\n Write a `Touchstone`_ file.\n\n Parameter data is first resized to an :code:`points` x :code:`nports` x\n :code:`nports` where :code:`points` represents the number of frequency\n points and :code:`nports` represents the number of ports in the file; then\n parameter data is written to file in scientific notation\n\n :param fname: Touchstone file name\n :type fname: `FileNameExists <https://pexdoc.readthedocs.io/en/stable/\n ptypes.html#filenameexists>`_\n\n :param options: Touchstone file options\n :type options: :ref:`TouchstoneOptions`\n\n :param data: Touchstone file parameter data\n :type data: :ref:`TouchstoneData`\n\n :param noise: Touchstone file parameter noise data (only supported in\n two-port files)\n :type noise: :ref:`TouchstoneNoiseData`\n\n :param frac_length: Number of digits to use in fractional part of data\n :type frac_length: non-negative integer\n\n :param exp_length: Number of digits to use in exponent\n :type exp_length: positive integer\n\n .. [[[cog cog.out(exobj.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.touchstone.write_touchstone\n\n :raises:\n * RuntimeError (Argument \\`data\\` is not valid)\n\n * RuntimeError (Argument \\`exp_length\\` is not valid)\n\n * RuntimeError (Argument \\`fname\\` is not valid)\n\n * RuntimeError (Argument \\`frac_length\\` is not valid)\n\n * RuntimeError (Argument \\`noise\\` is not valid)\n\n * RuntimeError (Argument \\`options\\` is not valid)\n\n * RuntimeError (File *[fname]* does not have a valid extension)\n\n * RuntimeError (Malformed data)\n\n * RuntimeError (Noise data only supported in two-port files)\n\n .. [[[end]]]"}
{"_id": "q_12307", "text": "Add independent variable vector bounds if they are not in vector."}
{"_id": "q_12308", "text": "Build unit math operations."}
{"_id": "q_12309", "text": "Perform generic operation on a waveform object."}
{"_id": "q_12310", "text": "Calculate running area under curve."}
{"_id": "q_12311", "text": "r\"\"\"\n Return the hyperbolic arc cosine of a waveform's dependent variable vector.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.acosh\n\n :raises:\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * ValueError (Math domain error)\n\n .. [[[end]]]"}
{"_id": "q_12312", "text": "r\"\"\"\n Return the arc sine of a waveform's dependent variable vector.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.asin\n\n :raises:\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * ValueError (Math domain error)\n\n .. [[[end]]]"}
{"_id": "q_12313", "text": "r\"\"\"\n Return a waveform's dependent variable vector expressed in decibels.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for peng.wave_functions.db\n\n :raises:\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * ValueError (Math domain error)\n\n .. [[[end]]]"}
{"_id": "q_12314", "text": "r\"\"\"\n Return the imaginary part of the Fast Fourier Transform of a waveform.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param npoints: Number of points to use in the transform. If **npoints**\n is less than the size of the independent variable vector\n the waveform is truncated; if **npoints** is greater than\n the size of the independent variable vector, the waveform\n is zero-padded\n :type npoints: positive integer\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.ffti\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`npoints\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n * RuntimeError (Non-uniform sampling)\n\n .. [[[end]]]"}
{"_id": "q_12315", "text": "r\"\"\"\n Return the phase of the Fast Fourier Transform of a waveform.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param npoints: Number of points to use in the transform. If **npoints**\n is less than the size of the independent variable vector\n the waveform is truncated; if **npoints** is greater than\n the size of the independent variable vector, the waveform\n is zero-padded\n :type npoints: positive integer\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :param unwrap: Flag that indicates whether phase should change phase shifts\n to their :code:`2*pi` complement (True) or not (False)\n :type unwrap: boolean\n\n :param rad: Flag that indicates whether phase should be returned in radians\n (True) or degrees (False)\n :type rad: boolean\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.fftp\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`npoints\\` is not valid)\n\n * RuntimeError (Argument \\`rad\\` is not valid)\n\n * RuntimeError (Argument \\`unwrap\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n * RuntimeError (Non-uniform sampling)\n\n .. [[[end]]]"}
{"_id": "q_12316", "text": "r\"\"\"\n Return the real part of the Fast Fourier Transform of a waveform.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param npoints: Number of points to use in the transform. If **npoints**\n is less than the size of the independent variable vector\n the waveform is truncated; if **npoints** is greater than\n the size of the independent variable vector, the waveform\n is zero-padded\n :type npoints: positive integer\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.fftr\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`npoints\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n * RuntimeError (Non-uniform sampling)\n\n .. [[[end]]]"}
{"_id": "q_12317", "text": "r\"\"\"\n Return the inverse Fast Fourier Transform of a waveform.\n\n The dependent variable vector of the returned waveform is expressed in decibels\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param npoints: Number of points to use in the transform. If **npoints**\n is less than the size of the independent variable vector\n the waveform is truncated; if **npoints** is greater than\n the size of the independent variable vector, the waveform\n is zero-padded\n :type npoints: positive integer\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.ifftdb\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`npoints\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n * RuntimeError (Non-uniform frequency spacing)\n\n .. [[[end]]]"}
{"_id": "q_12318", "text": "r\"\"\"\n Return the imaginary part of the inverse Fast Fourier Transform of a waveform.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param npoints: Number of points to use in the transform. If **npoints**\n is less than the size of the independent variable vector\n the waveform is truncated; if **npoints** is greater than\n the size of the independent variable vector, the waveform\n is zero-padded\n :type npoints: positive integer\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.iffti\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`npoints\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n * RuntimeError (Non-uniform frequency spacing)\n\n .. [[[end]]]"}
{"_id": "q_12319", "text": "r\"\"\"\n Return the phase of the inverse Fast Fourier Transform of a waveform.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param npoints: Number of points to use in the transform. If **npoints**\n is less than the size of the independent variable vector\n the waveform is truncated; if **npoints** is greater than\n the size of the independent variable vector, the waveform\n is zero-padded\n :type npoints: positive integer\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :param unwrap: Flag that indicates whether phase should change phase shifts\n to their :code:`2*pi` complement (True) or not (False)\n :type unwrap: boolean\n\n :param rad: Flag that indicates whether phase should be returned in radians\n (True) or degrees (False)\n :type rad: boolean\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.ifftp\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`npoints\\` is not valid)\n\n * RuntimeError (Argument \\`rad\\` is not valid)\n\n * RuntimeError (Argument \\`unwrap\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n * RuntimeError (Non-uniform frequency spacing)\n\n .. [[[end]]]"}
{"_id": "q_12320", "text": "r\"\"\"\n Return the real part of the inverse Fast Fourier Transform of a waveform.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param npoints: Number of points to use in the transform. If **npoints**\n is less than the size of the independent variable vector\n the waveform is truncated; if **npoints** is greater than\n the size of the independent variable vector, the waveform\n is zero-padded\n :type npoints: positive integer\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.ifftr\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`npoints\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n * RuntimeError (Non-uniform frequency spacing)\n\n .. [[[end]]]"}
{"_id": "q_12321", "text": "r\"\"\"\n Return the running integral of a waveform's dependent variable vector.\n\n The method used is the `trapezoidal\n <https://en.wikipedia.org/wiki/Trapezoidal_rule>`_ method\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.integral\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n .. [[[end]]]"}
{"_id": "q_12322", "text": "r\"\"\"\n Return the group delay of a waveform.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.group_delay\n\n :raises: RuntimeError (Argument \\`wave\\` is not valid)\n\n .. [[[end]]]"}
{"_id": "q_12323", "text": "r\"\"\"\n Return the natural logarithm of a waveform's dependent variable vector.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for peng.wave_functions.log\n\n :raises:\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * ValueError (Math domain error)\n\n .. [[[end]]]"}
{"_id": "q_12324", "text": "r\"\"\"\n Return the numerical average of a waveform's dependent variable vector.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.naverage\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n .. [[[end]]]"}
{"_id": "q_12325", "text": "r\"\"\"\n Return the minimum of a waveform's dependent variable vector.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :rtype: float\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.nmin\n\n :raises:\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n .. [[[end]]]"}
{"_id": "q_12326", "text": "r\"\"\"\n Round a waveform's dependent variable vector to a given number of decimal places.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param decimals: Number of decimals to round to\n :type decimals: integer\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.round\n\n :raises:\n * RuntimeError (Argument \\`decimals\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n .. [[[end]]]"}
{"_id": "q_12327", "text": "r\"\"\"\n Return the square root of a waveform's dependent variable vector.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.sqrt\n\n :raises: RuntimeError (Argument \\`wave\\` is not valid)\n\n .. [[[end]]]"}
{"_id": "q_12328", "text": "r\"\"\"\n Return a waveform that is a sub-set of a waveform, potentially re-sampled.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param dep_name: Independent variable name\n :type dep_name: `NonNullString <https://pexdoc.readthedocs.io/en/stable/\n ptypes.html#nonnullstring>`_\n\n :param indep_min: Independent vector start point of computation\n :type indep_min: integer or float\n\n :param indep_max: Independent vector stop point of computation\n :type indep_max: integer or float\n\n :param indep_step: Independent vector step\n :type indep_step: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc(raised=True)) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.subwave\n\n :raises:\n * RuntimeError (Argument \\`dep_name\\` is not valid)\n\n * RuntimeError (Argument \\`indep_max\\` is not valid)\n\n * RuntimeError (Argument \\`indep_min\\` is not valid)\n\n * RuntimeError (Argument \\`indep_step\\` is greater than independent\n vector range)\n\n * RuntimeError (Argument \\`indep_step\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * RuntimeError (Incongruent \\`indep_min\\` and \\`indep_max\\`\n arguments)\n\n .. [[[end]]]"}
{"_id": "q_12329", "text": "r\"\"\"\n Convert a waveform's dependent variable vector to complex.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.wcomplex\n\n :raises: RuntimeError (Argument \\`wave\\` is not valid)\n\n .. [[[end]]]"}
{"_id": "q_12330", "text": "r\"\"\"\n Convert a waveform's dependent variable vector to float.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.wfloat\n\n :raises:\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * TypeError (Cannot convert complex to float)\n\n .. [[[end]]]"}
{"_id": "q_12331", "text": "r\"\"\"\n Convert a waveform's dependent variable vector to integer.\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.wint\n\n :raises:\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * TypeError (Cannot convert complex to integer)\n\n .. [[[end]]]"}
{"_id": "q_12332", "text": "r\"\"\"\n Return the dependent variable value at a given independent variable point.\n\n If the independent variable point is not in the independent variable vector\n the dependent variable value is obtained by linear interpolation\n\n :param wave: Waveform\n :type wave: :py:class:`peng.eng.Waveform`\n\n :param indep_var: Independent variable point for which the dependent\n variable is to be obtained\n :type indep_var: integer or float\n\n :rtype: :py:class:`peng.eng.Waveform`\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.wave_functions.wvalue\n\n :raises:\n * RuntimeError (Argument \\`indep_var\\` is not valid)\n\n * RuntimeError (Argument \\`wave\\` is not valid)\n\n * ValueError (Argument \\`indep_var\\` is not in the independent\n variable vector range)\n\n .. [[[end]]]"}
{"_id": "q_12333", "text": "Only allow lookups for jspm_packages.\n\n # TODO: figure out the 'jspm_packages' dir from packag.json."}
{"_id": "q_12334", "text": "Build mathematical expression from hierarchical list."}
{"_id": "q_12335", "text": "Return position of next matching closing delimiter."}
{"_id": "q_12336", "text": "Parse function calls."}
{"_id": "q_12337", "text": "Pair delimiters."}
{"_id": "q_12338", "text": "Parse mathematical expression using PyParsing."}
{"_id": "q_12339", "text": "Return list of the words in the string, using count of a separator as delimiter.\n\n :param text: String to split\n :type text: string\n\n :param sep: Separator\n :type sep: string\n\n :param count: Number of separators to use as delimiter\n :type count: integer\n\n :param lstrip: Flag that indicates whether whitespace is removed\n from the beginning of each list item (True) or not\n (False)\n :type lstrip: boolean\n\n :param rstrip: Flag that indicates whether whitespace is removed\n from the end of each list item (True) or not (False)\n :type rstrip: boolean\n\n :rtype: tuple"}
{"_id": "q_12340", "text": "r\"\"\"\n Return floating point equivalent of a number represented in engineering notation.\n\n :param snum: Number\n :type snum: :ref:`EngineeringNotationNumber`\n\n :rtype: string\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.functions.peng_float\n\n :raises: RuntimeError (Argument \\`snum\\` is not valid)\n\n .. [[[end]]]\n\n For example:\n\n >>> import peng\n >>> peng.peng_float(peng.peng(1235.6789E3, 3, False))\n 1236000.0"}
{"_id": "q_12341", "text": "r\"\"\"\n Return the fractional part of a number represented in engineering notation.\n\n :param snum: Number\n :type snum: :ref:`EngineeringNotationNumber`\n\n :rtype: integer\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.functions.peng_frac\n\n :raises: RuntimeError (Argument \\`snum\\` is not valid)\n\n .. [[[end]]]\n\n For example:\n\n >>> import peng\n >>> peng.peng_frac(peng.peng(1235.6789E3, 3, False))\n 236"}
{"_id": "q_12342", "text": "r\"\"\"\n Return the mantissa of a number represented in engineering notation.\n\n :param snum: Number\n :type snum: :ref:`EngineeringNotationNumber`\n\n :rtype: float\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.functions.peng_mant\n\n :raises: RuntimeError (Argument \\`snum\\` is not valid)\n\n .. [[[end]]]\n\n For example:\n\n >>> import peng\n >>> peng.peng_mant(peng.peng(1235.6789E3, 3, False))\n 1.236"}
{"_id": "q_12343", "text": "r\"\"\"\n Return engineering suffix from a starting suffix and an number of suffixes offset.\n\n :param suffix: Engineering suffix\n :type suffix: :ref:`EngineeringNotationSuffix`\n\n :param offset: Engineering suffix offset\n :type offset: integer\n\n :rtype: string\n\n .. [[[cog cog.out(exobj_eng.get_sphinx_autodoc()) ]]]\n .. Auto-generated exceptions documentation for\n .. peng.functions.peng_suffix_math\n\n :raises:\n * RuntimeError (Argument \\`offset\\` is not valid)\n\n * RuntimeError (Argument \\`suffix\\` is not valid)\n\n * ValueError (Argument \\`offset\\` is not valid)\n\n .. [[[end]]]\n\n For example:\n\n >>> import peng\n >>> peng.peng_suffix_math('u', 6)\n 'T'"}
{"_id": "q_12344", "text": "r\"\"\"\n Remove unnecessary delimiters in mathematical expressions.\n\n Delimiters (parenthesis, brackets, etc.) may be removed either because\n there are multiple consecutive delimiters enclosing a single expressions or\n because the delimiters are implied by operator precedence rules. Function\n names must start with a letter and then can contain alphanumeric characters\n and a maximum of one underscore\n\n :param expr: Mathematical expression\n :type expr: string\n\n :param ldelim: Single character left delimiter\n :type ldelim: string\n\n :param rdelim: Single character right delimiter\n :type rdelim: string\n\n :rtype: string\n\n :raises:\n * RuntimeError (Argument \\`expr\\` is not valid)\n\n * RuntimeError (Argument \\`ldelim\\` is not valid)\n\n * RuntimeError (Argument \\`rdelim\\` is not valid)\n\n * RuntimeError (Function name `*[function_name]*` is not valid)\n\n * RuntimeError (Mismatched delimiters)"}
{"_id": "q_12345", "text": "Convert number or number string to a number string in scientific notation.\n\n Full precision is maintained if the number is represented as a string\n\n :param number: Number to convert\n :type number: number or string\n\n :param frac_length: Number of digits of fractional part, None indicates\n that the fractional part of the number should not be\n limited\n :type frac_length: integer or None\n\n :param exp_length: Number of digits of the exponent; the actual length of\n the exponent takes precedence if it is longer\n :type exp_length: integer or None\n\n :param sign_always: Flag that indicates whether the sign always\n precedes the number for both non-negative and negative\n numbers (True) or only for negative numbers (False)\n :type sign_always: boolean\n\n :rtype: string\n\n For example:\n\n >>> import peng\n >>> peng.to_scientific_string(333)\n '3.33E+2'\n >>> peng.to_scientific_string(0.00101)\n '1.01E-3'\n >>> peng.to_scientific_string(99.999, 1, 2, True)\n '+1.0E+02'"}
{"_id": "q_12346", "text": "Return mantissa and exponent of a number in scientific notation.\n\n Full precision is maintained if the number is represented as a string\n\n :param number: Number\n :type number: integer, float or string\n\n :rtype: named tuple in which the first item is the mantissa (*string*)\n and the second item is the exponent (*integer*) of the number\n when expressed in scientific notation\n\n For example:\n\n >>> import peng\n >>> peng.to_scientific_tuple('135.56E-8')\n NumComp(mant='1.3556', exp=-6)\n >>> peng.to_scientific_tuple(0.0000013556)\n NumComp(mant='1.3556', exp=-6)"}
{"_id": "q_12347", "text": "Using a schema, deserialize a stream of consecutive Avro values.\n\n :param str schema: json string representing the Avro schema\n :param file-like stream: a buffered stream of binary input\n :param int buffer_size: size of bytes to read from the stream each time\n :return: yields a sequence of python data structures deserialized\n from the stream"}
{"_id": "q_12348", "text": "Seeks and removes the sourcemap comment. If found, the sourcemap line is\n returned.\n\n Bundled output files can have massive amounts of lines, and the sourceMap\n comment is always at the end. So, to extract it efficiently, we read out the\n lines of the file starting from the end. We look back at most 2 lines.\n\n :param:filepath: path to output bundle file containing the sourcemap comment\n :param:blocksize: integer saying how many bytes to read at once\n :return:string with the sourcemap comment or None"}
{"_id": "q_12349", "text": "Check whether `self.app` is missing the '.js' extension and if it needs it."}
{"_id": "q_12350", "text": "Trace the dependencies for app.\n\n A tracer-instance is shortlived, and re-tracing the same app should\n yield the same results. Since tracing is an expensive process, cache\n the result on the tracer instance."}
{"_id": "q_12351", "text": "Convert the bytes object to a hexdump.\n\n The output format will be:\n\n <offset, 4-byte> <16-bytes of output separated by 1 space> <16 ascii characters>"}
{"_id": "q_12352", "text": "Get a list of all valid identifiers for the current context.\n\n Returns:\n list(str): A list of all of the valid identifiers for this context"}
{"_id": "q_12353", "text": "Split a line into arguments using shlex and a dequoting routine."}
{"_id": "q_12354", "text": "Check if our context matches something that we have initialization commands\n for. If so, run them to initialize the context before proceeding with other\n commands."}
{"_id": "q_12355", "text": "Return help information for a context or function."}
{"_id": "q_12356", "text": "Find a function in the given context by name.\n\n This function will first search the list of builtins and if the\n desired function is not a builtin, it will continue to search\n the given context.\n\n Args:\n context (object): A dict or class that is a typedargs context\n funname (str): The name of the function to find\n\n Returns:\n callable: The found function."}
{"_id": "q_12357", "text": "Return a listing of all of the functions in this context including builtins.\n\n Args:\n context (object): The context to print a directory for.\n\n Returns:\n str"}
{"_id": "q_12358", "text": "Process arguments from the command line into positional and kw args.\n\n Arguments are consumed until the argument spec for the function is filled\n or a -- is found or there are no more arguments. Keyword arguments can be\n specified using --field=value, -f value or --field value. Positional\n arguments are specified just on the command line itself.\n\n If a keyword argument (`field`) is a boolean, it can be set to True by just passing\n --field or -f without needing to explicitly pass True unless this would cause\n ambiguity in parsing since the next expected positional argument is also a boolean\n or a string.\n\n Args:\n func (callable): A function previously annotated with type information\n args (list): A list of all of the potential arguments to this function.\n\n Returns:\n (args, kw_args, unused args): A tuple with a list of args, a dict of\n keyword args and a list of any unused args that were not processed."}
{"_id": "q_12359", "text": "Try to find the value for a keyword argument."}
{"_id": "q_12360", "text": "Invoke a function given a list of arguments with the function listed first.\n\n The function is searched for using the current context on the context stack\n and its annotated type information is used to convert all of the string parameters\n passed in line to appropriate python types.\n\n Args:\n line (list): The list of command line arguments.\n\n Returns:\n (object, list, bool): A tuple containing the return value of the function, if any,\n a boolean specifying if the function created a new context (False if a new context\n was created) and a list with the remainder of the command line if this function\n did not consume all arguments."}
{"_id": "q_12361", "text": "Invoke a one or more function given a list of arguments.\n\n The functions are searched for using the current context on the context stack\n and its annotated type information is used to convert all of the string parameters\n passed in line to appropriate python types.\n\n Args:\n line (list): The list of command line arguments.\n\n Returns:\n bool: A boolean specifying if the last function created a new context\n (False if a new context was created) and a list with the remainder of the\n command line if this function did not consume all arguments.)"}
{"_id": "q_12362", "text": "Parse and invoke a string line.\n\n Args:\n line (str): The line that we want to parse and invoke.\n\n Returns:\n bool: A boolean specifying if the last function created a new context\n (False if a new context was created) and a list with the remainder of the\n command line if this function did not consume all arguments.)"}
{"_id": "q_12363", "text": "Parse a single typed parameter statement."}
{"_id": "q_12364", "text": "Parse a single return statement declaration.\n\n The valid types of return declarion are a Returns: section heading\n followed a line that looks like:\n type [format-as formatter]: description\n\n OR\n\n type [show-as (string | context)]: description sentence"}
{"_id": "q_12365", "text": "Attempt to find the canonical name of this section."}
{"_id": "q_12366", "text": "Join adjacent lines together into paragraphs using either a blank line or indent as separator."}
{"_id": "q_12367", "text": "Wrap, format and print this docstring for a specific width.\n\n Args:\n width (int): The number of characters per line. If set to None\n this will be inferred from the terminal width and default\n to 80 if not passed or if passed as None and the terminal\n width cannot be determined.\n include_return (bool): Include the return information section\n in the output.\n include_params (bool): Include a parameter information section\n in the output.\n excluded_params (list): An optional list of parameter names to exclude.\n Options for excluding things are, for example, 'self' or 'cls'."}
{"_id": "q_12368", "text": "Convert value to type 'typename'\n\n If the conversion routine takes various kwargs to\n modify the conversion process, \\\\**kwargs is passed\n through to the underlying conversion function"}
{"_id": "q_12369", "text": "Convert value to type and format it as a string\n\n type must be a known type in the type system and format,\n if given, must specify a valid formatting option for the\n specified type."}
{"_id": "q_12370", "text": "Check if type is known to the type system.\n\n Returns:\n bool: True if the type is a known instantiated simple type, False otherwise"}
{"_id": "q_12371", "text": "Instantiate a complex type."}
{"_id": "q_12372", "text": "Check if format is known for given type.\n\n Returns boolean indicating if format is valid for the specified type."}
{"_id": "q_12373", "text": "Check if we have enough arguments to call this function.\n\n Args:\n pos_args (list): A list of all the positional values we have.\n kw_args (dict): A dict of all of the keyword args we have.\n\n Returns:\n bool: True if we have a filled spec, False otherwise."}
{"_id": "q_12374", "text": "Add type information for a parameter by name.\n\n Args:\n name (str): The name of the parameter we wish to annotate\n type_name (str): The name of the parameter's type\n validators (list): A list of either strings or n tuples that each\n specify a validator defined for type_name. If a string is passed,\n the validator is invoked with no extra arguments. If a tuple is\n passed, the validator will be invoked with the extra arguments.\n desc (str): Optional parameter description."}
{"_id": "q_12375", "text": "Add type information to the return value of this function.\n\n Args:\n type_name (str): The name of the type of the return value.\n formatter (str): An optional name of a formatting function specified\n for the type given in type_name."}
{"_id": "q_12376", "text": "Use a custom function to print the return value.\n\n Args:\n printer (callable): A function that should take in the return\n value and convert it to a string.\n desc (str): An optional description of the return value."}
{"_id": "q_12377", "text": "Try to convert a prefix into a parameter name.\n\n If the result could be ambiguous or there is no matching\n parameter, throw an ArgumentError\n\n Args:\n name (str): A prefix for a parameter name\n filled_args (list): A list of filled positional arguments that will be\n removed from consideration.\n\n Returns:\n str: The full matching parameter name"}
{"_id": "q_12378", "text": "Get the parameter type information by name.\n\n Args:\n name (str): The full name of a parameter.\n\n Returns:\n str: The type name or None if no type information is given."}
{"_id": "q_12379", "text": "Return our function signature as a string.\n\n By default this function uses the annotated name of the function\n however if you need to override that with a custom name you can\n pass name=<custom name>\n\n Args:\n name (str): Optional name to override the default name given\n in the function signature.\n\n Returns:\n str: The formatted function signature"}
{"_id": "q_12380", "text": "Format the return value of this function as a string.\n\n Args:\n value (object): The return value that we are supposed to format.\n\n Returns:\n str: The formatted return value, or None if this function indicates\n that it does not return data"}
{"_id": "q_12381", "text": "Convert and validate a positional argument.\n\n Args:\n index (int): The positional index of the argument\n arg_value (object): The value to convert and validate\n\n Returns:\n object: The converted value."}
{"_id": "q_12382", "text": "Check if there are any missing or duplicate arguments.\n\n Args:\n pos_args (list): A list of arguments that will be passed as positional\n arguments.\n kwargs (dict): A dictionary of the keyword arguments that will be passed.\n\n Returns:\n dict: A dictionary of argument name to argument value, pulled from either\n the value passed or the default value if no argument is passed.\n\n Raises:\n ArgumentError: If a positional or keyword argument does not fit in the spec.\n ValidationError: If an argument is passed twice."}
{"_id": "q_12383", "text": "Format this exception as a string including class name.\n\n Args:\n exclude_class (bool): Whether to exclude the exception class\n name when formatting this exception\n\n Returns:\n string: a multiline string with the message, class name and\n key value parameters passed to create the exception."}
{"_id": "q_12384", "text": "Check the type of all parameters with type information, converting\n as appropriate and then execute the function."}
{"_id": "q_12385", "text": "Find all annotated function inside of a container.\n\n Annotated functions are identified as those that:\n - do not start with a _ character\n - are either annotated with metadata\n - or strings that point to lazily loaded modules\n\n Args:\n container (object): The container to search for annotated functions.\n\n Returns:\n dict: A dict with all of the found functions in it."}
{"_id": "q_12386", "text": "Given a module, create a context from all of the top level annotated\n symbols in that module."}
{"_id": "q_12387", "text": "Decorate a function to give type information about its parameters.\n\n This function stores a type name, optional description and optional list\n of validation functions along with the decorated function it is called\n on in order to allow run time type conversions and validation.\n\n Args:\n name (string): name of the parameter\n type_name (string): The name of a type that will be known to the type\n system by the time this function is called for the first time. Types\n are lazily loaded so it is not required that the type resolve correctly\n at the point in the module where this function is defined.\n validators (list(string or tuple)): A list of validators. Each validator\n can be defined either using a string name or as an n-tuple of the form\n [name, \\\\*extra_args]. The name is used to look up a validator function\n of the form validate_name, which is called on the parameters value to\n determine if it is valid. If extra_args are given, they are passed\n as extra arguments to the validator function, which is called as:\n\n validator(value, \\\\*extra_args)\n desc (string): An optional description for this parameter that must be\n passed as a keyword argument.\n\n Returns:\n callable: A decorated function with additional type metadata"}
{"_id": "q_12388", "text": "Declare that a class defines a context.\n\n Contexts are for use with HierarchicalShell for discovering\n and using functionality from the command line.\n\n Args:\n name (str): Optional name for this context if you don't want\n to just use the class name."}
{"_id": "q_12389", "text": "Annotate a function using information from its docstring.\n\n The annotation actually happens at the time the function is first called\n to improve startup time. For this function to work, the docstring must be\n formatted correctly. You should use the typedargs pylint plugin to make\n sure there are no errors in the docstring."}
{"_id": "q_12390", "text": "Given an object with a docstring, return the first line of the docstring"}
{"_id": "q_12391", "text": "Download a file pointed to by url to a temp file on local disk\n\n :param str url:\n :return: local_file"}
{"_id": "q_12392", "text": "This method finds the time range which covers all the Run_Steps\n\n :param run_steps: list of Run_Step objects\n :return: tuple of start and end timestamps"}
{"_id": "q_12393", "text": "Helper function to parse diff config file, which contains SLA rules for diff comparisons"}
{"_id": "q_12394", "text": "Check if the specified file exists and is not empty\n\n :param filename: full path to the file that needs to be checked\n :return: Status, Message"}
{"_id": "q_12395", "text": "Given an input timestamp string, determine what format it is likely in.\n\n :param string timestamp: the timestamp string for which we need to determine format\n :return: best guess timestamp format"}
{"_id": "q_12396", "text": "Given a timestamp string, return a time stamp in the epoch ms format. If no date is present in\n timestamp then today's date will be added as a prefix before conversion to epoch ms"}
{"_id": "q_12397", "text": "Extract SLAs from a set of rules"}
{"_id": "q_12398", "text": "Organize and store the count of data from the log line into the metric store by metric type, transaction, timestamp\n\n :param dict metric_store: The metric store used to store all the parsed jmeter log data\n :param dict line_data: dict with the extracted k:v from the log line\n :param list transaction_list: list of transaction to be used for storing the metrics from given line\n :param string aggregate_timestamp: timestamp used for storing the raw data. This accounts for aggregation time period\n :return: None"}
{"_id": "q_12399", "text": "Parse Jmeter workload output in XML format and extract overall and per transaction data and key statistics\n\n :param string granularity: The time period over which to aggregate and average the raw data. Valid values are 'hour', 'minute' or 'second'\n :return: status of the metric parse"}
{"_id": "q_12400", "text": "Given a size such as '2333M', return the converted value in G"}
{"_id": "q_12401", "text": "Parse the top output file\n Return status of the metric parse\n\n The raw log file is like the following:\n 2014-06-23\n top - 00:00:02 up 18 days, 7:08, 19 users, load average: 0.05, 0.03, 0.00\n Tasks: 447 total, 1 running, 443 sleeping, 2 stopped, 1 zombie\n Cpu(s): 1.6%us, 0.5%sy, 0.0%ni, 97.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n Mem: 62.841G total, 15.167G used, 47.675G free, 643.434M buffers\n Swap: 63.998G total, 0.000k used, 63.998G free, 11.324G cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 1730 root 20 0 4457m 10m 3328 S 1.9 0.0 80:13.45 lwregd\n The log lines can be generated by echo $t >> $RESULT/top.out &; top -b -n $COUNT -d $INTERVAL | grep -A 40 '^top' >> $RESULT/top.out &"}
{"_id": "q_12402", "text": "get a list of urls from a seeding url, return a list of urls\n\n :param str url: a full/absolute url, e.g. http://www.cnn.com/logs/\n :return: a list of full/absolute urls."}
{"_id": "q_12403", "text": "Generate CDF diff plots of the submetrics"}
{"_id": "q_12404", "text": "Return a timestamp from the raw epoch time based on the granularity preferences passed in.\n\n :param string timestamp: timestamp from the log line\n :param string granularity: aggregation granularity used for plots.\n :return: string aggregate_timestamp: timestamp used for metrics aggregation in all functions"}
{"_id": "q_12405", "text": "Organize and store the count of data from the log line into the metric store by column, group name, timestamp\n\n :param dict metric_store: The metric store used to store all the parsed log data\n :param string groupby_name: the group name that the log line belongs to\n :param string aggregate_timestamp: timestamp used for storing the raw data. This accounts for aggregation time period\n :return: None"}
{"_id": "q_12406", "text": "Calculate stats such as percentile and mean\n\n :param dict metric_store: The metric store used to store all the parsed log data\n :return: None"}
{"_id": "q_12407", "text": "Compute anomaly scores for the time series."}
{"_id": "q_12408", "text": "Method to extract SAR metric names from the section given in the config. The SARMetric class assumes that\n the section name will contain the SAR types listed in self.supported_sar_types tuple\n\n :param str metric_name: Section name from the config\n :return: str which identifies what kind of SAR metric the section represents"}
{"_id": "q_12409", "text": "Perform the Oct2Py speed analysis.\r\n\r\n Uses timeit to test the raw execution of an Octave command,\r\n Then tests progressively larger array passing."}
{"_id": "q_12410", "text": "Quits this octave session and cleans up."}
{"_id": "q_12411", "text": "Restart an Octave session in a clean state"}
{"_id": "q_12412", "text": "Run the given function with the given args."}
{"_id": "q_12413", "text": "Test whether the name is an object."}
{"_id": "q_12414", "text": "Get or create a function pointer of the given name."}
{"_id": "q_12415", "text": "Get or create a user class of the given type."}
{"_id": "q_12416", "text": "Clean up resources used by the session."}
{"_id": "q_12417", "text": "Read the data from the given file path."}
{"_id": "q_12418", "text": "Save a Python object to an Octave file on the given path."}
{"_id": "q_12419", "text": "Convert the Octave values to values suitable for Python."}
{"_id": "q_12420", "text": "Create a struct from session data."}
{"_id": "q_12421", "text": "Convert the Python values to values suitable to send to Octave."}
{"_id": "q_12422", "text": "Test if a list contains simple numeric data."}
{"_id": "q_12423", "text": "Configure root logger."}
{"_id": "q_12424", "text": "Get a pointer to the private object."}
{"_id": "q_12425", "text": "Decorator to make functional view documentable via drf-autodocs"}
{"_id": "q_12426", "text": "Return true if file is a valid RAR file."}
{"_id": "q_12427", "text": "Read current member header into a RarInfo object."}
{"_id": "q_12428", "text": "Load archive members metadata."}
{"_id": "q_12429", "text": "Return a list of file names in the archive."}
{"_id": "q_12430", "text": "Print a table of contents for the RAR file."}
{"_id": "q_12431", "text": "Extract the RarInfo objects 'members' to a physical\n file on the path targetpath."}
{"_id": "q_12432", "text": "Convert a RAR archive member DOS time to a Python time tuple."}
{"_id": "q_12433", "text": "Wrap c function setting prototype."}
{"_id": "q_12434", "text": "Register tasks with cron."}
{"_id": "q_12435", "text": "Print the tasks that would be installed in the\n crontab, for debugging purposes."}
{"_id": "q_12436", "text": "Create a project handler\n\n Args:\n uri (str): schema://something formatted uri\n local_path (str): the project configs directory\n\n Return:\n ProjectHandler derived class instance"}
{"_id": "q_12437", "text": "Get the dependencies of the Project\n\n Args:\n recursive (bool): add the dependant project's dependencies too\n\n Returns:\n dict of project name and project instances"}
{"_id": "q_12438", "text": "Calls the project handler's same-named function\n\n Note: the project handler may add some extra arguments to the command,\n so when using this decorator, add **kwargs to the end of the arguments"}
{"_id": "q_12439", "text": "Takes an object and an iterable and produces a new object that is\n a copy of the original with data from ``iterable`` reincorporated. It\n is intended as the inverse of the ``to_iter`` function. Any state in\n ``self`` that is not modelled by the iterable should remain unchanged.\n\n The following equality should hold for your definition:\n\n .. code-block:: python\n\n from_iter(self, to_iter(self)) == self\n\n This function is used by EachLens to synthesise states from iterables,\n allowing it to focus every element of an iterable state.\n\n The corresponding method call for this hook is\n ``obj._lens_from_iter(iterable)``.\n\n There is no default implementation."}
{"_id": "q_12440", "text": "Set the focus to `newvalue`.\n\n >>> from lenses import lens\n >>> set_item_one_to_four = lens[1].set(4)\n >>> set_item_one_to_four([1, 2, 3])\n [1, 4, 3]"}
{"_id": "q_12441", "text": "Set many foci to values taken by iterating over `new_values`.\n\n >>> from lenses import lens\n >>> lens.Each().set_many(range(4, 7))([0, 1, 2])\n [4, 5, 6]"}
{"_id": "q_12442", "text": "Apply a function to the focus.\n\n >>> from lenses import lens\n >>> convert_item_one_to_string = lens[1].modify(str)\n >>> convert_item_one_to_string([1, 2, 3])\n [1, '2', 3]\n >>> add_ten_to_item_one = lens[1].modify(lambda n: n + 10)\n >>> add_ten_to_item_one([1, 2, 3])\n [1, 12, 3]"}
{"_id": "q_12443", "text": "Runs the lens over the `state` applying `f` to all the foci\n collecting the results together using the applicative functor\n functions defined in `lenses.typeclass`. `f` must return an\n applicative functor. For the case when no focus exists you must\n also provide a `pure` which should take a focus and return the\n pure form of the functor returned by `f`."}
{"_id": "q_12444", "text": "Returns a list of all the foci within `state`.\n\n Requires kind Fold. This method will raise TypeError if the\n optic has no way to get any foci."}
{"_id": "q_12445", "text": "Applies a function `fn` to all the foci within `state`.\n\n Requires kind Setter. This method will raise TypeError when the\n optic has no way to set foci."}
{"_id": "q_12446", "text": "The main function. Instantiates a GameState object and then\n enters a REPL-like main loop, waiting for input, updating the state\n based on the input, then outputting the new state."}
{"_id": "q_12447", "text": "returns the vector moved one step in the direction of the\n other, potentially diagonally."}
{"_id": "q_12448", "text": "Produces a new game state in which the robots have advanced\n towards the player by one step. Handles the robots crashing into\n one another too."}
{"_id": "q_12449", "text": "Returns a completed game state object, setting an optional\n message to display after the game is over."}
{"_id": "q_12450", "text": "Shows the board to the player on the console and asks them to\n make a move."}
{"_id": "q_12451", "text": "Play a game of naughts and crosses against the computer."}
{"_id": "q_12452", "text": "Return a board with a cell filled in by the current player. If\n the cell is already occupied then return the board unchanged."}
{"_id": "q_12453", "text": "The winner of this board if one exists."}
{"_id": "q_12454", "text": "Process single item. Add item to items and then upload to S3 if size of items\n >= max_chunk_size."}
{"_id": "q_12455", "text": "Do upload items to S3."}
{"_id": "q_12456", "text": "Build file object from items."}
{"_id": "q_12457", "text": "Parse a savefile as a pcap_savefile instance. Returns the savefile\n on success and None on failure. Verbose mode prints additional information\n about the file's processing. layers defines how many layers to descend and\n decode the packet. input_file should be a Python file object."}
{"_id": "q_12458", "text": "Reads the next individual packet from the capture file. Expects\n the file handle to be somewhere after the header, on the next\n per-packet header."}
{"_id": "q_12459", "text": "Strip the Ethernet frame from a packet."}
{"_id": "q_12460", "text": "Returns the asset information associated with a specific asset ID.\n\n :param asset_id:\n an asset identifier (the transaction ID of the RegistTransaction when the asset is\n registered)\n :type asset_id: str\n :return: dictionary containing the asset state information\n :rtype: dict"}
{"_id": "q_12461", "text": "Returns the block information associated with a specific hash value or block index.\n\n :param block_hash: a block hash value or a block index (block height)\n :param verbose:\n a boolean indicating whether the detailed block information should be returned in JSON\n format (otherwise the block information is returned as an hexadecimal string by the\n JSON-RPC endpoint)\n :type block_hash: str or int\n :type verbose: bool\n :return:\n dictionary containing the block information (or an hexadecimal string if verbose is set\n to False)\n :rtype: dict or str"}
{"_id": "q_12462", "text": "Returns the system fees associated with a specific block index.\n\n :param block_index: a block index (block height)\n :type block_index: int\n :return: system fees of the block, expressed in NeoGas units\n :rtype: str"}
{"_id": "q_12463", "text": "Returns the contract information associated with a specific script hash.\n\n :param script_hash: contract script hash\n :type script_hash: str\n :return: dictionary containing the contract information\n :rtype: dict"}
{"_id": "q_12464", "text": "Returns detailed information associated with a specific transaction hash.\n\n :param tx_hash: transaction hash\n :param verbose:\n a boolean indicating whether the detailed transaction information should be returned in\n JSON format (otherwise the transaction information is returned as an hexadecimal string\n by the JSON-RPC endpoint)\n :type tx_hash: str\n :type verbose: bool\n :return:\n dictionary containing the transaction information (or an hexadecimal string if verbose\n is set to False)\n :rtype: dict or str"}
{"_id": "q_12465", "text": "Returns the value stored in the storage of a contract script hash for a given key.\n\n :param script_hash: contract script hash\n :param key: key to look up in the storage\n :type script_hash: str\n :type key: str\n :return: value associated with the storage key\n :rtype: bytearray"}
{"_id": "q_12466", "text": "Invokes a contract's function with given parameters and returns the result.\n\n :param script_hash: contract script hash\n :param operation: name of the operation to invoke\n :param params: list of parameters to be passed in to the smart contract\n :type script_hash: str\n :type operation: str\n :type params: list\n :return: result of the invocation\n :rtype: dictionary"}
{"_id": "q_12467", "text": "Invokes a script on the VM and returns the result.\n\n :param script: script runnable by the VM\n :type script: str\n :return: result of the invocation\n :rtype: dictionary"}
{"_id": "q_12468", "text": "Broadcasts a transaction over the NEO network and returns the result.\n\n :param hextx: hexadecimal string that has been serialized\n :type hextx: str\n :return: result of the transaction\n :rtype: bool"}
{"_id": "q_12469", "text": "Validates if the considered string is a valid NEO address.\n\n :param hex: string containing a potential NEO address\n :type hex: str\n :return: dictionary containing the result of the verification\n :rtype: dictionary"}
{"_id": "q_12470", "text": "Calls the JSON-RPC endpoint."}
{"_id": "q_12471", "text": "Returns True if the considered string is a valid SHA256 hash."}
{"_id": "q_12472", "text": "Returns a list of parameters meant to be passed to JSON-RPC endpoints."}
{"_id": "q_12473", "text": "Tries to decode the values embedded in an invocation result dictionary."}
{"_id": "q_12474", "text": "Emulates keyword-only arguments under python2. Works with both python2 and python3.\n With this decorator you can convert all or some of the default arguments of your function\n into kwonly arguments. Use ``KWONLY_REQUIRED`` as the default value of required kwonly args.\n\n :param name: The name of the first default argument to be treated as a keyword-only argument. This default\n argument along with all default arguments that follow this one will be treated as keyword only arguments.\n\n You can also pass here the ``FIRST_DEFAULT_ARG`` constant in order to select the first default argument. This\n way you turn all default arguments into keyword-only arguments. As a shortcut you can use the\n ``@kwonly_defaults`` decorator (without any parameters) instead of ``@first_kwonly_arg(FIRST_DEFAULT_ARG)``.\n\n >>> from kwonly_args import first_kwonly_arg, KWONLY_REQUIRED, FIRST_DEFAULT_ARG, kwonly_defaults\n >>>\n >>> # this decoration converts the ``d1`` and ``d2`` default args into kwonly args\n >>> @first_kwonly_arg('d1')\n >>> def func(a0, a1, d0='d0', d1='d1', d2='d2', *args, **kwargs):\n >>> print(a0, a1, d0, d1, d2, args, kwargs)\n >>>\n >>> func(0, 1, 2, 3, 4)\n 0 1 2 d1 d2 (3, 4) {}\n >>>\n >>> func(0, 1, 2, 3, 4, d2='my_param')\n 0 1 2 d1 my_param (3, 4) {}\n >>>\n >>> # d0 is an optional keyword argument, d1 is required\n >>> def func(d0='d0', d1=KWONLY_REQUIRED):\n >>> print(d0, d1)\n >>>\n >>> # The ``FIRST_DEFAULT_ARG`` constant automatically selects the first default argument so it\n >>> # turns all default arguments into keyword-only ones. Both d0 and d1 are keyword-only arguments.\n >>> @first_kwonly_arg(FIRST_DEFAULT_ARG)\n >>> def func(a0, a1, d0='d0', d1='d1'):\n >>> print(a0, a1, d0, d1)\n >>>\n >>> # ``@kwonly_defaults`` is a shortcut for the ``@first_kwonly_arg(FIRST_DEFAULT_ARG)``\n >>> # in the previous example. This example has the same effect as the previous one.\n >>> @kwonly_defaults\n >>> def func(a0, a1, d0='d0', d1='d1'):\n >>> print(a0, a1, d0, d1)"}
{"_id": "q_12475", "text": "Call Heartbeat URL"}
{"_id": "q_12476", "text": "sends a request and gets a response from the Plivo REST API\n\n path: the URL (relative to the endpoint URL, after the /v1\n method: the HTTP method to use, defaults to POST\n data: for POST or PUT, a dict of data to send\n\n returns Plivo response in XML or raises an exception on error"}
{"_id": "q_12477", "text": "REST Reload Plivo Config helper"}
{"_id": "q_12478", "text": "REST Reload Plivo Cache Config helper"}
{"_id": "q_12479", "text": "REST Call Helper"}
{"_id": "q_12480", "text": "REST BulkCalls Helper"}
{"_id": "q_12481", "text": "REST GroupCalls Helper"}
{"_id": "q_12482", "text": "REST Transfer Live Call Helper"}
{"_id": "q_12483", "text": "REST Hangup All Live Calls Helper"}
{"_id": "q_12484", "text": "REST Hangup Live Call Helper"}
{"_id": "q_12485", "text": "REST Schedule Hangup Helper"}
{"_id": "q_12486", "text": "REST Cancel a Scheduled Hangup Helper"}
{"_id": "q_12487", "text": "REST RecordStart helper"}
{"_id": "q_12488", "text": "REST Play something on a Call Helper"}
{"_id": "q_12489", "text": "REST Cancel a Scheduled Play Helper"}
{"_id": "q_12490", "text": "REST Remove soundtouch audio effects on a Call"}
{"_id": "q_12491", "text": "REST Send digits to a Call"}
{"_id": "q_12492", "text": "REST Conference Unmute helper"}
{"_id": "q_12493", "text": "REST Conference Hangup helper"}
{"_id": "q_12494", "text": "REST Conference Deaf helper"}
{"_id": "q_12495", "text": "REST Conference RecordStart helper"}
{"_id": "q_12496", "text": "REST Conference Speak helper"}
{"_id": "q_12497", "text": "REST Conference List Members Helper"}
{"_id": "q_12498", "text": "Return an XML element representing this element"}
{"_id": "q_12499", "text": "validate a request from plivo\n\n uri: the full URI that Plivo requested on your server\n postVars: post vars that Plivo sent with the request\n expectedSignature: signature in HTTP X-Plivo-Signature header\n\n returns true if the request passes validation, false if not"}
{"_id": "q_12500", "text": "Breadth-first search.\n\n .. seealso::\n `Wikipedia BFS descritpion <http://en.wikipedia.org/wiki/Breadth-first_search>`_\n\n :param root: first to start the search\n :return: list of nodes"}
{"_id": "q_12501", "text": "Return a node with label. Create node if label is new"}
{"_id": "q_12502", "text": "Parse dom into a Graph.\n\n :param dom: dom as returned by minidom.parse or minidom.parseString\n :return: A Graph representation"}
{"_id": "q_12503", "text": "Parse a string into a Graph.\n\n :param string: String that is to be parsed into a Graph\n :return: Graph"}
{"_id": "q_12504", "text": "This function handles timezone aware datetimes.\n Sometimes it is necessary to keep daylight saving time switches in mind.\n\n Args:\n instruction (string): a string that encodes 0 to n transformations of a time, i.e. \"-1h@h\", \"@mon+2d+4h\", ...\n dttm (datetime): a datetime with timezone\n timezone: a pytz timezone\n Returns:\n datetime: The datetime resulting from applying all transformations to the input datetime.\n\n Example:\n >>> import pytz\n >>> CET = pytz.timezone(\"Europe/Berlin\")\n >>> dttm = CET.localize(datetime(2017, 3, 26, 3, 44))\n >>> dttm\n datetime.datetime(2017, 3, 26, 3, 44, tzinfo=<DstTzInfo 'Europe/Berlin' CEST+2:00:00 DST>)\n\n >>> snap_tz(dttm, \"-2h@h\", CET)\n datetime.datetime(2017, 3, 26, 0, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CET+1:00:00 STD>)\n >>> # switch from winter to summer time!"}
{"_id": "q_12505", "text": "Renders the barcode and saves it in `filename`.\n\n :parameters:\n filename : String\n Filename to save the barcode in (without filename\n extension).\n options : Dict\n The same as in `self.render`.\n\n :returns: The full filename with extension.\n :rtype: String"}
{"_id": "q_12506", "text": "Renders the barcode using `self.writer`.\n\n :parameters:\n writer_options : Dict\n Options for `self.writer`, see writer docs for details.\n\n :returns: Output of the writers render method."}
{"_id": "q_12507", "text": "Calculates the checksum for EAN13-Code.\n\n :returns: The checksum for `self.ean`.\n :rtype: Integer"}
{"_id": "q_12508", "text": "Renders the barcode to whatever the inheriting writer provides,\n using the registered callbacks.\n\n :parameters:\n code : List\n List of strings matching the writer spec\n (only contain 0 or 1)."}
{"_id": "q_12509", "text": "A to-bytes specific to Python 3.5 and below."}
{"_id": "q_12510", "text": "Small helper value to pack an index value into bytecode.\n\n This is used for version compat between 3.5- and 3.6+\n\n :param index: The item to pack.\n :return: The packed item."}
{"_id": "q_12511", "text": "Generates a simple call, with an index for something.\n\n :param opcode: The opcode to generate.\n :param index: The index to use as an argument.\n :return:"}
{"_id": "q_12512", "text": "Upload a file or folder to the S3-like service.\n\n If LOCAL_PATH is a folder, the files and subfolder structure in LOCAL_PATH are copied to REMOTE_PATH.\n\n If LOCAL_PATH is a file, the REMOTE_PATH file is created with the same contents."}
{"_id": "q_12513", "text": "For each section defined in the local config file, look for a folder inside the local config folder\n named after the section. Uploads the environment file named as in the S3CONF variable for this section\n to the remote S3CONF path."}
{"_id": "q_12514", "text": "Compiles Pyte objects into a bytecode list.\n\n :param code: A list of objects to compile.\n :return: The computed bytecode."}
{"_id": "q_12515", "text": "Simulates the actions of the stack, to check safety.\n\n This returns the maximum needed stack."}
{"_id": "q_12516", "text": "Parse the file structure to a data structure given the path to\n a module directory."}
{"_id": "q_12517", "text": "Parse a file structure to a data structure given the path to\n a collection directory."}
{"_id": "q_12518", "text": "Parse a litezip file structure to a data structure given the path\n to the litezip directory."}
{"_id": "q_12519", "text": "Converts a completezip file structure to a litezip file structure.\n Returns a litezip data structure."}
{"_id": "q_12520", "text": "Iterate over the instructions in a bytecode string.\n\n Generates a sequence of Instruction namedtuples giving the details of each\n opcode. Additional information about the code's runtime environment\n (e.g. variable names, constants) can be specified using optional\n arguments."}
{"_id": "q_12521", "text": "Format instruction details for inclusion in disassembly output\n\n *lineno_width* sets the width of the line number field (0 omits it)\n *mark_as_current* inserts a '-->' marker arrow as part of the line"}
{"_id": "q_12522", "text": "Returns intersection of two lists. Assumes the lists are sorted by start positions"}
{"_id": "q_12523", "text": "Sorts list in place, then removes any intervals that are completely\n contained inside another interval"}
{"_id": "q_12524", "text": "Returns true iff this interval intersects the interval i"}
{"_id": "q_12525", "text": "If intervals intersect, returns their union, otherwise returns None"}
{"_id": "q_12526", "text": "If intervals intersect, returns their intersection, otherwise returns None"}
{"_id": "q_12527", "text": "Returns Fasta object with the same name, of the bases from start to end, but not including end"}
{"_id": "q_12528", "text": "Returns a list of ORFs that the sequence has, starting on the given\n frame. Each returned ORF is an interval.Interval object.\n If revomp=True, then finds the ORFs of the reverse complement\n of the sequence."}
{"_id": "q_12529", "text": "Returns true iff length is >= 6, is a multiple of 3, and there is exactly one stop codon in the sequence and it is at the end"}
{"_id": "q_12530", "text": "Returns a Fastq object. qual_scores expected to be a list of numbers, like you would get in a .qual file"}
{"_id": "q_12531", "text": "Removes any leading or trailing N or n characters from the sequence"}
{"_id": "q_12532", "text": "Returns the number of sequences in a file"}
{"_id": "q_12533", "text": "Makes interleaved file from two sequence files. If used, will append suffix1 onto end\n of every sequence name in infile_1, unless it already ends with suffix1. Similar for suffix2."}
{"_id": "q_12534", "text": "Makes a file of contigs from scaffolds by splitting at every N.\n Use number_contigs=True to add .1, .2, etc onto end of each\n contig, instead of default to append coordinates."}
{"_id": "q_12535", "text": "Sorts input sequence file by biggest sequence first, writes sorted output file. Set smallest_first=True to have smallest first"}
{"_id": "q_12536", "text": "Sorts input sequence file by sort -d -k1,1, writes sorted output file."}
{"_id": "q_12537", "text": "Writes a FASTG file in SPAdes format from input file. Currently only whether or not a sequence is circular is supported. Put circular=set of ids, or circular=filename to make those sequences circular in the output. Puts coverage=1 on all contigs"}
{"_id": "q_12538", "text": "Add basic authentication to the requests of the clients."}
{"_id": "q_12539", "text": "yield objects from json files in the folder and subfolders."}
{"_id": "q_12540", "text": "Return a dict of schema names mapping to a Schema.\n\n The schema is of type schul_cloud_resources_api_v1.schema.Schema"}
{"_id": "q_12541", "text": "Return the schema."}
{"_id": "q_12542", "text": "Return a jsonschema.RefResolver for the schemas.\n\n All schemas returned by get_schemas() are resolved locally."}
{"_id": "q_12543", "text": "Validate an object against the schema.\n\n This function just passes if the schema matches the object.\n If the object does not match the schema, a ValidationException is raised.\n This error allows debugging."}
{"_id": "q_12544", "text": "Return a list of valid examples for the given schema."}
{"_id": "q_12545", "text": "Implements PBKDF2 with the same API as Django's existing\n implementation, using cryptography.\n\n :type password: any\n :type salt: any\n :type iterations: int\n :type dklen: int\n :type digest: cryptography.hazmat.primitives.hashes.HashAlgorithm"}
{"_id": "q_12546", "text": "A decorator for creating encrypted model fields.\n\n :type base_field: models.Field[T]\n :param bytes key: This is an optional argument.\n\n Allows for specifying an instance specific encryption key.\n :param int ttl: This is an optional argument.\n\n The amount of time in seconds that a value can be stored for. If the\n time to live of the data has passed, it will become unreadable.\n The expired value will return an :class:`Expired` object.\n :rtype: models.Field[EncryptedMixin, T]"}
{"_id": "q_12547", "text": "Pickled data is serialized as base64"}
{"_id": "q_12548", "text": "Returns a PEP 386-compliant version number from VERSION."}
{"_id": "q_12549", "text": "Process tokens and errors from redirect_uri."}
{"_id": "q_12550", "text": "Get OneDrive object representing list of objects in a folder."}
{"_id": "q_12551", "text": "Create a folder with a specified \"name\" attribute.\n\t\t\t\tfolder_id allows specifying a parent folder.\n\t\t\t\tmetadata mapping may contain additional folder properties to pass to an API."}
{"_id": "q_12552", "text": "Add comment message to a specified object."}
{"_id": "q_12553", "text": "Convert or dump object to unicode."}
{"_id": "q_12554", "text": "Return a value check function which raises a ValueError if the value does\n not match the supplied regular expression, see also `re.match`."}
{"_id": "q_12555", "text": "Return a value check function which raises a ValueError if the supplied\n regular expression does not match anywhere in the value, see also\n `re.search`."}
{"_id": "q_12556", "text": "Return a value check function which raises a ValueError if the supplied\n value when cast as `type` is less than `min` or greater than `max`."}
{"_id": "q_12557", "text": "Return a value check function which raises a ValueError if the supplied\n value when cast as `type` is less than or equal to `min` or greater than\n or equal to `max`."}
{"_id": "q_12558", "text": "Return a value check function which raises a ValueError if the supplied\n value when converted to a datetime using the supplied `format` string is\n less than `min` or greater than `max`."}
{"_id": "q_12559", "text": "Add a record length check, i.e., check whether the length of a record is\n consistent with the number of expected fields.\n\n Arguments\n ---------\n\n `code` - problem code to report if a record is not valid, defaults to\n `RECORD_LENGTH_CHECK_FAILED`\n\n `message` - problem message to report if a record is not valid\n\n `modulus` - apply the check to every nth record, defaults to 1 (check\n every record)"}
{"_id": "q_12560", "text": "Add a value check function for the specified field.\n\n Arguments\n ---------\n\n `field_name` - the name of the field to attach the value check function\n to\n\n `value_check` - a function that accepts a single argument (a value) and\n raises a `ValueError` if the value is not valid\n\n `code` - problem code to report if a value is not valid, defaults to\n `VALUE_CHECK_FAILED`\n\n `message` - problem message to report if a value is not valid\n\n `modulus` - apply the check to every nth record, defaults to 1 (check\n every record)"}
{"_id": "q_12561", "text": "Add a value predicate function for the specified field.\n\n N.B., everything you can do with value predicates can also be done with\n value check functions, whether you use one or the other is a matter of\n style.\n\n Arguments\n ---------\n\n `field_name` - the name of the field to attach the value predicate\n function to\n\n `value_predicate` - a function that accepts a single argument (a value)\n and returns False if the value is not valid\n\n `code` - problem code to report if a value is not valid, defaults to\n `VALUE_PREDICATE_FALSE`\n\n `message` - problem message to report if a value is not valid\n\n `modulus` - apply the check to every nth record, defaults to 1 (check\n every record)"}
{"_id": "q_12562", "text": "Add a record check function.\n\n Arguments\n ---------\n\n `record_check` - a function that accepts a single argument (a record as\n a dictionary of values indexed by field name) and raises a\n `RecordError` if the record is not valid\n\n `modulus` - apply the check to every nth record, defaults to 1 (check\n every record)"}
{"_id": "q_12563", "text": "Add a record predicate function.\n\n N.B., everything you can do with record predicates can also be done with\n record check functions, whether you use one or the other is a matter of\n style.\n\n Arguments\n ---------\n\n `record_predicate` - a function that accepts a single argument (a record\n as a dictionary of values indexed by field name) and returns False if\n the value is not valid\n\n `code` - problem code to report if a record is not valid, defaults to\n `RECORD_PREDICATE_FALSE`\n\n `message` - problem message to report if a record is not valid\n\n `modulus` - apply the check to every nth record, defaults to 1 (check\n every record)"}
{"_id": "q_12564", "text": "Add a unique check on a single column or combination of columns.\n\n Arguments\n ---------\n\n `key` - a single field name (string) specifying a field in which all\n values are expected to be unique, or a sequence of field names (tuple\n or list of strings) specifying a compound key\n\n `code` - problem code to report if a record is not valid, defaults to\n `UNIQUE_CHECK_FAILED`\n\n `message` - problem message to report if a record is not valid"}
{"_id": "q_12565", "text": "Initialise sets used for uniqueness checking."}
{"_id": "q_12566", "text": "Apply header checks on the given record `r`."}
{"_id": "q_12567", "text": "Apply record length checks on the given record `r`."}
{"_id": "q_12568", "text": "Invoke 'each' methods on `r`."}
{"_id": "q_12569", "text": "Apply 'assert' methods on `r`."}
{"_id": "q_12570", "text": "Apply 'check' methods on `r`."}
{"_id": "q_12571", "text": "Apply skip functions on `r`."}
{"_id": "q_12572", "text": "Create an example CSV validator for patient demographic data."}
{"_id": "q_12573", "text": "Recursively create and set the drop target for obj and its children"}
{"_id": "q_12574", "text": "Event handler for drag&drop functionality"}
{"_id": "q_12575", "text": "track default top level window for toolbox menu default action"}
{"_id": "q_12576", "text": "Open the inspector windows for a given object"}
{"_id": "q_12577", "text": "Loads HTML page from location and then displays it"}
{"_id": "q_12578", "text": "Process an outgoing communication"}
{"_id": "q_12579", "text": "Called by SelectionTag"}
{"_id": "q_12580", "text": "support cursor keys to move components one pixel at a time"}
{"_id": "q_12581", "text": "Capture the new control superficial image after an update"}
{"_id": "q_12582", "text": "When dealing with a Top-Level window, position it at the absolute lower-right"}
{"_id": "q_12583", "text": "Returns the python item data associated with the item"}
{"_id": "q_12584", "text": "Set the python item data associated with the wx item"}
{"_id": "q_12585", "text": "Remove the item from the list and unset the related data"}
{"_id": "q_12586", "text": "Remove all items and column headings"}
{"_id": "q_12587", "text": "Returns the label of the selected item or an empty string if none"}
{"_id": "q_12588", "text": "Construct a string representing the object"}
{"_id": "q_12589", "text": "Create a new object exactly like self"}
{"_id": "q_12590", "text": "called when adding a control to the window"}
{"_id": "q_12591", "text": "Re-parent a child control with the new wx_obj parent"}
{"_id": "q_12592", "text": "make several copies of the background bitmap"}
{"_id": "q_12593", "text": "Draw the image as background"}
{"_id": "q_12594", "text": "Custom draws the label when transparent background is needed"}
{"_id": "q_12595", "text": "Look for every file in the directory tree and return a dict\r\n Hacked from sphinx.autodoc"}
{"_id": "q_12596", "text": "Return a list of children sub-components that are column headings"}
{"_id": "q_12597", "text": "Update the grid if rows and columns have been added or deleted"}
{"_id": "q_12598", "text": "update the column attributes to add the appropriate renderer"}
{"_id": "q_12599", "text": "col -> sort the data based on the column indexed by col"}
{"_id": "q_12600", "text": "Remove all rows and reset internal structures"}
{"_id": "q_12601", "text": "Fetch the value from the table and prepare the edit control"}
{"_id": "q_12602", "text": "Complete the editing of the current cell. Returns True if changed"}
{"_id": "q_12603", "text": "Return True to allow the given key to start editing"}
{"_id": "q_12604", "text": "This will be called to let the editor do something with the first key"}
{"_id": "q_12605", "text": "A metaclass generator. Returns a metaclass which\r\n will register its class as the class that handles input type=typeName"}
{"_id": "q_12606", "text": "enable or disable all menu items"}
{"_id": "q_12607", "text": "enable or disable all top menus"}
{"_id": "q_12608", "text": "check if all top menus are enabled"}
{"_id": "q_12609", "text": "Helper method to remove a menu avoiding using its position"}
{"_id": "q_12610", "text": "Process form submission"}
{"_id": "q_12611", "text": "Add a tag attribute to the wx window"}
{"_id": "q_12612", "text": "Get an autodoc.Documenter class suitable for documenting the given\n object.\n\n *obj* is the Python object to be documented, and *parent* is\n another Python object (e.g. a module or a class) to which *obj*\n belongs."}
{"_id": "q_12613", "text": "Reformat a function signature to a more compact form."}
{"_id": "q_12614", "text": "Import a Python object given its full name."}
{"_id": "q_12615", "text": "Smart linking role.\n\n Expands to ':obj:`text`' if `text` is an object that can be imported;\n otherwise expands to '*text*'."}
{"_id": "q_12616", "text": "Show a dialog to select a font"}
{"_id": "q_12617", "text": "Show a dialog to pick a color"}
{"_id": "q_12618", "text": "Show a dialog to choose a directory"}
{"_id": "q_12619", "text": "Shows a find text dialog"}
{"_id": "q_12620", "text": "Set icon based on resource values"}
{"_id": "q_12621", "text": "Display or hide the window, optionally disabling all other windows"}
{"_id": "q_12622", "text": "Open, read and eval the resource from the source file"}
{"_id": "q_12623", "text": "Save the resource to the source file"}
{"_id": "q_12624", "text": "Create a gui2py window based on the python resource"}
{"_id": "q_12625", "text": "Write content to the clipboard, data can be either a string or a bitmap"}
{"_id": "q_12626", "text": "Select the object and show its properties"}
{"_id": "q_12627", "text": "load the selected item in the property editor"}
{"_id": "q_12628", "text": "Pack given values v1, v2, ... into given bytearray `buf`, starting\n at given bit offset `offset`. Pack according to given format\n string `fmt`. Give `fill_padding` as ``False`` to leave padding\n bits in `buf` unmodified."}
{"_id": "q_12629", "text": "Swap bytes in `data` according to `fmt`, starting at byte `offset`\n and return the result. `fmt` must be an iterable, iterating over\n number of bytes to swap. For example, the format string ``'24'``\n applied to the bytes ``b'\\\\x00\\\\x11\\\\x22\\\\x33\\\\x44\\\\x55'`` will\n produce the result ``b'\\\\x11\\\\x00\\\\x55\\\\x44\\\\x33\\\\x22'``."}
{"_id": "q_12630", "text": "Telegram chat type can be \"private\", \"group\", \"supergroup\", or\n \"channel\".\n Return the user ID if it is of type \"private\", the chat ID otherwise."}
{"_id": "q_12631", "text": "Return a Message instance according to the data received from\n Facebook Messenger API."}
{"_id": "q_12632", "text": "Get response running the view with await syntax if it is a\n coroutine function, otherwise just run it the normal way."}
{"_id": "q_12633", "text": "Use the new message to search for a registered view according\n to its pattern."}
{"_id": "q_12634", "text": "Read mat 5 file header of the file fd.\n Returns a dict with header values."}
{"_id": "q_12635", "text": "Read full header tag.\n\n Return a dict with the parsed header, the file position of next tag,\n a file like object for reading the uncompressed element data."}
{"_id": "q_12636", "text": "Read a numeric matrix.\n Returns an array with rows of the numeric matrix."}
{"_id": "q_12637", "text": "Read a cell array.\n Returns an array with rows of the cell array."}
{"_id": "q_12638", "text": "Read a struct array.\n Returns a dict with fields of the struct array."}
{"_id": "q_12639", "text": "Determine if end-of-file is reached for file fd."}
{"_id": "q_12640", "text": "Write data element tag and data.\n\n The tag contains the array type and the number of\n bytes the array data will occupy when written to file.\n\n If data occupies 4 bytes or less, it is written immediately\n as a Small Data Element (SDE)."}
{"_id": "q_12641", "text": "Write variable data to file"}
{"_id": "q_12642", "text": "Returns True if test is True for all array elements.\n Otherwise, returns False."}
{"_id": "q_12643", "text": "Private method to execute command.\n\n Args:\n command(Command): The defined command.\n data(dict): The uri variable and body.\n unpack(bool): Whether to unpack the value from the result.\n\n Returns:\n The unwrapped value field in the json response."}
{"_id": "q_12644", "text": "Switch to the given window.\n\n Support:\n Web(WebView)\n\n Args:\n window_name(str): The window to change focus to.\n\n Returns:\n WebDriver Object."}
{"_id": "q_12645", "text": "Sets the x,y position of the current window.\n\n Support:\n Web(WebView)\n\n Args:\n x(int): the x-coordinate in pixels.\n y(int): the y-coordinate in pixels.\n window_handle(str): Identifier of window_handle,\n default to 'current'.\n\n Returns:\n WebDriver Object."}
{"_id": "q_12646", "text": "Switches focus to the specified frame, by index, name, or webelement.\n\n Support:\n Web(WebView)\n\n Args:\n frame_reference(None|int|WebElement):\n The identifier of the frame to switch to.\n None means to set to the default context.\n An integer representing the index.\n A webelement means that is an (i)frame to switch to.\n Otherwise throw an error.\n\n Returns:\n WebDriver Object."}
{"_id": "q_12647", "text": "Execute JavaScript Synchronously in current context.\n\n Support:\n Web(WebView)\n\n Args:\n script: The JavaScript to execute.\n *args: Arguments for your JavaScript.\n\n Returns:\n Returns the return value of the function."}
{"_id": "q_12648", "text": "Set a cookie.\n\n Support:\n Web(WebView)\n\n Args:\n cookie_dict: A dictionary contain keys: \"name\", \"value\",\n [\"path\"], [\"domain\"], [\"secure\"], [\"httpOnly\"], [\"expiry\"].\n\n Returns:\n WebElement Object."}
{"_id": "q_12649", "text": "Save the screenshot to a local file.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n filename(str): The path to save the image.\n quietly(bool): If True, suppress the IOError when\n saving the image fails.\n\n Returns:\n WebElement Object.\n\n Raises:\n WebDriverException.\n IOError."}
{"_id": "q_12650", "text": "Find an element in the current context.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n using(str): The element location strategy.\n value(str): The value of the location strategy.\n\n Returns:\n WebElement Object.\n\n Raises:\n WebDriverException."}
{"_id": "q_12651", "text": "Find elements in the current context.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n using(str): The element location strategy.\n value(str): The value of the location strategy.\n\n Returns:\n Return a List<Element | None>, if no element matched, the list is empty.\n\n Raises:\n WebDriverException."}
{"_id": "q_12652", "text": "Wait for the driver till it satisfies the given condition.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n timeout(int): How long we should be retrying stuff.\n interval(int): How long between retries.\n asserter(callable): The asserter func to determine the result.\n\n Returns:\n Return the driver.\n\n Raises:\n WebDriverException."}
{"_id": "q_12653", "text": "Wait for the element till it satisfies the given condition.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n using(str): The element location strategy.\n value(str): The value of the location strategy.\n timeout(int): How long we should be retrying stuff.\n interval(int): How long between retries.\n asserter(callable): The asserter func to determine the result.\n\n Returns:\n Return the Element.\n\n Raises:\n WebDriverException."}
{"_id": "q_12654", "text": "Wait for the elements till they satisfy the given condition.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n using(str): The element location strategy.\n value(str): The value of the location strategy.\n timeout(int): How long we should be retrying stuff.\n interval(int): How long between retries.\n asserter(callable): The asserter func to determine the result.\n\n Returns:\n Return the list of Element if any of them satisfy the condition.\n\n Raises:\n WebDriverException."}
{"_id": "q_12655", "text": "Raise WebDriverException if returned status is not zero."}
{"_id": "q_12656", "text": "Fluent interface decorator to return self if method return None."}
{"_id": "q_12657", "text": "Clear used and unused dicts before each formatting."}
{"_id": "q_12658", "text": "format a string by a map\n\n Args:\n format_string(str): A format string\n mapping(dict): A map to format the string\n\n Returns:\n A formatted string.\n\n Raises:\n KeyError: if key is not provided by the given map."}
{"_id": "q_12659", "text": "Find name of exception by WebDriver defined error code.\n\n Args:\n code(str): Error code defined in protocol.\n\n Returns:\n The error name defined in protocol."}
{"_id": "q_12660", "text": "Internal method to send request to the remote server.\n\n Args:\n method(str): HTTP Method(GET/POST/PUT/DELETE/HEAD).\n url(str): The request url.\n body(dict): The JSON object to be sent.\n\n Returns:\n A dict represent the json body from server response.\n\n Raises:\n ConnectionError: Meet network problem (e.g. DNS failure,\n refused connection, etc).\n Timeout: A request times out.\n HTTPError: HTTP request returned an unsuccessful status code."}
{"_id": "q_12661", "text": "Private method to execute command with data.\n\n Args:\n command(Command): The defined command.\n data(dict): The uri variable and body.\n\n Returns:\n The unwrapped value field in the json response."}
{"_id": "q_12662", "text": "Find an element in the current element.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n using(str): The element location strategy.\n value(str): The value of the location strategy.\n\n Returns:\n WebElement Object.\n\n Raises:\n WebDriverException."}
{"_id": "q_12663", "text": "Check if an element exists in the current element.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n using(str): The element location strategy.\n value(str): The value of the location strategy.\n\n Returns:\n Return Element if the element does exist and return None otherwise.\n\n Raises:\n WebDriverException."}
{"_id": "q_12664", "text": "Find elements in the current element.\n\n Support:\n Android iOS Web(WebView)\n\n Args:\n using(str): The element location strategy.\n value(str): The value of the location strategy.\n\n Returns:\n Return a List<Element | None>, if no element matched, the list is empty.\n\n Raises:\n WebDriverException."}
{"_id": "q_12665", "text": "Assert whether the target is displayed\n\n Args:\n target(WebElement): WebElement Object.\n\n Returns:\n Return True if the element is displayed or return False otherwise."}
{"_id": "q_12666", "text": "Take the next available controller ID and plug it into the Virtual USB Bus"}
{"_id": "q_12667", "text": "Unplug controller from Virtual USB Bus and free up ID"}
{"_id": "q_12668", "text": "Set a value on the controller\n If percent is True all controls will accept a value between -1.0 and 1.0\n\n If not then:\n Triggers are 0 to 255\n Axis are -32768 to 32767\n\n Control List:\n AxisLx , Left Stick X-Axis\n AxisLy , Left Stick Y-Axis\n AxisRx , Right Stick X-Axis\n AxisRy , Right Stick Y-Axis\n BtnBack , Menu/Back Button\n BtnStart , Start Button\n BtnA , A Button\n BtnB , B Button\n BtnX , X Button\n BtnY , Y Button\n BtnThumbL , Left Thumbstick Click\n BtnThumbR , Right Thumbstick Click\n BtnShoulderL , Left Shoulder Button\n BtnShoulderR , Right Shoulder Button\n Dpad , Set Dpad Value (0 = Off, Use DPAD_### Constants)\n TriggerL , Left Trigger\n TriggerR , Right Trigger"}
{"_id": "q_12669", "text": "Returns a list of buttons currently pressed"}
{"_id": "q_12670", "text": "Imports all available preview classes."}
{"_id": "q_12671", "text": "Adds a preview to the index."}
{"_id": "q_12672", "text": "Looks up a preview in the index, returning a detail view response."}
{"_id": "q_12673", "text": "The URL to access this preview."}
{"_id": "q_12674", "text": "Renders the message view to a response."}
{"_id": "q_12675", "text": "Renders and returns an unsent message with the provided context.\n\n Any extra keyword arguments passed will be passed through as keyword\n arguments to the message constructor.\n\n :param extra_context: Any additional context to use when rendering the\n templated content.\n :type extra_context: :class:`dict`\n :returns: A message instance.\n :rtype: :attr:`.message_class`"}
{"_id": "q_12676", "text": "Renders and sends an email message.\n\n All keyword arguments other than ``extra_context`` are passed through\n as keyword arguments when constructing a new :attr:`message_class`\n instance for this message.\n\n This method exists primarily for convenience, and the proper\n rendering of your message should not depend on the behavior of this\n method. To alter how a message is created, override\n :meth:``render_to_message`` instead, since that should always be\n called, even if a message is not sent.\n\n :param extra_context: Any additional context data that will be used\n when rendering this message.\n :type extra_context: :class:`dict`"}
{"_id": "q_12677", "text": "A simple method that runs a ManagementUtility."}
{"_id": "q_12678", "text": "Perform the actual serialization.\n\n Args:\n value: the image to transform\n Returns:\n a url pointing at a scaled and cached image"}
{"_id": "q_12679", "text": "Publish record to redis logging list"}
{"_id": "q_12680", "text": "Given an iterable with nested iterables, generate a flat iterable"}
{"_id": "q_12681", "text": "Returns a decorator function for adding a node filter.\n\n Args:\n name (str): The name of the filter.\n **kwargs: Variable keyword arguments for the filter.\n\n Returns:\n Callable[[Callable[[Element, Any], bool]]]: A decorator function for adding a node\n filter."}
{"_id": "q_12682", "text": "Asserts that the page has the given path. By default this will compare against the\n path+query portion of the full URL.\n\n Args:\n path (str | RegexObject): The string or regex that the current \"path\" should match.\n **kwargs: Arbitrary keyword arguments for :class:`CurrentPathQuery`.\n\n Returns:\n True\n\n Raises:\n ExpectationNotMet: If the assertion hasn't succeeded during the wait time."}
{"_id": "q_12683", "text": "Asserts that the page doesn't have the given path.\n\n Args:\n path (str | RegexObject): The string or regex that the current \"path\" should match.\n **kwargs: Arbitrary keyword arguments for :class:`CurrentPathQuery`.\n\n Returns:\n True\n\n Raises:\n ExpectationNotMet: If the assertion hasn't succeeded during the wait time."}
{"_id": "q_12684", "text": "Checks if the page doesn't have the given path.\n\n Args:\n path (str | RegexObject): The string or regex that the current \"path\" should match.\n **kwargs: Arbitrary keyword arguments for :class:`CurrentPathQuery`.\n\n Returns:\n bool: Whether it doesn't match."}
{"_id": "q_12685", "text": "Returns the given expression filtered by the given value.\n\n Args:\n expr (xpath.expression.AbstractExpression): The expression to filter.\n value (object): The desired value with which the expression should be filtered.\n\n Returns:\n xpath.expression.AbstractExpression: The filtered expression."}
{"_id": "q_12686", "text": "Returns an instance of the given browser with the given capabilities.\n\n Args:\n browser_name (str): The name of the desired browser.\n capabilities (Dict[str, str | bool], optional): The desired capabilities of the browser.\n Defaults to None.\n options: Arbitrary keyword arguments for the browser-specific subclass of\n :class:`webdriver.Remote`.\n\n Returns:\n WebDriver: An instance of the desired browser."}
{"_id": "q_12687", "text": "Returns the XPath query for this selector.\n\n Args:\n exact (bool, optional): Whether to exactly match text.\n\n Returns:\n str: The XPath query for this selector."}
{"_id": "q_12688", "text": "Switch to the given frame.\n\n If you use this method you are responsible for making sure you switch back to the parent\n frame when done in the frame changed to. :meth:`frame` is preferred over this method and\n should be used when possible. May not be supported by all drivers.\n\n Args:\n frame (Element | str): The iframe/frame element to switch to."}
{"_id": "q_12689", "text": "Execute the wrapped code, accepting an alert.\n\n Args:\n text (str | RegexObject, optional): Text to match against the text in the modal.\n wait (int | float, optional): Maximum time to wait for the modal to appear after\n executing the wrapped code.\n\n Raises:\n ModalNotFound: If a modal dialog hasn't been found."}
{"_id": "q_12690", "text": "Execute the wrapped code, accepting a confirm.\n\n Args:\n text (str | RegexObject, optional): Text to match against the text in the modal.\n wait (int | float, optional): Maximum time to wait for the modal to appear after\n executing the wrapped code.\n\n Raises:\n ModalNotFound: If a modal dialog hasn't been found."}
{"_id": "q_12691", "text": "Execute the wrapped code, dismissing a confirm.\n\n Args:\n text (str | RegexObject, optional): Text to match against the text in the modal.\n wait (int | float, optional): Maximum time to wait for the modal to appear after\n executing the wrapped code.\n\n Raises:\n ModalNotFound: If a modal dialog hasn't been found."}
{"_id": "q_12692", "text": "Execute the wrapped code, dismissing a prompt.\n\n Args:\n text (str | RegexObject, optional): Text to match against the text in the modal.\n wait (int | float, optional): Maximum time to wait for the modal to appear after\n executing the wrapped code.\n\n Raises:\n ModalNotFound: If a modal dialog hasn't been found."}
{"_id": "q_12693", "text": "Raise errors encountered by the server."}
{"_id": "q_12694", "text": "Returns whether the given node matches the filter rule with the given value.\n\n Args:\n node (Element): The node to filter.\n value (object): The desired value with which the node should be evaluated.\n\n Returns:\n bool: Whether the given node matches."}
{"_id": "q_12695", "text": "Checks if the page or current node has a radio button or checkbox with the given label,\n value, or id, that is currently checked.\n\n Args:\n locator (str): The label, name, or id of a checked field.\n **kwargs: Arbitrary keyword arguments for :class:`SelectorQuery`.\n\n Returns:\n bool: Whether it exists."}
{"_id": "q_12696", "text": "Checks if the page or current node has no radio button or checkbox with the given label,\n value, or id that is currently checked.\n\n Args:\n locator (str): The label, name, or id of a checked field.\n **kwargs: Arbitrary keyword arguments for :class:`SelectorQuery`.\n\n Returns:\n bool: Whether it doesn't exist."}
{"_id": "q_12697", "text": "Checks if the page or current node has a radio button or checkbox with the given label,\n value, or id, that is currently unchecked.\n\n Args:\n locator (str): The label, name, or id of an unchecked field.\n **kwargs: Arbitrary keyword arguments for :class:`SelectorQuery`.\n\n Returns:\n bool: Whether it exists."}
{"_id": "q_12698", "text": "Asserts that the page or current node has the given text content, ignoring any HTML tags.\n\n Args:\n *args: Variable length argument list for :class:`TextQuery`.\n **kwargs: Arbitrary keyword arguments for :class:`TextQuery`.\n\n Returns:\n True\n\n Raises:\n ExpectationNotMet: If the assertion hasn't succeeded during the wait time."}
{"_id": "q_12699", "text": "Asserts that the page or current node doesn't have the given text content, ignoring any\n HTML tags.\n\n Args:\n *args: Variable length argument list for :class:`TextQuery`.\n **kwargs: Arbitrary keyword arguments for :class:`TextQuery`.\n\n Returns:\n True\n\n Raises:\n ExpectationNotMet: If the assertion hasn't succeeded during the wait time."}
{"_id": "q_12700", "text": "Asserts that the page has the given title.\n\n Args:\n title (str | RegexObject): The string or regex that the title should match.\n **kwargs: Arbitrary keyword arguments for :class:`TitleQuery`.\n\n Returns:\n True\n\n Raises:\n ExpectationNotMet: If the assertion hasn't succeeded during the wait time."}
{"_id": "q_12701", "text": "Asserts that the page doesn't have the given title.\n\n Args:\n title (str | RegexObject): The string that the title should include.\n **kwargs: Arbitrary keyword arguments for :class:`TitleQuery`.\n\n Returns:\n True\n\n Raises:\n ExpectationNotMet: If the assertion hasn't succeeded during the wait time."}
{"_id": "q_12702", "text": "Checks if the page has the given title.\n\n Args:\n title (str | RegexObject): The string or regex that the title should match.\n **kwargs: Arbitrary keyword arguments for :class:`TitleQuery`.\n\n Returns:\n bool: Whether it matches."}
{"_id": "q_12703", "text": "Returns the inner content of a given XML node, including tags.\n\n Args:\n node (lxml.etree.Element): The node whose inner content is desired.\n\n Returns:\n str: The inner content of the node."}
{"_id": "q_12704", "text": "Returns the inner text of a given XML node, excluding tags.\n\n Args:\n node (lxml.etree.Element): The node whose inner text is desired.\n\n Returns:\n str: The inner text of the node."}
{"_id": "q_12705", "text": "Returns the given URL with all query keys properly escaped.\n\n Args:\n url (str): The URL to normalize.\n\n Returns:\n str: The normalized URL."}
{"_id": "q_12706", "text": "This method is Capybara's primary defense against asynchronicity problems. It works by\n attempting to run a given decorated function until it succeeds. The exact behavior of this\n method depends on a number of factors. Basically there are certain exceptions which, when\n raised from the decorated function, instead of bubbling up, are caught, and the function is\n re-run.\n\n Certain drivers have no support for asynchronous processes. These drivers run the function,\n and any error raised bubbles up immediately. This allows faster turn around in the case\n where an expectation fails.\n\n Only exceptions that are :exc:`ElementNotFound` or any subclass thereof cause the function to\n be rerun. Drivers may specify additional exceptions which also cause reruns. This usually\n occurs when a node is manipulated which no longer exists on the page. For example, the\n Selenium driver specifies ``selenium.common.exceptions.StaleElementReferenceException``.\n\n As long as any of these exceptions are thrown, the function is re-run, until a certain\n amount of time passes. The amount of time defaults to :data:`capybara.default_max_wait_time`\n and can be overridden through the ``wait`` argument. This time is compared with the system\n time to see how much time has passed. If the return value of ``time.time()`` is stubbed\n out, Capybara will raise :exc:`FrozenInTime`.\n\n Args:\n func (Callable, optional): The function to decorate.\n wait (int, optional): Number of seconds to retry this function.\n errors (Tuple[Type[Exception]], optional): Exception types that cause the function to be\n rerun. Defaults to ``driver.invalid_element_errors`` + :exc:`ElementNotFound`.\n\n Returns:\n Callable: The decorated function, or a decorator function.\n\n Raises:\n FrozenInTime: If the return value of ``time.time()`` appears stuck."}
{"_id": "q_12707", "text": "Returns whether to catch the given error.\n\n Args:\n error (Exception): The error to consider.\n errors (Tuple[Type[Exception], ...], optional): The exception types that should be\n caught. Defaults to :class:`ElementNotFound` plus any driver-specific invalid\n element errors.\n\n Returns:\n bool: Whether to catch the given error."}
{"_id": "q_12708", "text": "Returns an expectation failure message for the given query description.\n\n Args:\n description (str): A description of the failed query.\n options (Dict[str, Any]): The query options.\n\n Returns:\n str: A message describing the failure."}
{"_id": "q_12709", "text": "Returns whether the given count matches the given query options.\n\n If no quantity options are specified, any count is considered acceptable.\n\n Args:\n count (int): The count to be validated.\n options (Dict[str, int | Iterable[int]]): A dictionary of query options.\n\n Returns:\n bool: Whether the count matches the options."}
{"_id": "q_12710", "text": "Normalizes the given value to a string of text with extra whitespace removed.\n\n Byte sequences are decoded. ``None`` is converted to an empty string. Everything else\n is simply cast to a string.\n\n Args:\n value (Any): The data to normalize.\n\n Returns:\n str: The normalized text."}
{"_id": "q_12711", "text": "Returns the given text with outer whitespace removed and inner whitespace collapsed.\n\n Args:\n text (str): The text to normalize.\n\n Returns:\n str: The normalized text."}
{"_id": "q_12712", "text": "Returns a compiled regular expression for the given text.\n\n Args:\n text (str | RegexObject): The text to match.\n exact (bool, optional): Whether the generated regular expression should match exact\n strings. Defaults to False.\n\n Returns:\n RegexObject: A compiled regular expression that will match the text."}
{"_id": "q_12713", "text": "Descriptor to change the class-wide getter on a property.\n\n :param fcget: new class-wide getter.\n :type fcget: typing.Optional[typing.Callable[[typing.Any, ], typing.Any]]\n :return: AdvancedProperty\n :rtype: AdvancedProperty"}
{"_id": "q_12714", "text": "Descriptor to change instance method.\n\n :param imeth: New instance method.\n :type imeth: typing.Optional[typing.Callable]\n :return: SeparateClassMethod\n :rtype: SeparateClassMethod"}
{"_id": "q_12715", "text": "Descriptor to change class method.\n\n :param cmeth: New class method.\n :type cmeth: typing.Optional[typing.Callable]\n :return: SeparateClassMethod\n :rtype: SeparateClassMethod"}
{"_id": "q_12716", "text": "Get outer traceback text for logging."}
{"_id": "q_12717", "text": "Get object repr block."}
{"_id": "q_12718", "text": "Get logger for log calls.\n\n :param instance: Owner class instance. Filled only if instance created, else None.\n :type instance: typing.Optional[owner]\n :return: logger instance\n :rtype: logging.Logger"}
{"_id": "q_12719", "text": "Logger instance to use as override."}
{"_id": "q_12720", "text": "Low-level method to call the Slack API.\n\n Args:\n method: {str} method name to call\n params: {dict} GET parameters\n The token will always be added"}
{"_id": "q_12721", "text": "List of channels of this slack team"}
{"_id": "q_12722", "text": "Checks the format of the sakefile dictionary\n to ensure it conforms to specification\n\n Args:\n A dictionary that is the parsed Sakefile (from sake.py)\n The setting dictionary (for print functions)\n Returns:\n True if the Sakefile is conformant\n False if not"}
{"_id": "q_12723", "text": "This function gives us the option to emit errors or warnings\n after sake upgrades"}
{"_id": "q_12724", "text": "Returns sha1 hash of the file supplied as an argument"}
{"_id": "q_12725", "text": "Writes a sha1 dictionary stored in memory to\n the .shastore file"}
{"_id": "q_12726", "text": "Takes sha1 hash of all dependencies and outputs of all targets\n\n Args:\n The graph we are going to build\n The settings dictionary\n\n Returns:\n A dictionary where the keys are the filenames and the\n value is the sha1 hash"}
{"_id": "q_12727", "text": "Helper function that returns the node data\n of the node with the name supplied"}
{"_id": "q_12728", "text": "A sink is a node with no children.\n This means that this is the end of the line,\n and it should be run last in topo sort. This\n returns a list of all sinks in a graph"}
{"_id": "q_12729", "text": "If you specify a target that shares a dependency with another target,\n both targets need to be updated. This is because running one will resolve\n the sha mismatch and sake will think that the other one doesn't have to\n run. This is called a \"tie\". This function will find such ties."}
{"_id": "q_12730", "text": "Takes the sakefile dictionary and builds a NetworkX graph\n\n Args:\n A dictionary that is the parsed Sakefile (from sake.py)\n The settings dictionary\n\n Returns:\n A NetworkX graph"}
{"_id": "q_12731", "text": "Removes all the output files from all targets. Takes\n the graph as the only argument\n\n Args:\n The networkx graph object\n The settings dictionary\n\n Returns:\n 0 if successful\n 1 if removing even one file failed"}
{"_id": "q_12732", "text": "High-level function for creating messages. Return packed bytes.\n\n Args:\n text: {str}\n channel: {str} Either name or ID"}
{"_id": "q_12733", "text": "Translate machine identifiers into human-readable form."}
{"_id": "q_12734", "text": "Send message to Slack"}
{"_id": "q_12735", "text": "Main interface. Instantiate the SlackAPI, connect to RTM\n and start the client."}
{"_id": "q_12736", "text": "Pass in raw arguments, instantiate Slack API and begin client."}
{"_id": "q_12737", "text": "Prepare transcriptiondata from the transcription sources."}
{"_id": "q_12738", "text": "Check the consistency of a given transcription system conversion"}
{"_id": "q_12739", "text": "Parse a sound from its name"}
{"_id": "q_12740", "text": "Runs the ipfn algorithm. Automatically detects whether it is working with numpy ndarrays or pandas dataframes."}
{"_id": "q_12741", "text": "Given a string, add the necessary codes to format the string."}
{"_id": "q_12742", "text": "Run when a task starts."}
{"_id": "q_12743", "text": "Display info about playbook statistics."}
{"_id": "q_12744", "text": "Converts a CIDR formatted prefix into an address netmask representation.\n Argument sep specifies the separator between the address and netmask parts.\n By default it's a single space.\n\n Examples:\n >>> \"{{ '192.168.0.1/24' | prefix_to_addrmask }}\" -> \"192.168.0.1 255.255.255.0\"\n >>> \"{{ '192.168.0.1/24' | prefix_to_addrmask('/') }}\" -> \"192.168.0.1/255.255.255.0\""}
{"_id": "q_12745", "text": "Add a model.\n\n The model will be assigned to a class attribute with the YANG name of the model.\n\n Args:\n model (PybindBase): Model to add.\n force (bool): If not set, verify the model is in SUPPORTED_MODELS\n\n Examples:\n\n >>> import napalm_yang\n >>> config = napalm_yang.base.Root()\n >>> config.add_model(napalm_yang.models.openconfig_interfaces)\n >>> config.interfaces\n <pyangbind.lib.yangtypes.YANGBaseClass object at 0x10bef6680>"}
{"_id": "q_12746", "text": "Returns a dictionary with the values of the model. Note that the values\n of the leafs are YANG classes.\n\n Args:\n filter (bool): If set to ``True``, show only values that have been set.\n\n Returns:\n dict: A dictionary with the values of the model.\n\n Example:\n\n >>> pretty_print(config.get(filter=True))\n >>> {\n >>> \"interfaces\": {\n >>> \"interface\": {\n >>> \"et1\": {\n >>> \"config\": {\n >>> \"description\": \"My description\",\n >>> \"mtu\": 1500\n >>> },\n >>> \"name\": \"et1\"\n >>> },\n >>> \"et2\": {\n >>> \"config\": {\n >>> \"description\": \"Another description\",\n >>> \"mtu\": 9000\n >>> },\n >>> \"name\": \"et2\"\n >>> }\n >>> }\n >>> }\n >>> }"}
{"_id": "q_12747", "text": "Returns a dictionary with the values of the model. Note that the values\n of the leafs are evaluated to python types.\n\n Args:\n filter (bool): If set to ``True``, show only values that have been set.\n\n Returns:\n dict: A dictionary with the values of the model.\n\n Example:\n\n >>> pretty_print(config.to_dict(filter=True))\n >>> {\n >>> \"interfaces\": {\n >>> \"interface\": {\n >>> \"et1\": {\n >>> \"config\": {\n >>> \"description\": \"My description\",\n >>> \"mtu\": 1500\n >>> },\n >>> \"name\": \"et1\"\n >>> },\n >>> \"et2\": {\n >>> \"config\": {\n >>> \"description\": \"Another description\",\n >>> \"mtu\": 9000\n >>> },\n >>> \"name\": \"et2\"\n >>> }\n >>> }\n >>> }\n >>> }"}
{"_id": "q_12748", "text": "Parse native state and load it into the corresponding models. Only models\n that have been added to the root object will be parsed.\n\n If ``native`` is passed to the method that's what we will parse, otherwise, we will use the\n ``device`` to retrieve it.\n\n Args:\n device (NetworkDriver): Device to load the configuration from.\n profile (list): Profiles that the device supports. If no ``profile`` is passed it will\n be read from ``device``.\n native (list string): Native output to parse.\n\n Examples:\n\n >>> # Load from device\n >>> state = napalm_yang.base.Root()\n >>> state.add_model(napalm_yang.models.openconfig_interfaces)\n >>> state.parse_config(device=d)\n\n >>> # Load from file\n >>> with open(\"junos.state\", \"r\") as f:\n >>> state_data = f.read()\n >>>\n >>> state = napalm_yang.base.Root()\n >>> state.add_model(napalm_yang.models.openconfig_interfaces)\n >>> state.parse_config(native=[state_data], profile=\"junos\")"}
{"_id": "q_12749", "text": "Translate the object to native configuration.\n\n In this context, merge and replace means the following:\n\n * **Merge** - Elements that exist in both ``self`` and ``merge`` will use by default the\n values in ``merge`` unless ``self`` specifies a new one. Elements that exist only\n in ``self`` will be translated as they are and elements present only in ``merge``\n will be removed.\n * **Replace** - All the elements in ``replace`` will either be removed or replaced by\n elements in ``self``.\n\n You can specify one of ``merge``, ``replace`` or none of them. If none of them are set we\n will just translate configuration.\n\n Args:\n profile (list): Which profiles to use.\n merge (Root): Object we want to merge with.\n replace (Root): Object we want to replace."}
{"_id": "q_12750", "text": "Loads and returns all filters."}
{"_id": "q_12751", "text": "Given a model, return a representation of the model in a dict.\n\n This is mostly useful to have a quick visual representation of the model.\n\n Args:\n\n model (PybindBase): Model to transform.\n mode (string): Whether to print config, state or all elements (\"\" for all)\n\n Returns:\n\n dict: A dictionary representing the model.\n\n Examples:\n\n\n >>> config = napalm_yang.base.Root()\n >>>\n >>> # Adding models to the object\n >>> config.add_model(napalm_yang.models.openconfig_interfaces())\n >>> config.add_model(napalm_yang.models.openconfig_vlan())\n >>> # Printing the model in a human readable format\n >>> pretty_print(napalm_yang.utils.model_to_dict(config))\n >>> {\n >>> \"openconfig-interfaces:interfaces [rw]\": {\n >>> \"interface [rw]\": {\n >>> \"config [rw]\": {\n >>> \"description [rw]\": \"string\",\n >>> \"enabled [rw]\": \"boolean\",\n >>> \"mtu [rw]\": \"uint16\",\n >>> \"name [rw]\": \"string\",\n >>> \"type [rw]\": \"identityref\"\n >>> },\n >>> \"hold_time [rw]\": {\n >>> \"config [rw]\": {\n >>> \"down [rw]\": \"uint32\",\n >>> \"up [rw]\": \"uint32\"\n (trimmed for clarity)"}
{"_id": "q_12752", "text": "Given two models, return the difference between them.\n\n Args:\n\n f (Pybindbase): First element.\n s (Pybindbase): Second element.\n\n Returns:\n\n dict: A dictionary highlighting the differences.\n\n Examples:\n\n >>> diff = napalm_yang.utils.diff(candidate, running)\n >>> pretty_print(diff)\n >>> {\n >>> \"interfaces\": {\n >>> \"interface\": {\n >>> \"both\": {\n >>> \"Port-Channel1\": {\n >>> \"config\": {\n >>> \"mtu\": {\n >>> \"first\": \"0\",\n >>> \"second\": \"9000\"\n >>> }\n >>> }\n >>> }\n >>> },\n >>> \"first_only\": [\n >>> \"Loopback0\"\n >>> ],\n >>> \"second_only\": [\n >>> \"Loopback1\"\n >>> ]\n >>> }\n >>> }\n >>> }"}
{"_id": "q_12753", "text": "POST to URL and get result as a response object.\n\n :param url: URL to POST.\n :type url: str\n :param data: Data to send in the form body.\n :type data: str\n :rtype: requests.Response"}
{"_id": "q_12754", "text": "Construct a full URL that can be used to obtain an authorization\n code from the provider authorization_uri. Use this URI in a client\n frame to cause the provider to generate an authorization code.\n\n :rtype: str"}
{"_id": "q_12755", "text": "Return query parameters as a dict from the specified URL.\n\n :param url: URL.\n :type url: str\n :rtype: dict"}
{"_id": "q_12756", "text": "Return a URL with the query component removed.\n\n :param url: URL to dequery.\n :type url: str\n :rtype: str"}
{"_id": "q_12757", "text": "Construct a URL based off of base containing all parameters in\n the query portion of base plus any additional parameters.\n\n :param base: Base URL\n :type base: str\n :param additional_params: Additional query parameters to include.\n :type additional_params: dict\n :rtype: str"}
{"_id": "q_12758", "text": "Handle an internal exception that was caught and suppressed.\n\n :param exc: Exception to process.\n :type exc: Exception"}
{"_id": "q_12759", "text": "Return a response object from the given parameters.\n\n :param body: Buffer/string containing the response body.\n :type body: str\n :param headers: Dict of headers to include in the requests.\n :type headers: dict\n :param status_code: HTTP status code.\n :type status_code: int\n :rtype: requests.Response"}
{"_id": "q_12760", "text": "Return a HTTP 302 redirect response object containing the error.\n\n :param redirect_uri: Client redirect URI.\n :type redirect_uri: str\n :param err: OAuth error message.\n :type err: str\n :rtype: requests.Response"}
{"_id": "q_12761", "text": "Generate authorization code HTTP response.\n\n :param response_type: Desired response type. Must be exactly \"code\".\n :type response_type: str\n :param client_id: Client ID.\n :type client_id: str\n :param redirect_uri: Client redirect URI.\n :type redirect_uri: str\n :rtype: requests.Response"}
{"_id": "q_12762", "text": "Generate access token HTTP response from a refresh token.\n\n :param grant_type: Desired grant type. Must be \"refresh_token\".\n :type grant_type: str\n :param client_id: Client ID.\n :type client_id: str\n :param client_secret: Client secret.\n :type client_secret: str\n :param refresh_token: Refresh token.\n :type refresh_token: str\n :rtype: requests.Response"}
{"_id": "q_12763", "text": "Get authorization code response from a URI. This method will\n ignore the domain and path of the request, instead\n automatically parsing the query string parameters.\n\n :param uri: URI to parse for authorization information.\n :type uri: str\n :rtype: requests.Response"}
{"_id": "q_12764", "text": "Get a token response from POST data.\n\n :param data: POST data containing authorization information.\n :type data: dict\n :rtype: requests.Response"}
{"_id": "q_12765", "text": "Get authorization object representing status of authentication."}
{"_id": "q_12766", "text": "Return an object described by the accessor by traversing the attributes\n of context."}
{"_id": "q_12767", "text": "Calculate how many days the month spans."}
{"_id": "q_12768", "text": "Open the smbus interface on the specified bus."}
{"_id": "q_12769", "text": "Read a single byte from the specified device."}
{"_id": "q_12770", "text": "Read many bytes from the specified device."}
{"_id": "q_12771", "text": "Read a single byte from the specified cmd register of the device."}
{"_id": "q_12772", "text": "Write many bytes to the specified device. buf is a bytearray"}
{"_id": "q_12773", "text": "Write a byte of data to the specified cmd register of the device."}
{"_id": "q_12774", "text": "Write a buffer of data to the specified cmd register of the device."}
{"_id": "q_12775", "text": "Calculate the sampling period in seconds."}
{"_id": "q_12776", "text": "Calculate the adc value that corresponds to a specific bin boundary diameter in microns.\n\n :param bb: Bin Boundary in microns\n\n :type bb: float\n\n :rtype: int"}
{"_id": "q_12777", "text": "Checks the connection between the Raspberry Pi and the OPC\n\n :rtype: Boolean"}
{"_id": "q_12778", "text": "Read the configuration variables and returns them as a dictionary\n\n :rtype: dictionary\n\n :Example:\n\n >>> alpha.config()\n {\n 'BPD 13': 1.6499,\n 'BPD 12': 1.6499,\n 'BPD 11': 1.6499,\n 'BPD 10': 1.6499,\n 'BPD 15': 1.6499,\n 'BPD 14': 1.6499,\n 'BSVW 15': 1.0,\n ...\n }"}
{"_id": "q_12779", "text": "Read the second set of configuration variables and return as a dictionary.\n\n **NOTE: This method is supported by firmware v18+.**\n\n :rtype: dictionary\n\n :Example:\n\n >>> a.config2()\n {\n 'AMFanOnIdle': 0,\n 'AMIdleIntervalCount': 0,\n 'AMMaxDataArraysInFile': 61798,\n 'AMSamplingInterval': 1,\n 'AMOnlySavePMData': 0,\n 'AMLaserOnIdle': 0\n }"}
{"_id": "q_12780", "text": "Toggle the power state of the laser.\n\n :param state: Boolean state of the laser\n\n :type state: boolean\n\n :rtype: boolean\n\n :Example:\n\n >>> alpha.toggle_laser(True)\n True"}
{"_id": "q_12781", "text": "Read the firmware version of the OPC-N2. Firmware v18+ only.\n\n :rtype: dict\n\n :Example:\n\n >>> alpha.read_firmware()\n {\n 'major': 18,\n 'minor': 2,\n 'version': 18.2\n }"}
{"_id": "q_12782", "text": "Read the PM data and reset the histogram\n\n **NOTE: This method is supported by firmware v18+.**\n\n :rtype: dictionary\n\n :Example:\n\n >>> alpha.pm()\n {\n 'PM1': 0.12,\n 'PM2.5': 0.24,\n 'PM10': 1.42\n }"}
{"_id": "q_12783", "text": "Read the bin particle density\n\n :returns: float"}
{"_id": "q_12784", "text": "Starts HDLC controller's threads."}
{"_id": "q_12785", "text": "Stops HDLC controller's threads."}
{"_id": "q_12786", "text": "Cuts this object from_start to the number requested\n returns a new instance"}
{"_id": "q_12787", "text": "Note: returns a new Date obj"}
{"_id": "q_12788", "text": "Find all the timestrings within a block of text.\n\n >>> timestring.findall(\"once upon a time, about 3 weeks ago, there was a boy whom was born on august 15th at 7:20 am. epic.\")\n [\n ('3 weeks ago,', <timestring.Date 2014-02-09 00:00:00 4483019280>),\n ('august 15th at 7:20 am', <timestring.Date 2014-08-15 07:20:00 4483019344>)\n ]"}
{"_id": "q_12789", "text": "Check the token and raise an `oauth.Error` exception if invalid."}
{"_id": "q_12790", "text": "Checks the nonce of the request and returns True if valid."}
{"_id": "q_12791", "text": "Returns file's CDN url.\n\n Usage example::\n\n >>> file_ = File('a771f854-c2cb-408a-8c36-71af77811f3b')\n >>> file_.cdn_url\n https://ucarecdn.com/a771f854-c2cb-408a-8c36-71af77811f3b/\n\n You can set default effects::\n\n >>> file_.default_effects = 'effect/flip/-/effect/mirror/'\n >>> file_.cdn_url\n https://ucarecdn.com/a771f854-c2cb-408a-8c36-71af77811f3b/-/effect/flip/-/effect/mirror/"}
{"_id": "q_12792", "text": "Creates a Local File Copy on Uploadcare Storage.\n\n Args:\n - effects:\n Adds CDN image effects. If ``self.default_effects`` property\n is set, effects will be combined with default effects.\n - store:\n If ``store`` option is set to False the copy of your file will\n be deleted within a 24-hour period after the upload.\n Works only if `autostore` is enabled in the project."}
{"_id": "q_12793", "text": "Constructs ``File`` instance from file information.\n\n For example you have result of\n ``/files/1921953c-5d94-4e47-ba36-c2e1dd165e1a/`` API request::\n\n >>> file_info = {\n # ...\n 'uuid': '1921953c-5d94-4e47-ba36-c2e1dd165e1a',\n # ...\n }\n >>> File.construct_from(file_info)\n <uploadcare.File 1921953c-5d94-4e47-ba36-c2e1dd165e1a>"}
{"_id": "q_12794", "text": "Uploads a file and returns ``File`` instance.\n\n Args:\n - file_obj: file object to upload\n - store (Optional[bool]): Should the file be automatically stored\n upon upload. Defaults to None.\n - False - do not store file\n - True - store file (can result in error if autostore\n is disabled for project)\n - None - use project settings\n\n Returns:\n ``File`` instance"}
{"_id": "q_12795", "text": "Uploads file from given url and returns ``File`` instance.\n\n Args:\n - url (str): URL of file to upload\n - store (Optional[bool]): Should the file be automatically stored\n upon upload. Defaults to None.\n - False - do not store file\n - True - store file (can result in error if autostore\n is disabled for project)\n - None - use project settings\n - filename (Optional[str]): Name of the uploaded file. If this is not\n specified, the filename will be obtained from response headers\n or source URL. Defaults to None.\n - timeout (Optional[int]): seconds to wait for successful upload.\n Defaults to 30.\n - interval (Optional[float]): interval between upload status checks.\n Defaults to 0.3.\n - until_ready (Optional[bool]): should we wait until file is\n available via CDN. Defaults to False.\n\n Returns:\n ``File`` instance\n\n Raises:\n ``TimeoutError`` if file wasn't uploaded in time"}
{"_id": "q_12796", "text": "Returns CDN urls of all files from group without API requesting.\n\n Usage example::\n\n >>> file_group = FileGroup('0513dda0-582f-447d-846f-096e5df9e2bb~2')\n >>> file_group.file_cdn_urls[0]\n 'https://ucarecdn.com/0513dda0-582f-447d-846f-096e5df9e2bb~2/nth/0/'"}
{"_id": "q_12797", "text": "Constructs ``FileGroup`` instance from group information."}
{"_id": "q_12798", "text": "Base method for storage operations."}
{"_id": "q_12799", "text": "Iterates over the \"iter_content\" and draws a progress bar to stdout."}
{"_id": "q_12800", "text": "Makes Uploading API request and returns response as ``dict``.\n\n It takes settings from ``conf`` module.\n\n Make sure that given ``path`` does not contain leading slash.\n\n Usage example::\n\n >>> file_obj = open('photo.jpg', 'rb')\n >>> uploading_request('POST', 'base/', files={'file': file_obj})\n {\n 'file': '9b9f4483-77b8-40ae-a198-272ba6280004'\n }\n >>> File('9b9f4483-77b8-40ae-a198-272ba6280004')"}
{"_id": "q_12801", "text": "Get delivery log from Redis"}
{"_id": "q_12802", "text": "Get the possible events from settings"}
{"_id": "q_12803", "text": "This is an asynchronous sender callable that uses the Django ORM to store\n webhooks. Redis is used to handle the message queue.\n\n dkwargs argument requires the following key/values:\n\n :event: A string representing an event.\n\n kwargs argument requires the following key/values:\n\n :owner: The user who created/owns the event"}
{"_id": "q_12804", "text": "Monitors the TUN adapter and sends data over serial port.\n\n Returns:\n ret: Number of bytes sent over serial port"}
{"_id": "q_12805", "text": "Check the serial port for data to write to the TUN adapter."}
{"_id": "q_12806", "text": "Get the field settings; if the configured setting is a string, try\n to get a 'profile' from the global config."}
{"_id": "q_12807", "text": "Pass the submitted value through the sanitizer before returning it."}
{"_id": "q_12808", "text": "Get the field sanitizer.\n\n The priority is the first defined in the following order:\n - A sanitizer provided to the widget.\n - Profile (field settings) specific sanitizer, if defined in settings.\n - Global sanitizer defined in settings.\n - Simple no-op sanitizer which just returns the provided value."}
{"_id": "q_12809", "text": "Maxheap version of a heappop."}
{"_id": "q_12810", "text": "Maxheap version of a heappop followed by a heappush."}
{"_id": "q_12811", "text": "Fast version of a heappush followed by a heappop."}
{"_id": "q_12812", "text": "Return a list of cameras."}
{"_id": "q_12813", "text": "Return bytes of camera image."}
{"_id": "q_12814", "text": "Update motion settings matching camera_id with keyword args."}
{"_id": "q_12815", "text": "Update cameras and motion settings with latest from API."}
{"_id": "q_12816", "text": "Async function to connect to QTM\n\n :param host: Address of the computer running QTM.\n :param port: Port number to connect to, should be the port configured for little endian.\n :param version: What version of the protocol to use, tested for 1.17 and above but could\n work with lower versions as well.\n :param on_disconnect: Function to be called when a disconnect from QTM occurs.\n :param on_event: Function to be called when there's an event from QTM.\n :param timeout: The default timeout time for calls to QTM.\n :param loop: Alternative event loop, will use asyncio default if None.\n\n :rtype: A :class:`.QRTConnection`"}
{"_id": "q_12817", "text": "Get the QTM version."}
{"_id": "q_12818", "text": "Wait for an event from QTM.\n\n :param event: A :class:`qtm.QRTEvent`\n to wait for a specific event. Otherwise wait for any event.\n\n :param timeout: Max time to wait for event.\n\n :rtype: A :class:`qtm.QRTEvent`"}
{"_id": "q_12819", "text": "Get measured values from QTM for a single frame.\n\n :param components: A list of components to receive, could be 'all' or any combination of\n '2d', '2dlin', '3d', '3dres', '3dnolabels',\n '3dnolabelsres', 'force', 'forcesingle', '6d', '6dres',\n '6deuler', '6deulerres', 'gazevector', 'image', 'timecode',\n 'skeleton', 'skeleton:global'\n\n :rtype: A :class:`qtm.QRTPacket` containing requested components"}
{"_id": "q_12820", "text": "Start RT from file. You need to be in control of QTM to be able to do this."}
{"_id": "q_12821", "text": "Save a measurement.\n\n :param filename: Filename you wish to save as.\n :param overwrite: If QTM should overwrite existing measurement."}
{"_id": "q_12822", "text": "Load a project.\n\n :param project_path: Path to project you want to load."}
{"_id": "q_12823", "text": "Used to update QTM settings, see QTM RT protocol for more information.\n\n :param xml: XML document as a str. See QTM RT Documentation for details."}
{"_id": "q_12824", "text": "Handle data received from QTM and route it accordingly"}
{"_id": "q_12825", "text": "Get force data."}
{"_id": "q_12826", "text": "Get a single force data channel."}
{"_id": "q_12827", "text": "Get image."}
{"_id": "q_12828", "text": "Get 3D markers without label."}
{"_id": "q_12829", "text": "Get 3D markers without label with residual."}
{"_id": "q_12830", "text": "Get 2D markers.\n\n :param index: Specify which camera to get 2D from, will be returned as\n first entry in the returned array."}
{"_id": "q_12831", "text": "Get 2D linearized markers.\n\n :param index: Specify which camera to get 2D from, will be returned as\n first entry in the returned array."}
{"_id": "q_12832", "text": "Determine if ``li`` is the last list item for a given list"}
{"_id": "q_12833", "text": "The ilvl on an li tag tells the li tag at what level of indentation this\n tag is at. This is used to determine if the li tag needs to be nested or\n not."}
{"_id": "q_12834", "text": "The function will return True if the r tag passed in is considered bold."}
{"_id": "q_12835", "text": "The function will return True if the r tag passed in is considered\n underlined."}
{"_id": "q_12836", "text": "Certain p tags are denoted as ``Title`` tags. This function will return\n True if the passed in p tag is considered a title."}
{"_id": "q_12837", "text": "There is a separate file that holds the targets to links as well as the targets\n for images. Return a dictionary based on the relationship id and the\n target."}
{"_id": "q_12838", "text": "Return the list type. If numId or ilvl not in the numbering dict then\n default to returning decimal.\n\n This function only cares about ordered lists, unordered lists get dealt\n with elsewhere."}
{"_id": "q_12839", "text": "Build the list structure and return the root list"}
{"_id": "q_12840", "text": "Generate the string data that for this particular t tag."}
{"_id": "q_12841", "text": "Callback function that is called every time a data packet arrives from QTM"}
{"_id": "q_12842", "text": "On socket creation"}
{"_id": "q_12843", "text": "Send discovery packet for QTM to respond to"}
{"_id": "q_12844", "text": "Load the MNIST digits dataset."}
{"_id": "q_12845", "text": "Load the CIFAR10 image dataset."}
{"_id": "q_12846", "text": "Plot an array of images.\n\n We assume that we are given a matrix of data whose shape is (n*n, s*s*c) --\n that is, there are n^2 images along the first axis of the array, and each\n image is c squares measuring s pixels on a side. Each row of the input will\n be plotted as a sub-region within a single image array containing an n x n\n grid of images."}
{"_id": "q_12847", "text": "Create a plot of conv filters, visualized as pixel arrays."}
{"_id": "q_12848", "text": "Encode a text string by replacing characters with alphabet index.\n\n Parameters\n ----------\n txt : str\n A string to encode.\n\n Returns\n -------\n classes : list of int\n A sequence of alphabet index values corresponding to the given text."}
{"_id": "q_12849", "text": "Create a callable that returns a batch of training data.\n\n Parameters\n ----------\n steps : int\n Number of time steps in each batch.\n batch_size : int\n Number of training examples per batch.\n rng : :class:`numpy.random.RandomState` or int, optional\n A random number generator, or an integer seed for a random number\n generator. If not provided, the random number generator will be\n created with an automatically chosen seed.\n\n Returns\n -------\n batch : callable\n A callable that, when called, returns a batch of data that can be\n used to train a classifier model."}
{"_id": "q_12850", "text": "Add a convolutional weight array to this layer's parameters.\n\n Parameters\n ----------\n name : str\n Name of the parameter to add.\n mean : float, optional\n Mean value for randomly-initialized weights. Defaults to 0.\n std : float, optional\n Standard deviation of initial matrix values. Defaults to\n :math:`1 / sqrt(n_i + n_o)`.\n sparsity : float, optional\n Fraction of weights to set to zero. Defaults to 0."}
{"_id": "q_12851", "text": "Encode a dataset using the hidden layer activations of our network.\n\n Parameters\n ----------\n x : ndarray\n A dataset to encode. Rows of this dataset capture individual data\n points, while columns represent the variables in each data point.\n\n layer : str, optional\n The name of the hidden layer output to use. By default, we use\n the \"middle\" hidden layer---for example, for a 4,2,4 or 4,3,2,3,4\n autoencoder, we use the layer with size 2.\n\n sample : bool, optional\n If True, then draw a sample using the hidden activations as\n independent Bernoulli probabilities for the encoded data. This\n assumes the hidden layer has a logistic sigmoid activation function.\n\n Returns\n -------\n ndarray :\n The given dataset, encoded by the appropriate hidden layer\n activation."}
{"_id": "q_12852", "text": "Decode an encoded dataset by computing the output layer activation.\n\n Parameters\n ----------\n z : ndarray\n A matrix containing encoded data from this autoencoder.\n layer : int or str or :class:`Layer <layers.Layer>`, optional\n The index or name of the hidden layer that was used to encode `z`.\n\n Returns\n -------\n decoded : ndarray\n The decoded dataset."}
{"_id": "q_12853", "text": "Find a layer output name for the given layer specifier.\n\n Parameters\n ----------\n layer : None, int, str, or :class:`theanets.layers.Layer`\n A layer specification. If this is None, the \"middle\" layer in the\n network will be used (i.e., the layer at the middle index in the\n list of network layers). If this is an integer, the corresponding\n layer in the network's layer list will be used. If this is a string,\n the layer with the corresponding name will be returned.\n\n Returns\n -------\n name : str\n The fully-scoped output name for the desired layer."}
{"_id": "q_12854", "text": "Compute R^2 coefficient of determination for a given input.\n\n Parameters\n ----------\n x : ndarray (num-examples, num-inputs)\n An array containing data to be fed into the network. Multiple\n examples are arranged as rows in this array, with columns containing\n the variables for each example.\n\n Returns\n -------\n r2 : float\n The R^2 correlation between the prediction of this network and its\n input. This can serve as one measure of the information loss of the\n autoencoder."}
{"_id": "q_12855", "text": "Compute the logit values that underlie the softmax output.\n\n Parameters\n ----------\n x : ndarray (num-examples, num-variables)\n An array containing examples to classify. Examples are given as the\n rows in this array.\n\n Returns\n -------\n l : ndarray (num-examples, num-classes)\n An array of posterior class logit values, one row of logit values\n per row of input data."}
{"_id": "q_12856", "text": "Compute the mean accuracy on a set of labeled data.\n\n Parameters\n ----------\n x : ndarray (num-examples, num-variables)\n An array containing examples to classify. Examples are given as the\n rows in this array.\n y : ndarray (num-examples, )\n A vector of integer class labels, one for each row of input data.\n w : ndarray (num-examples, )\n A vector of weights, one for each row of input data.\n\n Returns\n -------\n score : float\n The (possibly weighted) mean accuracy of the model on the data."}
{"_id": "q_12857", "text": "Returns a callable that chooses sequences from netcdf data."}
{"_id": "q_12858", "text": "Load a saved network from a pickle file on disk.\n\n This method sets the ``network`` attribute of the experiment to the\n loaded network model.\n\n Parameters\n ----------\n filename : str\n Load the keyword arguments and parameters of a network from a pickle\n file at the named path. If this name ends in \".gz\" then the input\n will automatically be gunzipped; otherwise the input will be treated\n as a \"raw\" pickle.\n\n Returns\n -------\n network : :class:`Network <graph.Network>`\n A newly-constructed network, with topology and parameters loaded\n from the given pickle file."}
{"_id": "q_12859", "text": "Create a vector of randomly-initialized values.\n\n Parameters\n ----------\n size : int\n Length of vector to create.\n mean : float, optional\n Mean value for initial vector values. Defaults to 0.\n std : float, optional\n Standard deviation for initial vector values. Defaults to 1.\n rng : :class:`numpy.random.RandomState` or int, optional\n A random number generator, or an integer seed for a random number\n generator. If not provided, the random number generator will be created\n with an automatically chosen seed.\n\n Returns\n -------\n vector : numpy array\n An array containing random values. This often represents the bias for a\n layer of computation units."}
{"_id": "q_12860", "text": "Get the outputs from a network that match a pattern.\n\n Parameters\n ----------\n outputs : dict or sequence of (str, theano expression)\n Output expressions to filter for matches. If this is a dictionary, its\n ``items()`` will be processed for matches.\n patterns : sequence of str\n A sequence of glob-style patterns to match against. Any parameter\n matching any pattern in this sequence will be included in the match.\n\n Yields\n ------\n matches : pair of str, theano expression\n Generates a sequence of (name, expression) pairs. The name is the name\n of the output that matched, and the expression is the symbolic output in\n the network graph."}
{"_id": "q_12861", "text": "Get the parameters from a network that match a pattern.\n\n Parameters\n ----------\n layers : list of :class:`theanets.layers.Layer`\n A list of network layers to retrieve parameters from.\n patterns : sequence of str\n A sequence of glob-style patterns to match against. Any parameter\n matching any pattern in this sequence will be included in the match.\n\n Yields\n ------\n matches : pair of str, theano expression\n Generates a sequence of (name, expression) pairs. The name is the name\n of the parameter that matched, and the expression represents the\n parameter symbolically."}
{"_id": "q_12862", "text": "Construct common regularizers from a set of keyword arguments.\n\n Keyword arguments not listed below will be passed to\n :func:`Regularizer.build` if they specify the name of a registered\n :class:`Regularizer`.\n\n Parameters\n ----------\n graph : :class:`theanets.graph.Network`\n A network graph to regularize.\n\n regularizers : dict or tuple/list of :class:`Regularizer`, optional\n If this is a list or a tuple, the contents of the list will be returned\n as the regularizers. This is to permit custom lists of regularizers to\n be passed easily.\n\n If this is a dict, its contents will be added to the other keyword\n arguments passed in.\n\n rng : int or theano RandomStreams, optional\n If an integer is provided, it will be used to seed the random number\n generators for the dropout or noise regularizers. If a theano\n RandomStreams object is provided, it will be used directly. Defaults to\n 13.\n\n input_dropout : float, optional\n Apply dropout to input layers in the network graph, with this dropout\n rate. Defaults to 0 (no dropout).\n\n hidden_dropout : float, optional\n Apply dropout to hidden layers in the network graph, with this dropout\n rate. Defaults to 0 (no dropout).\n\n output_dropout : float, optional\n Apply dropout to the output layer in the network graph, with this\n dropout rate. Defaults to 0 (no dropout).\n\n input_noise : float, optional\n Apply noise to input layers in the network graph, with this standard\n deviation. Defaults to 0 (no noise).\n\n hidden_noise : float, optional\n Apply noise to hidden layers in the network graph, with this standard\n deviation. Defaults to 0 (no noise).\n\n output_noise : float, optional\n Apply noise to the output layer in the network graph, with this\n standard deviation. Defaults to 0 (no noise).\n\n Returns\n -------\n regs : list of :class:`Regularizer`\n A list of regularizers to apply to the given network graph."}
{"_id": "q_12863", "text": "A list of Theano variables used in this loss."}
{"_id": "q_12864", "text": "Build a Theano expression for computing the accuracy of graph output.\n\n Parameters\n ----------\n outputs : dict of Theano expressions\n A dictionary mapping network output names to Theano expressions\n representing the outputs of a computation graph.\n\n Returns\n -------\n acc : Theano expression\n A Theano expression representing the accuracy of the output compared\n to the target data."}
{"_id": "q_12865", "text": "Helper method for defining a basic loop in theano.\n\n Parameters\n ----------\n inputs : sequence of theano expressions\n Inputs to the scan operation.\n outputs : sequence of output specifiers\n Specifiers for the outputs of the scan operation. This should be a\n sequence containing:\n - None for values that are output by the scan but not tapped as\n inputs,\n - an integer or theano scalar (``ndim == 0``) indicating the batch\n size for initial zero state,\n - a theano tensor variable (``ndim > 0``) containing initial state\n data, or\n - a dictionary containing a full output specifier. See\n ``outputs_info`` in the Theano documentation for ``scan``.\n name : str, optional\n Name of the scan variable to create. Defaults to ``'scan'``.\n step : callable, optional\n The callable to apply in the loop. Defaults to :func:`self._step`.\n constants : sequence of tensor, optional\n A sequence of parameters, if any, needed by the step function.\n\n Returns\n -------\n output(s) : theano expression(s)\n Theano expression(s) representing output(s) from the scan.\n updates : sequence of update tuples\n A sequence of updates to apply inside a theano function."}
{"_id": "q_12866", "text": "Select a random sample of n items from xs."}
{"_id": "q_12867", "text": "Clear the current loss functions from the network and add a new one.\n\n All parameters and keyword arguments are passed to :func:`add_loss`\n after clearing the current losses."}
{"_id": "q_12868", "text": "Train the network until the trainer converges.\n\n All arguments are passed to :func:`itertrain`.\n\n Returns\n -------\n training : dict\n A dictionary of monitor values computed using the training dataset,\n at the conclusion of training. This dictionary will at least contain\n a 'loss' key that indicates the value of the loss function. Other\n keys may be available depending on the trainer being used.\n validation : dict\n A dictionary of monitor values computed using the validation\n dataset, at the conclusion of training."}
{"_id": "q_12869", "text": "Construct a string key for representing a computation graph.\n\n This key will be unique for a given (a) network topology, (b) set of\n losses, and (c) set of regularizers.\n\n Returns\n -------\n key : str\n A hash representing the computation graph for the current network."}
{"_id": "q_12870", "text": "Connect the layers in this network to form a computation graph.\n\n Parameters\n ----------\n regularizers : list of :class:`theanets.regularizers.Regularizer`\n A list of the regularizers to apply while building the computation\n graph.\n\n Returns\n -------\n outputs : list of Theano variables\n A list of expressions giving the output of each layer in the graph.\n updates : list of update tuples\n A list of updates that should be performed by a Theano function that\n computes something using this graph."}
{"_id": "q_12871", "text": "A list of Theano variables for feedforward computations."}
{"_id": "q_12872", "text": "A list of Theano variables for loss computations."}
{"_id": "q_12873", "text": "Get a parameter from a layer in the network.\n\n Parameters\n ----------\n which : int or str\n The layer that owns the parameter to return.\n\n If this is an integer, then 0 refers to the input layer, 1 refers\n to the first hidden layer, 2 to the second, and so on.\n\n If this is a string, the layer with the corresponding name, if any,\n will be used.\n\n param : int or str\n Name of the parameter to retrieve from the specified layer, or its\n index in the parameter list of the layer.\n\n Raises\n ------\n KeyError\n If there is no such layer, or if there is no such parameter in the\n specified layer.\n\n Returns\n -------\n param : Theano shared variable\n A shared parameter variable from the indicated layer."}
{"_id": "q_12874", "text": "Compute a forward pass of all layers from the given input.\n\n All keyword arguments are passed directly to :func:`build_graph`.\n\n Parameters\n ----------\n x : ndarray (num-examples, num-variables)\n An array containing data to be fed into the network. Multiple\n examples are arranged as rows in this array, with columns containing\n the variables for each example.\n\n Returns\n -------\n layers : list of ndarray (num-examples, num-units)\n The activation values of each layer in the network when given\n input `x`. For each of the hidden layers, an array is returned\n containing one row per input example; the columns of each array\n correspond to units in the respective layer. The \"output\" of the\n network is the last element of this list."}
{"_id": "q_12875", "text": "Compute a forward pass of the inputs, returning the network output.\n\n All keyword arguments end up being passed to :func:`build_graph`.\n\n Parameters\n ----------\n x : ndarray (num-examples, num-variables)\n An array containing data to be fed into the network. Multiple\n examples are arranged as rows in this array, with columns containing\n the variables for each example.\n\n Returns\n -------\n y : ndarray (num-examples, num-variables)\n Returns the values of the network output units when given input `x`.\n Rows in this array correspond to examples, and columns to output\n variables."}
{"_id": "q_12876", "text": "Compute R^2 coefficient of determination for a given labeled input.\n\n Parameters\n ----------\n x : ndarray (num-examples, num-inputs)\n An array containing data to be fed into the network. Multiple\n examples are arranged as rows in this array, with columns containing\n the variables for each example.\n y : ndarray (num-examples, num-outputs)\n An array containing expected target data for the network. Multiple\n examples are arranged as rows in this array, with columns containing\n the variables for each example.\n\n Returns\n -------\n r2 : float\n The R^2 correlation between the prediction of this network and its\n target output."}
{"_id": "q_12877", "text": "Return expressions to run as updates during network training.\n\n Returns\n -------\n updates : list of (parameter, expression) pairs\n A list of named parameter update expressions for this network."}
{"_id": "q_12878", "text": "Number of \"neurons\" in this layer's default output."}
{"_id": "q_12879", "text": "Create Theano variables representing the outputs of this layer.\n\n Parameters\n ----------\n inputs : dict of Theano expressions\n Symbolic inputs to this layer, given as a dictionary mapping string\n names to Theano expressions. Each string key should be of the form\n \"{layer_name}:{output_name}\" and refers to a specific output from\n a specific layer in the graph.\n\n Returns\n -------\n outputs : dict\n A dictionary mapping names to Theano expressions for the outputs\n from this layer.\n updates : sequence of (parameter, expression) tuples\n Updates that should be performed by a Theano function that computes\n something using this layer."}
{"_id": "q_12880", "text": "Bind this layer into a computation graph.\n\n This method is a wrapper for performing common initialization tasks. It\n calls :func:`resolve`, :func:`setup`, and :func:`log`.\n\n Parameters\n ----------\n graph : :class:`Network <theanets.graph.Network>`\n A computation network in which this layer is to be bound.\n reset : bool, optional\n If ``True`` (the default), reset the resolved layers for this layer.\n initialize : bool, optional\n If ``True`` (the default), initialize the parameters for this layer\n by calling :func:`setup`.\n\n Raises\n ------\n theanets.util.ConfigurationError :\n If an input cannot be resolved."}
{"_id": "q_12881", "text": "Resolve the names of inputs for this layer into shape tuples.\n\n Parameters\n ----------\n layers : list of :class:`Layer`\n A list of the layers that are available for resolving inputs.\n\n Raises\n ------\n theanets.util.ConfigurationError :\n If an input cannot be resolved."}
{"_id": "q_12882", "text": "Log some information about this layer."}
{"_id": "q_12883", "text": "Log information about this layer's parameters."}
{"_id": "q_12884", "text": "Helper method to format our name into a string."}
{"_id": "q_12885", "text": "Given a list of layers, find the layer output with the given name.\n\n Parameters\n ----------\n name : str\n Name of a layer to resolve.\n layers : list of :class:`theanets.layers.base.Layer`\n A list of layers to search in.\n\n Raises\n ------\n util.ConfigurationError :\n If there is no such layer, or if there are more than one.\n\n Returns\n -------\n name : str\n The fully-scoped name of the desired output.\n shape : tuple of None and/or int\n The shape of the named output."}
{"_id": "q_12886", "text": "Get a shared variable for a parameter by name.\n\n Parameters\n ----------\n key : str or int\n The name of the parameter to look up, or the index of the parameter\n in our parameter list. These are both dependent on the\n implementation of the layer.\n\n Returns\n -------\n param : shared variable\n A shared variable containing values for the given parameter.\n\n Raises\n ------\n KeyError\n If a param with the given name does not exist."}
{"_id": "q_12887", "text": "Helper method to create a new bias vector.\n\n Parameters\n ----------\n name : str\n Name of the parameter to add.\n size : int\n Size of the bias vector.\n mean : float, optional\n Mean value for randomly-initialized biases. Defaults to 0.\n std : float, optional\n Standard deviation for randomly-initialized biases. Defaults to 1."}
{"_id": "q_12888", "text": "Create a specification dictionary for this layer.\n\n Returns\n -------\n spec : dict\n A dictionary specifying the configuration of this layer."}
{"_id": "q_12889", "text": "Returns the image of a LogGabor\n\n Note that the convention for coordinates follows that of matrices:\n the origin is at the top left of the image, and coordinates are first\n the rows (vertical axis, going down) then the columns (horizontal axis,\n going right)."}
{"_id": "q_12890", "text": "Return true if substring is in string for files\n in specified path"}
{"_id": "q_12891", "text": "Find doublefann library"}
{"_id": "q_12892", "text": "Run SWIG with specified parameters"}
{"_id": "q_12893", "text": "Add an IntervalTier or a TextTier on the specified location.\n\n :param str name: Name of the tier, duplicate names are allowed.\n :param str tier_type: Type of the tier.\n :param int number: Place to insert the tier, when ``None`` the number\n is generated and the tier will be placed on the bottom.\n :returns: The created tier.\n :raises ValueError: If the number is out of bounds."}
{"_id": "q_12894", "text": "Remove a tier, when multiple tiers exist with that name only the\n first is removed.\n\n :param name_num: Name or number of the tier to remove.\n :type name_num: int or str\n :raises IndexError: If there is no tier with that number."}
{"_id": "q_12895", "text": "Gives a tier, when multiple tiers exist with that name only the\n first is returned.\n\n :param name_num: Name or number of the tier to return.\n :type name_num: int or str\n :returns: The tier.\n :raises IndexError: If the tier doesn't exist."}
{"_id": "q_12896", "text": "Convert the object to a pympi.Elan.Eaf object\n\n :param int pointlength: Length of respective interval from points in\n seconds\n :param bool skipempty: Skip the empty annotations\n :returns: :class:`pympi.Elan.Eaf` object\n :raises ImportError: If the Eaf module can't be loaded.\n :raises ValueError: If the pointlength is not strictly positive."}
{"_id": "q_12897", "text": "Add an interval to the IntervalTier.\n\n :param float begin: Start time of the interval.\n :param float end: End time of the interval.\n :param str value: Text of the interval.\n :param bool check: Flag to check for overlap.\n :raises Exception: If overlap, begin > end or wrong tiertype."}
{"_id": "q_12898", "text": "Remove an interval, if no interval is found nothing happens.\n\n :param int time: Time of the interval.\n :raises TierTypeException: If the tier is not a IntervalTier."}
{"_id": "q_12899", "text": "Remove a point, if no point is found nothing happens.\n\n :param int time: Time of the point.\n :raises TierTypeException: If the tier is not a TextTier."}
{"_id": "q_12900", "text": "Returns the true list of intervals including the empty intervals."}
{"_id": "q_12901", "text": "Function to pretty print the xml, meaning adding tabs and newlines.\n\n :param ElementTree.Element el: Current element.\n :param int level: Current level."}
{"_id": "q_12902", "text": "Add an annotation.\n\n :param str id_tier: Name of the tier.\n :param int start: Start time of the annotation.\n :param int end: End time of the annotation.\n :param str value: Value of the annotation.\n :param str svg_ref: Svg reference.\n :raises KeyError: If the tier is non existent.\n :raises ValueError: If one of the values is negative or start is bigger\n than end or if the tier already contains ref\n annotations."}
{"_id": "q_12903", "text": "Add an entry to a controlled vocabulary.\n\n :param str cv_id: Name of the controlled vocabulary to add an entry.\n :param str cve_id: Name of the entry.\n :param list values: List of values of the form:\n ``(value, lang_ref, description)`` where description can be\n ``None``.\n :param str ext_ref: External reference.\n :throws KeyError: If there is no controlled vocabulary with that id.\n :throws ValueError: If a language in one of the entries doesn't exist."}
{"_id": "q_12904", "text": "Add an external reference.\n\n :param str eid: Name of the external reference.\n :param str etype: Type of the external reference, has to be in\n ``['iso12620', 'ecv', 'cve_id', 'lexen_id', 'resource_url']``.\n :param str value: Value of the external reference.\n :throws KeyError: if etype is not in the list of possible types."}
{"_id": "q_12905", "text": "Add a language.\n\n :param str lang_id: ID of the language.\n :param str lang_def: Definition of the language(preferably ISO-639-3).\n :param str lang_label: Label of the language."}
{"_id": "q_12906", "text": "Add lexicon reference.\n\n :param str lrid: Lexicon reference internal ID.\n :param str name: Lexicon reference display name.\n :param str lrtype: Lexicon reference service type.\n :param str url: Lexicon reference service location\n :param str lexicon_id: Lexicon reference service id.\n :param str lexicon_name: Lexicon reference service name.\n :param str datacat_id: Lexicon reference identifier of data category.\n :param str datacat_name: Lexicon reference name of data category."}
{"_id": "q_12907", "text": "Add a linguistic type.\n\n :param str lingtype: Name of the linguistic type.\n :param str constraints: Constraint name.\n :param bool timealignable: Flag for time alignable.\n :param bool graphicreferences: Flag for graphic references.\n :param str extref: External reference.\n :param dict param_dict: TAG attributes, when this is not ``None`` it\n will ignore all other options. Please only use\n dictionaries coming from the\n :func:`get_parameters_for_linguistic_type`\n :raises KeyError: If a constraint is not defined"}
{"_id": "q_12908", "text": "Add a locale.\n\n :param str language_code: The language code of the locale.\n :param str country_code: The country code of the locale.\n :param str variant: The variant of the locale."}
{"_id": "q_12909", "text": "Add a secondary linked file.\n\n :param str file_path: Path of the file.\n :param str relpath: Relative path of the file.\n :param str mimetype: Mimetype of the file, if ``None`` it tries to\n guess it according to the file extension which currently only works\n for wav, mpg, mpeg and xml.\n :param int time_origin: Time origin for the media file.\n :param str assoc_with: Associated with field.\n :raises KeyError: If mimetype had to be guessed and a non standard\n extension or an unknown mimetype."}
{"_id": "q_12910", "text": "Clean up all unused timeslots.\n\n .. warning:: This can and will take time for larger tiers.\n\n When you want to do a lot of operations on a lot of tiers please unset\n the flags for cleaning in the functions so that the cleaning is only\n performed afterwards."}
{"_id": "q_12911", "text": "Extracts the selected time frame as a new object.\n\n :param int start: Start time.\n :param int end: End time.\n :returns: class:`pympi.Elan.Eaf` object containing the extracted frame."}
{"_id": "q_12912", "text": "Generate the next annotation id, this function is mainly used\n internally."}
{"_id": "q_12913", "text": "Generate the next timeslot id, this function is mainly used\n internally.\n\n :param int time: Initial time to assign to the timeslot.\n :raises ValueError: If the time is negative."}
{"_id": "q_12914", "text": "Give all child tiers for a tier.\n\n :param str id_tier: Name of the tier.\n :returns: List of all children\n :raises KeyError: If the tier is non existent."}
{"_id": "q_12915", "text": "Give the full time interval of the file. Note that the real interval\n can be longer because the sound file attached can be longer.\n\n :returns: Tuple of the form: ``(min_time, max_time)``."}
{"_id": "q_12916", "text": "Give the ref annotation after a time. If an annotation overlaps\n with ``time`` that annotation will be returned.\n\n :param str id_tier: Name of the tier.\n :param int time: Time to get the annotation after.\n :returns: Annotation after that time in a list\n :raises KeyError: If the tier is non existent."}
{"_id": "q_12917", "text": "Merge tiers into a new tier and when the gap is lower than the\n threshold glue the annotations together.\n\n :param list tiers: List of tier names.\n :param str tiernew: Name for the new tier, if ``None`` the name will be\n generated.\n :param int gapt: Threshold for the gaps, if this is set to 10 it\n means that all gaps below 10 are ignored.\n :param str sep: Separator for the merged annotations.\n :param bool safe: Ignore zero length annotations (when working with\n possibly malformed data).\n :returns: Name of the created tier.\n :raises KeyError: If a tier is non existent."}
{"_id": "q_12918", "text": "Remove all annotations from a tier.\n\n :param str id_tier: Name of the tier.\n :raises KeyError: If the tier is non existent."}
{"_id": "q_12919", "text": "Remove a controlled vocabulary description.\n\n :param str cv_id: Name of the controlled vocabulary.\n :param str cve_id: Name of the entry.\n :throws KeyError: If there is no controlled vocabulary with that name."}
{"_id": "q_12920", "text": "Remove all licenses matching both key and value.\n\n :param str name: Name of the license.\n :param str url: URL of the license."}
{"_id": "q_12921", "text": "Remove all linked files that match all the criteria; criteria that\n are ``None`` are ignored.\n\n :param str file_path: Path of the file.\n :param str relpath: Relative filepath.\n :param str mimetype: Mimetype of the file.\n :param int time_origin: Time origin.\n :param str ex_from: Extracted from."}
{"_id": "q_12922", "text": "Remove all properties matching both key and value.\n\n :param str key: Key of the property.\n :param str value: Value of the property."}
{"_id": "q_12923", "text": "Remove a reference annotation.\n\n :param str id_tier: Name of tier.\n :param int time: Time of the referenced annotation\n :raises KeyError: If the tier is non existent.\n :returns: Number of removed annotations."}
{"_id": "q_12924", "text": "Remove a tier.\n\n :param str id_tier: Name of the tier.\n :param bool clean: Flag to also clean the timeslots.\n :raises KeyError: If tier is non existent."}
{"_id": "q_12925", "text": "Remove multiple tiers, note that this is a lot faster than removing\n them individually because of the delayed cleaning of timeslots.\n\n :param list tiers: Names of the tiers to remove.\n :raises KeyError: If a tier is non existent."}
{"_id": "q_12926", "text": "Rename a tier. Note that this also renames the child tiers that have\n the tier as a parent.\n\n :param str id_from: Original name of the tier.\n :param str id_to: Target name of the tier.\n :throws KeyError: If the tier doesn't exist."}
{"_id": "q_12927", "text": "Shift all annotations in time. Annotations at the beginning can be\n squashed or discarded when a left shift is applied.\n\n :param int time: Time shift width, negative numbers make a left shift.\n :returns: Tuple of a list of squashed annotations and a list of removed\n annotations in the format: ``(tiername, start, end, value)``."}
{"_id": "q_12928", "text": "Get experiment or experiment job.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples for getting an experiment:\n\n \\b\n ```bash\n $ polyaxon experiment get # if experiment is cached\n ```\n\n \\b\n ```bash\n $ polyaxon experiment --experiment=1 get\n ```\n\n \\b\n ```bash\n $ polyaxon experiment -xp 1 --project=cats-vs-dogs get\n ```\n\n \\b\n ```bash\n $ polyaxon experiment -xp 1 -p alain/cats-vs-dogs get\n ```\n\n Examples for getting an experiment job:\n\n \\b\n ```bash\n $ polyaxon experiment get -j 1 # if experiment is cached\n ```\n\n \\b\n ```bash\n $ polyaxon experiment --experiment=1 get --job=10\n ```\n\n \\b\n ```bash\n $ polyaxon experiment -xp 1 --project=cats-vs-dogs get -j 2\n ```\n\n \\b\n ```bash\n $ polyaxon experiment -xp 1 -p alain/cats-vs-dogs get -j 2\n ```"}
{"_id": "q_12929", "text": "Delete experiment.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Example:\n\n \\b\n ```bash\n $ polyaxon experiment delete\n ```"}
{"_id": "q_12930", "text": "Update experiment.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon experiment -xp 2 update --description=\"new description for my experiments\"\n ```\n\n \\b\n ```bash\n $ polyaxon experiment -xp 2 update --tags=\"foo, bar\" --name=\"unique-name\"\n ```"}
{"_id": "q_12931", "text": "Restart experiment.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon experiment --experiment=1 restart\n ```"}
{"_id": "q_12932", "text": "Get experiment or experiment job statuses.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples getting experiment statuses:\n\n \\b\n ```bash\n $ polyaxon experiment statuses\n ```\n\n \\b\n ```bash\n $ polyaxon experiment -xp 1 statuses\n ```\n\n Examples getting experiment job statuses:\n\n \\b\n ```bash\n $ polyaxon experiment statuses -j 3\n ```\n\n \\b\n ```bash\n $ polyaxon experiment -xp 1 statuses --job 1\n ```"}
{"_id": "q_12933", "text": "Get experiment or experiment job resources.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples for getting experiment resources:\n\n \\b\n ```bash\n $ polyaxon experiment -xp 19 resources\n ```\n\n For GPU resources\n\n \\b\n ```bash\n $ polyaxon experiment -xp 19 resources --gpu\n ```\n\n Examples for getting experiment job resources:\n\n \\b\n ```bash\n $ polyaxon experiment -xp 19 resources -j 1\n ```\n\n For GPU resources\n\n \\b\n ```bash\n $ polyaxon experiment -xp 19 resources -j 1 --gpu\n ```"}
{"_id": "q_12934", "text": "Upload code of the current directory while respecting the .polyaxonignore file."}
{"_id": "q_12935", "text": "Get cluster and nodes info."}
{"_id": "q_12936", "text": "Check a polyaxonfile."}
{"_id": "q_12937", "text": "Decorator for CLI with Sentry client handling.\n\n see https://github.com/getsentry/raven-python/issues/904 for more details."}
{"_id": "q_12938", "text": "Commands for jobs."}
{"_id": "q_12939", "text": "Get job.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon job --job=1 get\n ```\n\n \\b\n ```bash\n $ polyaxon job --job=1 --project=project_name get\n ```"}
{"_id": "q_12940", "text": "Update job.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Example:\n\n \\b\n ```bash\n $ polyaxon job -j 2 update --description=\"new description for my job\"\n ```"}
{"_id": "q_12941", "text": "Restart job.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon job --job=1 restart\n ```"}
{"_id": "q_12942", "text": "Get job statuses.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon job -j 2 statuses\n ```"}
{"_id": "q_12943", "text": "Get job resources.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon job -j 2 resources\n ```\n\n For GPU resources\n\n \\b\n ```bash\n $ polyaxon job -j 2 resources --gpu\n ```"}
{"_id": "q_12944", "text": "Get job logs.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon job -j 2 logs\n ```\n\n \\b\n ```bash\n $ polyaxon job logs\n ```"}
{"_id": "q_12945", "text": "Prints as formatted JSON"}
{"_id": "q_12946", "text": "Login to Polyaxon."}
{"_id": "q_12947", "text": "Show current logged Polyaxon user."}
{"_id": "q_12948", "text": "Delete build job.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Example:\n\n \\b\n ```bash\n $ polyaxon build delete\n ```\n\n \\b\n ```bash\n $ polyaxon build -b 2 delete\n ```"}
{"_id": "q_12949", "text": "Update build.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Example:\n\n \\b\n ```bash\n $ polyaxon build -b 2 update --description=\"new description for my build\"\n ```"}
{"_id": "q_12950", "text": "Stop build job.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon build stop\n ```\n\n \\b\n ```bash\n $ polyaxon build -b 2 stop\n ```"}
{"_id": "q_12951", "text": "Get build job resources.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon build -b 2 resources\n ```\n\n For GPU resources\n\n \\b\n ```bash\n $ polyaxon build -b 2 resources --gpu\n ```"}
{"_id": "q_12952", "text": "Commands for bookmarks."}
{"_id": "q_12953", "text": "List bookmarked projects for user.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon bookmark projects\n ```\n\n \\b\n ```bash\n $ polyaxon bookmark -u adam projects\n ```"}
{"_id": "q_12954", "text": "Remove trailing spaces unless they are quoted with a backslash."}
{"_id": "q_12955", "text": "Yield all matching patterns for path."}
{"_id": "q_12956", "text": "Check whether a path is ignored. For directories, include a trailing slash."}
{"_id": "q_12957", "text": "Given a list of patterns, returns a if a path matches any pattern."}
{"_id": "q_12958", "text": "Returns a whether a path should be ignored or not."}
{"_id": "q_12959", "text": "Commands for experiment groups."}
{"_id": "q_12960", "text": "Get experiment group by uuid.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples:\n\n \\b\n ```bash\n $ polyaxon group -g 13 get\n ```"}
{"_id": "q_12961", "text": "Delete experiment group.\n\n Uses [Caching](/references/polyaxon-cli/#caching)"}
{"_id": "q_12962", "text": "Update experiment group.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Example:\n\n \\b\n ```bash\n $ polyaxon group -g 2 update --description=\"new description for this group\"\n ```\n\n \\b\n ```bash\n $ polyaxon update --tags=\"foo, bar\"\n ```"}
{"_id": "q_12963", "text": "Stop experiments in the group.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Examples: stop only pending experiments\n\n \\b\n ```bash\n $ polyaxon group stop --pending\n ```\n\n Examples: stop all unfinished\n\n \\b\n ```bash\n $ polyaxon group stop\n ```\n\n \\b\n ```bash\n $ polyaxon group -g 2 stop\n ```"}
{"_id": "q_12964", "text": "Set and get the global configurations."}
{"_id": "q_12965", "text": "Get the global config values by keys.\n\n Example:\n\n \\b\n ```bash\n $ polyaxon config get host http_port\n ```"}
{"_id": "q_12966", "text": "Teardown a polyaxon deployment given a config file."}
{"_id": "q_12967", "text": "Print the current version of the cli and platform."}
{"_id": "q_12968", "text": "Grant superuser role to a user.\n\n Example:\n\n \\b\n ```bash\n $ polyaxon superuser grant david\n ```"}
{"_id": "q_12969", "text": "Revoke superuser role to a user.\n\n Example:\n\n \\b\n ```bash\n $ polyaxon superuser revoke david\n ```"}
{"_id": "q_12970", "text": "Prints the notebook url for this project.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Example:\n\n \\b\n ```bash\n $ polyaxon notebook url\n ```"}
{"_id": "q_12971", "text": "Start a notebook deployment for this project.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Example:\n\n \\b\n ```bash\n $ polyaxon notebook start -f file -f file_override ...\n ```\n\n Example: upload before running\n\n \\b\n ```bash\n $ polyaxon -p user12/mnist notebook start -f file -u\n ```"}
{"_id": "q_12972", "text": "Upgrade deployment."}
{"_id": "q_12973", "text": "Teardown Polyaxon."}
{"_id": "q_12974", "text": "Delete project.\n\n Uses [Caching](/references/polyaxon-cli/#caching)"}
{"_id": "q_12975", "text": "Update project.\n\n Uses [Caching](/references/polyaxon-cli/#caching)\n\n Example:\n\n \\b\n ```bash\n $ polyaxon update foobar --description=\"Image Classification with DL using TensorFlow\"\n ```\n\n \\b\n ```bash\n $ polyaxon update mike1/foobar --description=\"Image Classification with DL using TensorFlow\"\n ```\n\n \\b\n ```bash\n $ polyaxon update --tags=\"foo, bar\"\n ```"}
{"_id": "q_12976", "text": "Download code of the current project."}
{"_id": "q_12977", "text": "Get the paragraph base embedding level. Returns 0 for LTR,\n 1 for RTL.\n\n `text` a unicode object.\n\n Set `upper_is_rtl` to True to treat upper case chars as strong 'R'\n for debugging (default: False)."}
{"_id": "q_12978", "text": "Get the paragraph base embedding level and direction,\n set the storage to the array of chars"}
{"_id": "q_12979", "text": "Apply X1 to X9 rules of the unicode algorithm.\n\n See http://unicode.org/reports/tr9/#Explicit_Levels_and_Directions"}
{"_id": "q_12980", "text": "Resolving neutral types. Implements N1 and N2\n\n See: http://unicode.org/reports/tr9/#Resolving_Neutral_Types"}
{"_id": "q_12981", "text": "Inject the current working file"}
{"_id": "q_12982", "text": "Convert compiled .ui file from PySide2 to Qt.py\n\n Arguments:\n lines (list): Each line of of .ui file\n\n Usage:\n >> with open(\"myui.py\") as f:\n .. lines = convert(f.readlines())"}
{"_id": "q_12983", "text": "Qt.py command-line interface"}
{"_id": "q_12984", "text": "Return the most desirable of the currently registered GUIs"}
{"_id": "q_12985", "text": "write the 'object' line; additional args are packed in string"}
{"_id": "q_12986", "text": "Write the complete dx object to the file.\n\n This is the simple OpenDX format which includes the data into\n the header via the 'object array ... data follows' statement.\n\n Only simple regular arrays are supported.\n\n The format should be compatible with VMD's dx reader plugin."}
{"_id": "q_12987", "text": "Read DX field from file.\n\n dx = OpenDX.field.read(dxfile)\n\n The classid is discarded and replaced with the one from the file."}
{"_id": "q_12988", "text": "Initialize the corresponding DXclass from the data.\n\n class = DXInitObject.initialize()"}
{"_id": "q_12989", "text": "Parse the dx file and construct a DX field object with component classes.\n\n A :class:`field` instance *DXfield* must be provided to be\n filled by the parser::\n\n DXfield_object = OpenDX.field(*args)\n parse(DXfield_object)\n\n A tokenizer turns the dx file into a stream of tokens. A\n hierarchy of parsers examines the stream. The level-0 parser\n ('general') distinguishes comments and objects (level-1). The\n object parser calls level-3 parsers depending on the object\n found. The basic idea is that of a 'state machine'. There is\n one parser active at any time. The main loop is the general\n parser.\n\n * Constructing the dx objects with classtype and classid is\n not implemented yet.\n * Unknown tokens raise an exception."}
{"_id": "q_12990", "text": "Level-0 parser and main loop.\n\n Look for a token that matches a level-1 parser and hand over control."}
{"_id": "q_12991", "text": "Level-1 parser for comments.\n\n pattern: #.*\n Append comment (with initial '# ' stripped) to all comments."}
{"_id": "q_12992", "text": "Level-1 parser for objects.\n\n pattern: 'object' id 'class' type ...\n\n id ::= integer|string|'\"'white space string'\"'\n type ::= string"}
{"_id": "q_12993", "text": "Level-2 parser for gridpositions.\n\n pattern:\n object 1 class gridpositions counts 97 93 99\n origin -46.5 -45.5 -48.5\n delta 1 0 0\n delta 0 1 0\n delta 0 0 1"}
{"_id": "q_12994", "text": "Level-2 parser for gridconnections.\n\n pattern:\n object 2 class gridconnections counts 97 93 99"}
{"_id": "q_12995", "text": "Split s into tokens and update the token buffer.\n\n __tokenize(string)\n\n New tokens are appended to the token buffer, discarding white\n space. Based on http://effbot.org/zone/xml-scanner.htm"}
{"_id": "q_12996", "text": "Return a mesh grid for N dimensions.\n\n The input are N arrays, each of which contains the values along one axis of\n the coordinate system. The arrays do not have to have the same number of\n entries. The function returns arrays that can be fed into numpy functions\n so that they produce values for *all* points spanned by the axes *arrs*.\n\n Original from\n http://stackoverflow.com/questions/1827489/numpy-meshgrid-in-3d and fixed.\n\n .. SeeAlso: :func:`numpy.meshgrid` for the 2D case."}
{"_id": "q_12997", "text": "Resample to a new regular grid.\n\n\n Parameters\n ----------\n factor : float\n The number of grid cells are scaled with `factor` in each\n dimension, i.e., ``factor * N_i`` cells along each\n dimension i.\n\n\n Returns\n -------\n Grid\n\n\n See Also\n --------\n resample"}
{"_id": "q_12998", "text": "Initializes Grid from a CCP4 file."}
{"_id": "q_12999", "text": "Initializes Grid from a OpenDX file."}
{"_id": "q_13000", "text": "Initialize Grid from gOpenMol plt file."}
{"_id": "q_13001", "text": "Returns the coordinates of the centers of all grid cells as an\n iterator."}
{"_id": "q_13002", "text": "Detect the byteorder of stream `ccp4file` and return format character.\n\n Try all endinaness and alignment options until we find\n something that looks sensible (\"MAPS \" in the first 4 bytes).\n\n (The ``machst`` field could be used to obtain endianness, but\n it does not specify alignment.)\n\n .. SeeAlso::\n\n :mod:`struct`"}
{"_id": "q_13003", "text": "The Message object has a circular reference on itself, thus we have to allow\n Type referencing by name. Here we lookup any Types referenced by name and\n replace with the real class."}
{"_id": "q_13004", "text": "Get the data for a specific device for a specific end date\n\n Keyword Arguments:\n limit - max 288\n end_date - is Epoch in milliseconds\n\n :return:"}
{"_id": "q_13005", "text": "Get all devices\n\n :return:\n A list of AmbientWeatherStation instances."}
{"_id": "q_13006", "text": "Start subscription manager for real time data."}
{"_id": "q_13007", "text": "Update home info."}
{"_id": "q_13008", "text": "Update home info async."}
{"_id": "q_13009", "text": "Return list of Tibber homes."}
{"_id": "q_13010", "text": "Return the currency."}
{"_id": "q_13011", "text": "Return the price unit."}
{"_id": "q_13012", "text": "Unsubscribe to Tibber rt subscription."}
{"_id": "q_13013", "text": "Removes the temporary value set for None attributes."}
{"_id": "q_13014", "text": "Get the data as it will be charted. The first set will be\n\t\tthe actual first data set. The second will be the sum of the\n\t\tfirst and the second, etc."}
{"_id": "q_13015", "text": "Draw a constant line on the y-axis with the label"}
{"_id": "q_13016", "text": "Cache the parameters necessary to transform x & y coordinates"}
{"_id": "q_13017", "text": "For every key, value pair, return the mapping for the\n\tequivalent value, key pair\n\n\t>>> reverse_mapping({'a': 'b'}) == {'b': 'a'}\n\tTrue"}
{"_id": "q_13018", "text": "Add a data set to the graph\n\n\t\t>>> graph.add_data({data:[1,2,3,4]}) # doctest: +SKIP\n\n\t\tNote that a 'title' key is ignored.\n\n\t\tMultiple calls to add_data will sum the elements, and the pie will\n\t\tdisplay the aggregated data. e.g.\n\n\t\t>>> graph.add_data({data:[1,2,3,4]}) # doctest: +SKIP\n\t\t>>> graph.add_data({data:[2,3,5,7]}) # doctest: +SKIP\n\n\t\tis the same as:\n\n\t\t>>> graph.add_data({data:[3,5,8,11]}) # doctest: +SKIP\n\n\t\tIf data is added of with differing lengths, the corresponding\n\t\tvalues will be assumed to be zero.\n\n\t\t>>> graph.add_data({data:[1,2,3,4]}) # doctest: +SKIP\n\t\t>>> graph.add_data({data:[5,7]}) # doctest: +SKIP\n\n\t\tis the same as:\n\n\t\t>>> graph.add_data({data:[5,7]}) # doctest: +SKIP\n\t\t>>> graph.add_data({data:[1,2,3,4]}) # doctest: +SKIP\n\n\t\tand\n\n\t\t>>> graph.add_data({data:[6,9,3,4]}) # doctest: +SKIP"}
{"_id": "q_13019", "text": "Add svg definitions"}
{"_id": "q_13020", "text": "Process the template with the data and\n\t\tconfig which has been set and return the resulting SVG.\n\n\t\tRaises ValueError when no data set has\n\t\tbeen added to the graph object."}
{"_id": "q_13021", "text": "Calculates the margin to the left of the plot area, setting\n\t\tborder_left."}
{"_id": "q_13022", "text": "Calculate the margin in pixels to the right of the plot area,\n\t\tsetting border_right."}
{"_id": "q_13023", "text": "Calculate the margin in pixels below the plot area, setting\n\t\tborder_bottom."}
{"_id": "q_13024", "text": "The central logic for drawing the graph.\n\n\t\tSets self.graph (the 'g' element in the SVG root)"}
{"_id": "q_13025", "text": "Draw the X axis labels"}
{"_id": "q_13026", "text": "Draw the X-axis guidelines"}
{"_id": "q_13027", "text": "Draw the Y-axis guidelines"}
{"_id": "q_13028", "text": "Base SVG Document Creation"}
{"_id": "q_13029", "text": "Get the stylesheets for this instance"}
{"_id": "q_13030", "text": "Build the execution environment."}
{"_id": "q_13031", "text": "Write the data to the output socket."}
{"_id": "q_13032", "text": "A Cherrypy wsgiserver-compatible wrapper."}
{"_id": "q_13033", "text": "\\\n Multipurpose method for sending responses to channel or via message to\n a single user"}
{"_id": "q_13034", "text": "\\\n Respond to periodic PING messages from server"}
{"_id": "q_13035", "text": "\\\n When the connection to the server is registered, send all pending\n data."}
{"_id": "q_13036", "text": "\\\n Register the worker with the boss"}
{"_id": "q_13037", "text": "\\\n Run tasks in a greenlet, pulling from the workers' task queue and\n reporting results to the command channel"}
{"_id": "q_13038", "text": "\\\n Decorator to ensure that commands only can come from the boss"}
{"_id": "q_13039", "text": "\\\n Actual messages listened for by the worker bot - note that worker-execute\n actually dispatches again by adding the command to the task queue,\n from which it is pulled then matched against self.task_patterns"}
{"_id": "q_13040", "text": "\\\n Work on a task from the BotnetBot"}
{"_id": "q_13041", "text": "\\\n Indicate that the worker with given nick is performing this task"}
{"_id": "q_13042", "text": "Passwords should be tough.\n\n That means they should use:\n - mixed case letters,\n - numbers,\n - (optionally) ascii symbols and spaces.\n\n The (contrversial?) decision to limit the passwords to ASCII only\n is for the sake of:\n - simplicity (no need to normalise UTF-8 input)\n - security (some character sets are visible as typed into password fields)\n\n TODO: In future, it may be worth considering:\n - rejecting common passwords. (Where can we get a list?)\n - rejecting passwords with too many repeated characters.\n\n It should be noted that no restriction has been placed on the length of the\n password here, as that can easily be achieved with use of the min_length\n attribute of a form/serializer field."}
{"_id": "q_13043", "text": "Use `token` to allow one-time access to a view.\n\n Set the user as a class attribute or raise an `InvalidExpiredToken`.\n\n Token expiry can be set in `settings` with `VERIFY_ACCOUNT_EXPIRY` and is\n set in seconds."}
{"_id": "q_13044", "text": "Delete the user's avatar.\n\n We set `user.avatar = None` instead of calling `user.avatar.delete()`\n to avoid test errors with `django.inmemorystorage`."}
{"_id": "q_13045", "text": "Throttle POST requests only."}
{"_id": "q_13046", "text": "single global executor"}
{"_id": "q_13047", "text": "single global client instance"}
{"_id": "q_13048", "text": "Check for a task state like `docker service ps id`"}
{"_id": "q_13049", "text": "Stop and remove the service\n\n Consider using stop/start when Docker adds support"}
{"_id": "q_13050", "text": "Check lower-cased email is unique."}
{"_id": "q_13051", "text": "Check the old password is valid and set the new password."}
{"_id": "q_13052", "text": "Create auth token. Differs from DRF that it always creates new token\n but not re-using them."}
{"_id": "q_13053", "text": "Validate `email` and send a request to confirm it."}
{"_id": "q_13054", "text": "Update token's expiration datetime on every auth action."}
{"_id": "q_13055", "text": "Email context to reset a user password."}
{"_id": "q_13056", "text": "Send a notification by email."}
{"_id": "q_13057", "text": "Password reset email handler."}
{"_id": "q_13058", "text": "Custom authentication to check if auth token has expired."}
{"_id": "q_13059", "text": "Displays bokeh output inside a notebook."}
{"_id": "q_13060", "text": "Returns a CustomJS callback that can be attached to send the\n widget state across the notebook comms."}
{"_id": "q_13061", "text": "The default Renderer function which handles HoloViews objects."}
{"_id": "q_13062", "text": "Forces a parameter value to be text"}
{"_id": "q_13063", "text": "Given a list of objects, returns a dictionary mapping from\n string name for the object to the object itself."}
{"_id": "q_13064", "text": "Returns the instance owning the supplied instancemethod or\n the class owning the supplied classmethod."}
{"_id": "q_13065", "text": "Take the http_auth value and split it into the attributes that\n carry the http auth username and password\n\n :param str|tuple http_auth: The http auth value"}
{"_id": "q_13066", "text": "Returns True if the cluster is up, False otherwise."}
{"_id": "q_13067", "text": "Get the basic info from the current cluster.\n\n :rtype: dict"}
{"_id": "q_13068", "text": "Coroutine. Queries cluster Health API.\n\n Returns a 2-tuple, where first element is request status, and second\n element is a dictionary with response data.\n\n :param params: dictionary of query parameters, will be handed over to\n the underlying :class:`~torando_elasticsearch.AsyncHTTPConnection`\n class for serialization"}
{"_id": "q_13069", "text": "Total CPU load for Synology DSM"}
{"_id": "q_13070", "text": "Total Memory Size of Synology DSM"}
{"_id": "q_13071", "text": "Total upload speed being used"}
{"_id": "q_13072", "text": "Returns all available volumes"}
{"_id": "q_13073", "text": "Total size of volume"}
{"_id": "q_13074", "text": "Average temperature of all disks making up the volume"}
{"_id": "q_13075", "text": "Function to handle sessions for a GET request"}
{"_id": "q_13076", "text": "Updates the various instanced modules"}
{"_id": "q_13077", "text": "Getter for various Utilisation variables"}
{"_id": "q_13078", "text": "Walk a py-radix tree and aggregate it.\n\n Arguments\n l_tree -- radix.Radix() object"}
{"_id": "q_13079", "text": "Metric for ordinal data."}
{"_id": "q_13080", "text": "Metric for ratio data."}
{"_id": "q_13081", "text": "Distances of the different possible values.\n\n Parameters\n ----------\n value_domain : array_like, with shape (V,)\n Possible values V the units can take.\n If the level of measurement is not nominal, it must be ordered.\n\n distance_metric : callable\n Callable that return the distance of two given values.\n\n n_v : ndarray, with shape (V,)\n Number of pairable elements for each value.\n\n Returns\n -------\n d : ndarray, with shape (V, V)\n Distance matrix for each value pair."}
{"_id": "q_13082", "text": "Return the value counts given the reliability data.\n\n Parameters\n ----------\n reliability_data : ndarray, with shape (M, N)\n Reliability data matrix which has the rate the i coder gave to the j unit, where M is the number of raters\n and N is the unit count.\n Missing rates are represented with `np.nan`.\n\n value_domain : array_like, with shape (V,)\n Possible values the units can take.\n\n Returns\n -------\n value_counts : ndarray, with shape (N, V)\n Number of coders that assigned a certain value to a determined unit, where N is the number of units\n and V is the value count."}
{"_id": "q_13083", "text": "Maps to fortran CDF_Inquire.\n\n Assigns parameters returned by CDF_Inquire\n to pysatCDF instance. Not intended\n for regular direct use by user."}
{"_id": "q_13084", "text": "Gets all CDF z-variable information, not data though.\n\n Maps to calls using var_inquire. Gets information on\n data type, number of elements, number of dimensions, etc."}
{"_id": "q_13085", "text": "Loads all variables from CDF.\n \n Note this routine is called automatically\n upon instantiation."}
{"_id": "q_13086", "text": "Calls fortran functions to load CDF variable data\n\n Parameters\n ----------\n names : list_like\n list of variables names\n data_types : list_like\n list of all loaded data type codes as used by CDF\n rec_nums : list_like\n list of record numbers in CDF file. Provided by variable_info\n dim_sizes :\n list of dimensions as provided by variable_info.\n input_type_code : int\n Specific type code to load\n func : function\n Fortran function via python interface that will be used for actual loading.\n epoch : bool\n Flag indicating type is epoch. Translates things to datetime standard.\n data_offset :\n Offset value to be applied to data. Required for unsigned integers in CDF.\n epoch16 : bool\n Flag indicating type is epoch16. Translates things to datetime standard."}
{"_id": "q_13087", "text": "Read all attribute properties, g, r, and z attributes"}
{"_id": "q_13088", "text": "Creates the context for a specific request."}
{"_id": "q_13089", "text": "The cached token of the current tenant."}
{"_id": "q_13090", "text": "Helper function for building an attribute dictionary."}
{"_id": "q_13091", "text": "Returns uptime in seconds or None, on Linux."}
{"_id": "q_13092", "text": "Returns uptime in seconds or None, on AmigaOS."}
{"_id": "q_13093", "text": "Returns uptime in seconds or None, on Plan 9."}
{"_id": "q_13094", "text": "Returns uptime in seconds or None, on Solaris."}
{"_id": "q_13095", "text": "Returns uptime in seconds or None, on Syllable."}
{"_id": "q_13096", "text": "Returns boot time if remotely possible, or None if not."}
{"_id": "q_13097", "text": "Make sure that the class behaves like the data structure that it\n is, so that we don't get a ListFile trying to represent a dict."}
{"_id": "q_13098", "text": "Initialize a new file that starts out with some data. Pass data\n as a list, dict, or JSON string."}
{"_id": "q_13099", "text": "Change the value of the given key in the given file to the given value"}
{"_id": "q_13100", "text": "Migrates the old config file format to the new one"}
{"_id": "q_13101", "text": "Start the webserver that will receive the code"}
{"_id": "q_13102", "text": "Wait until the user accepted or rejected the request"}
{"_id": "q_13103", "text": "Request new access information from reddit using the built in webserver"}
{"_id": "q_13104", "text": "Check if the token is still valid and requests a new if it is not\n\t\tvalid anymore\n\n\t\tCall this method before a call to praw\n\t\tif there might have passed more than one hour\n\n\t\tforce: if true, a new token will be retrieved no matter what"}
{"_id": "q_13105", "text": "Check if plugin is configured."}
{"_id": "q_13106", "text": "Create DynamoDB table for run manifests\n\n Arguments:\n dynamodb_client - boto3 DynamoDB client (not service)\n table_name - string representing existing table name"}
{"_id": "q_13107", "text": "Check if prefix is archived in Glacier, by checking storage class of\n first object inside that prefix\n\n Arguments:\n s3_client - boto3 S3 client (not service)\n bucket - valid extracted bucket (without protocol and prefix)\n example: sowplow-events-data\n prefix - valid S3 prefix (usually, run_id)\n example: snowplow-archive/enriched/archive/"}
{"_id": "q_13108", "text": "Extract date part from run id\n\n Arguments:\n key - full key name, such as shredded-archive/run=2012-12-11-01-31-33/\n (trailing slash is required)\n\n >>> extract_run_id('shredded-archive/run=2012-12-11-01-11-33/')\n 'shredded-archive/run=2012-12-11-01-11-33/'\n >>> extract_run_id('shredded-archive/run=2012-12-11-01-11-33')\n >>> extract_run_id('shredded-archive/run=2012-13-11-01-11-33/')"}
{"_id": "q_13109", "text": "Extracts Schema information from Iglu URI\n\n >>> extract_schema(\"iglu:com.acme-corporation_underscore/event_name-dash/jsonschema/1-10-1\")['vendor']\n 'com.acme-corporation_underscore'"}
{"_id": "q_13110", "text": "Create an Elasticsearch field name from a schema string"}
{"_id": "q_13111", "text": "Convert a contexts JSON to an Elasticsearch-compatible list of key-value pairs\n For example, the JSON\n\n {\n \"data\": [\n {\n \"data\": {\n \"unique\": true\n },\n \"schema\": \"iglu:com.acme/unduplicated/jsonschema/1-0-0\"\n },\n {\n \"data\": {\n \"value\": 1\n },\n \"schema\": \"iglu:com.acme/duplicated/jsonschema/1-0-0\"\n },\n {\n \"data\": {\n \"value\": 2\n },\n \"schema\": \"iglu:com.acme/duplicated/jsonschema/1-0-0\"\n }\n ],\n \"schema\": \"iglu:com.snowplowanalytics.snowplow/contexts/jsonschema/1-0-0\"\n }\n\n would become\n\n [\n (\"context_com_acme_duplicated_1\", [{\"value\": 1}, {\"value\": 2}]),\n (\"context_com_acme_unduplicated_1\", [{\"unique\": true}])\n ]"}
{"_id": "q_13112", "text": "Convert an unstructured event JSON to a list containing one Elasticsearch-compatible key-value pair\n For example, the JSON\n\n {\n \"data\": {\n \"data\": {\n \"key\": \"value\"\n },\n \"schema\": \"iglu:com.snowplowanalytics.snowplow/link_click/jsonschema/1-0-1\"\n },\n \"schema\": \"iglu:com.snowplowanalytics.snowplow/unstruct_event/jsonschema/1-0-0\"\n }\n\n would become\n\n [\n (\n \"unstruct_com_snowplowanalytics_snowplow_link_click_1\", {\n \"key\": \"value\"\n }\n )\n ]"}
{"_id": "q_13113", "text": "Convert a Snowplow enriched event TSV into a JSON"}
{"_id": "q_13114", "text": "Convert a Snowplow enriched event in the form of an array of fields into a JSON"}
{"_id": "q_13115", "text": "Sending ICMP packets.\n\n :return: ``ping`` command execution result.\n :rtype: :py:class:`.PingResult`\n :raises ValueError: If parameters not valid."}
{"_id": "q_13116", "text": "Print a set of variables"}
{"_id": "q_13117", "text": "Highlight common SQL words in a string."}
{"_id": "q_13118", "text": "Dump a variable to a HTML string with sensible output for template context fields.\n It filters out all fields which are not usable in a template context."}
{"_id": "q_13119", "text": "Briefly print the dictionary keys."}
{"_id": "q_13120", "text": "Send a verification email for the email address."}
{"_id": "q_13121", "text": "Send a notification about a duplicate signup."}
{"_id": "q_13122", "text": "Mark the instance's email as verified."}
{"_id": "q_13123", "text": "Determine if the confirmation has expired.\n\n Returns:\n bool:\n ``True`` if the confirmation has expired and ``False``\n otherwise."}
{"_id": "q_13124", "text": "Send a verification email to the user."}
{"_id": "q_13125", "text": "Create a new email and send a confirmation to it.\n\n Returns:\n The newly creating ``EmailAddress`` instance."}
{"_id": "q_13126", "text": "Update the instance the serializer is bound to.\n\n Args:\n instance:\n The instance the serializer is bound to.\n validated_data:\n The data to update the serializer with.\n\n Returns:\n The updated instance."}
{"_id": "q_13127", "text": "Validate the provided data.\n\n Returns:\n dict:\n The validated data.\n\n Raises:\n serializers.ValidationError:\n If the provided password is invalid."}
{"_id": "q_13128", "text": "Validate the provided confirmation key.\n\n Returns:\n str:\n The validated confirmation key.\n\n Raises:\n serializers.ValidationError:\n If there is no email confirmation with the given key or\n the confirmation has expired."}
{"_id": "q_13129", "text": "Validate the provided reset key.\n\n Returns:\n The validated key.\n\n Raises:\n serializers.ValidationError:\n If the provided key does not exist."}
{"_id": "q_13130", "text": "Create a new user from the data passed to the serializer.\n\n If the provided email has not been verified yet, the user is\n created and a verification email is sent to the address.\n Otherwise we send a notification to the email address that\n someone attempted to register with an email that's already been\n verified.\n\n Args:\n validated_data (dict):\n The data passed to the serializer after it has been\n validated.\n\n Returns:\n A new user created from the provided data."}
{"_id": "q_13131", "text": "Resend a verification email to the provided address.\n\n If the provided email is already verified no action is taken."}
{"_id": "q_13132", "text": "Create a new email address."}
{"_id": "q_13133", "text": "Return all unexpired password reset tokens."}
{"_id": "q_13134", "text": "Handle execution of the command."}
{"_id": "q_13135", "text": "Get a user by their ID.\n\n Args:\n user_id:\n The ID of the user to fetch.\n\n Returns:\n The user with the specified ID if they exist and ``None``\n otherwise."}
{"_id": "q_13136", "text": "Returns True if the given username and password authenticate for the\n given service. Returns False otherwise.\n\n ``username``: the username to authenticate\n\n ``password``: the password in plain text\n\n ``service``: the PAM service to authenticate against.\n Defaults to 'login'\n\n The above parameters can be strings or bytes. If they are strings,\n they will be encoded using the encoding given by:\n\n ``encoding``: the encoding to use for the above parameters if they\n are given as strings. Defaults to 'utf-8'\n\n ``resetcred``: Use the pam_setcred() function to\n reinitialize the credentials.\n Defaults to 'True'."}
{"_id": "q_13137", "text": "Save the provided data using the class' serializer.\n\n Args:\n request:\n The request being made.\n\n Returns:\n An ``APIResponse`` instance. If the request was successful\n the response will have a 200 status code and contain the\n serializer's data. Otherwise a 400 status code and the\n request's errors will be returned."}
{"_id": "q_13138", "text": "Return an HTML tree block describing the given object."}
{"_id": "q_13139", "text": "Return the dict key or attribute name of obj which refers to\n referent."}
{"_id": "q_13140", "text": "Parse the next token in the stream.\n\n Returns a `LatexToken`. Raises `LatexWalkerEndOfStream` if end of stream reached.\n\n .. deprecated:: 1.0\n Please use :py:meth:`LatexWalker.get_token()` instead."}
{"_id": "q_13141", "text": "Parses latex content `s`.\n\n Returns a tuple `(nodelist, pos, len)` where nodelist is a list of `LatexNode` 's.\n\n If `stop_upon_closing_brace` is given, then `len` includes the closing brace, but the\n closing brace is not included in any of the nodes in the `nodelist`.\n\n .. deprecated:: 1.0\n Please use :py:meth:`LatexWalker.get_latex_nodes()` instead."}
{"_id": "q_13142", "text": "Walk the object tree, ignoring duplicates and circular refs."}
{"_id": "q_13143", "text": "Extracts text from `content` meant for database indexing. `content` is\n some LaTeX code.\n\n .. deprecated:: 1.0\n Please use :py:class:`LatexNodes2Text` instead."}
{"_id": "q_13144", "text": "Set where to look for input files when encountering the ``\\\\input`` or\n ``\\\\include`` macro.\n\n Alternatively, you may also override :py:meth:`read_input_file()` to\n implement a custom file lookup mechanism.\n\n The argument `tex_input_directory` is the directory relative to which to\n search for input files.\n\n If `strict_input` is set to `True`, then we always check that the\n referenced file lies within the subtree of `tex_input_directory`,\n prohibiting for instance hacks with '..' in filenames or using symbolic\n links to refer to files out of the directory tree.\n\n The argument `latex_walker_init_args` allows you to specify the parse\n flags passed to the constructor of\n :py:class:`pylatexenc.latexwalker.LatexWalker` when parsing the input\n file."}
{"_id": "q_13145", "text": "This method may be overridden to implement a custom lookup mechanism when\n encountering ``\\\\input`` or ``\\\\include`` directives.\n\n The default implementation looks for a file of the given name relative\n to the directory set by :py:meth:`set_tex_input_directory()`. If\n `strict_input=True` was set, we ensure strictly that the file resides in\n a subtree of the reference input directory (after canonicalizing the\n paths and resolving all symlinks).\n\n You may override this method to obtain the input data in however way you\n see fit. (In that case, a call to `set_tex_input_directory()` may not\n be needed as that function simply sets properties which are used by the\n default implementation of `read_input_file()`.)\n\n This function accepts the referred filename as argument (the argument to\n the ``\\\\input`` macro), and should return a string with the file\n contents (or generate a warning or raise an error)."}
{"_id": "q_13146", "text": "Parses the given `latex` code and returns its textual representation.\n\n The `parse_flags` are the flags to give on to the\n :py:class:`pylatexenc.latexwalker.LatexWalker` constructor."}
{"_id": "q_13147", "text": "Imports the media fixtures files finder class described by import_path, where\n import_path is the full Python path to the class."}
{"_id": "q_13148", "text": "Looks for files in the app directories."}
{"_id": "q_13149", "text": "Find a requested media file in an app's media fixtures locations."}
{"_id": "q_13150", "text": "Deletes the given relative path using the destination storage backend."}
{"_id": "q_13151", "text": "Checks if the target file should be deleted if it already exists"}
{"_id": "q_13152", "text": "Attempt to link ``path``"}
{"_id": "q_13153", "text": "Attempt to copy ``path`` with storage"}
{"_id": "q_13154", "text": "Set the current space to Space ``name`` and return it.\n\n If called without arguments, the current space is returned.\n Otherwise, the current space is set to the space named ``name``\n and the space is returned."}
{"_id": "q_13155", "text": "Create a child space.\n\n Args:\n name (str, optional): Name of the space. Defaults to ``SpaceN``,\n where ``N`` is a number determined automatically.\n bases (optional): A space or a sequence of spaces to be the base\n space(s) of the created space.\n formula (optional): Function to specify the parameters of\n dynamic child spaces. The signature of this function is used\n for setting parameters for dynamic child spaces.\n This function should return a mapping of keyword arguments\n to be passed to this method when the dynamic child spaces\n are created.\n\n Returns:\n The new child space."}
{"_id": "q_13156", "text": "Create a child space from an Excel range.\n\n To use this method, ``openpyxl`` package must be installed.\n\n Args:\n book (str): Path to an Excel file.\n range_ (str): Range expression, such as \"A1\", \"$G4:$K10\",\n or named range \"NamedRange1\".\n sheet (str): Sheet name (case ignored).\n name (str, optional): Name of the space. Defaults to ``SpaceN``,\n where ``N`` is a number determined automatically.\n names_row (optional): an index number indicating\n what row contains the names of cells and parameters.\n Defaults to the top row (0).\n param_cols (optional): a sequence of index numbers\n indicating parameter columns.\n Defaults to only the leftmost column ([0]).\n names_col (optional): an index number, starting from 0,\n indicating what column contains additional parameters.\n param_rows (optional): a sequence of index numbers, starting from\n 0, indicating rows of additional parameters, in case cells are\n defined in two dimensions.\n transpose (optional): Defaults to ``False``.\n If set to ``True``, \"row(s)\" and \"col(s)\" in the parameter\n names are interpreted inversely, i.e.\n all indexes passed to \"row(s)\" parameters are interpreted\n as column indexes,\n and all indexes passed to \"col(s)\" parameters as row indexes.\n space_param_order: a sequence to specify space parameters and\n their orders. The elements of the sequence denote the indexes\n of ``param_cols`` elements, and optionally the index of\n ``param_rows`` elements shifted by the length of\n ``param_cols``. The elements of this parameter and\n ``cell_param_order`` must not overlap.\n cell_param_order (optional): a sequence to reorder the parameters.\n The elements of the sequence denote the indexes of\n ``param_cols`` elements, and optionally the index of\n ``param_rows`` elements shifted by the length of\n ``param_cols``. The elements of this parameter and\n ``cell_space_order`` must not overlap.\n\n Returns:\n The new child space created from the Excel range."}
{"_id": "q_13157", "text": "Create a new child space.\n\n Args:\n name (str): Name of the space. If omitted, the space is\n created automatically.\n bases: If specified, the new space becomes a derived space of\n the `base` space.\n formula: Function whose parameters used to set space parameters.\n refs: a mapping of refs to be added.\n arguments: ordered dict of space parameter names to their values.\n source: A source module from which cell definitions are read.\n prefix: Prefix to the autogenerated name when name is None."}
{"_id": "q_13158", "text": "Create a node from arguments and return it"}
{"_id": "q_13159", "text": "Get node from object name and arg string\n\n Not Used. Left for future reference purpose."}
{"_id": "q_13160", "text": "Custom showtraceback for monkey-patching IPython's InteractiveShell\n\n https://stackoverflow.com/questions/1261668/cannot-override-sys-excepthook"}
{"_id": "q_13161", "text": "Monkey patch shell's error handler.\n\n This method is to monkey-patch the showtraceback method of\n IPython's InteractiveShell to\n\n __IPYTHON__ is not detected when starting an IPython kernel,\n so this method is called from start_kernel in spyder-modelx."}
{"_id": "q_13162", "text": "Restore default IPython showtraceback"}
{"_id": "q_13163", "text": "Retrieve an object by its absolute name."}
{"_id": "q_13164", "text": "Display the model tree window.\n\n Args:\n model: :class:`Model <modelx.core.model.Model>` object.\n Defaults to the current model.\n\n Warnings:\n For this function to work with Spyder, *Graphics backend* option\n of Spyder must be set to *inline*."}
{"_id": "q_13165", "text": "Get interfaces from their implementations."}
{"_id": "q_13166", "text": "Update all LazyEvals in self\n\n self.lzy_evals must be set to LazyEval object(s) enough to\n update all owned LazyEval objects."}
{"_id": "q_13167", "text": "Return parameter names if the parameters are shareable among cells.\n\n Parameters are shareable among multiple cells when all the cells\n have the parameters in the same order if they ever have any.\n\n For example, if cells are foo(), bar(x), baz(x, y), then\n ('x', 'y') are shareable parameters among them, as 'x' and 'y'\n appear in the same order in the parameter list if they ever appear.\n\n Args:\n cells: An iterator yielding cells.\n\n Returns:\n None if parameters are not shareable,\n tuple of shareable parameter names,\n () if cells are all scalars."}
{"_id": "q_13168", "text": "Make a copy of itself and return it."}
{"_id": "q_13169", "text": "Return the value of the cells."}
{"_id": "q_13170", "text": "Return a range as nested dict of openpyxl cells."}
{"_id": "q_13171", "text": "Read values from an Excel range into a dictionary.\n\n `range_expr` is either a range address string, such as \"A1\", \"$C$3:$E$5\",\n or a defined name string for a range, such as \"NamedRange1\".\n If a range address is provided, `sheet` argument must also be provided.\n If a named range is provided and `sheet` is not, book level defined name\n is searched. If `sheet` is also provided, sheet level defined name for the\n specified `sheet` is searched.\n If range_expr points to a single cell, its value is returned.\n\n `dictgenerator` is a generator function that yields keys and values of\n the returned dictionary. The Excel range, as a nested tuple of openpyxl's\n Cell objects, is passed to the generator function as its single argument.\n If not specified, the default generator is used, which maps tuples of row and\n column indexes, both starting with 0, to their values.\n\n Args:\n filepath (str): Path to an Excel file.\n range_expr (str): Range expression, such as \"A1\", \"$G4:$K10\",\n or named range \"NamedRange1\"\n sheet (str): Sheet name (case ignored).\n None if book level defined range name is passed as `range_expr`.\n dict_generator: A generator function taking a nested tuple of cells\n as a single parameter.\n\n Returns:\n Nested list containing range values."}
{"_id": "q_13172", "text": "Get range from a workbook.\n\n A workbook can contain multiple definitions for a single name,\n as a name can be defined for the entire book or for\n a particular sheet.\n\n If sheet is None, the book-wide def is searched,\n otherwise sheet-local def is looked up.\n\n Args:\n book: An openpyxl workbook object.\n rangename (str): Range expression, such as \"A1\", \"$G4:$K10\",\n named range \"NamedRange1\".\n sheetname (str, optional): None for book-wide name def,\n sheet name for sheet-local named range.\n\n Returns:\n Range object specified by the name."}
{"_id": "q_13173", "text": "Calculate the Method Resolution Order of bases using the C3 algorithm.\n\n Code modified from\n http://code.activestate.com/recipes/577748-calculate-the-mro-of-a-class/\n\n Args:\n bases: sequence of direct base spaces.\n\n Returns:\n mro as a list of bases including node itself"}
{"_id": "q_13174", "text": "Create a new code object by altering some of ``code`` attributes\n\n Args:\n code: code object\n attrs: a mapping of names of code object attrs to their values"}
{"_id": "q_13175", "text": "Replace local variables with free variables\n\n Warnings:\n This function does not work."}
{"_id": "q_13176", "text": "Remove the last redundant token from a lambda expression\n\n lambda x: return x)\n ^\n Returns the string without the irrelevant tokens\n that inspect.getsource returns on a lambda expr"}
{"_id": "q_13177", "text": "Find the first FuncDef ast object in source"}
{"_id": "q_13178", "text": "Extract parameters from a function definition"}
{"_id": "q_13179", "text": "Extract names from a function definition\n\n Looks for a function definition in the source.\n Only the first function definition is examined.\n\n Returns:\n a list of names (identifiers) used in the body of the function\n excluding function parameters."}
{"_id": "q_13180", "text": "True if src is a function definition"}
{"_id": "q_13181", "text": "Remove decorators from function definition"}
{"_id": "q_13182", "text": "True if only one lambda expression is included"}
{"_id": "q_13183", "text": "Reload the source function from the source module.\n\n **Internal use only**\n Update the source function of the formula.\n This method is used to update the underlying formula\n when the source code of the module from which the source function\n is read is modified.\n\n If the formula was not created from a module, an error is raised.\n If ``module_`` is not given, the source module of the formula is\n reloaded. If ``module_`` is given and matches the source module,\n then the module_ is used without being reloaded.\n If ``module_`` is given and does not match the source module of\n the formula, an error is raised.\n\n Args:\n module_: A ``ModuleSource`` object\n\n Returns:\n self"}
{"_id": "q_13184", "text": "Get long description from README."}
{"_id": "q_13185", "text": "Create a cells in the space.\n\n Args:\n name: If omitted, the cells is named automatically ``CellsN``,\n where ``N`` is an available number.\n func: The function to define the formula of the cells.\n\n Returns:\n The new cells."}
{"_id": "q_13186", "text": "Create a cells from a module."}
{"_id": "q_13187", "text": "Retrieve an object by a dotted name relative to the space."}
{"_id": "q_13188", "text": "Create a new dynamic root space."}
{"_id": "q_13189", "text": "Create a dynamic root space\n\n Called from interface methods"}
{"_id": "q_13190", "text": "Implementation of attribute setting\n\n ``space.name = value`` by user script\n Called from ``Space.__setattr__``"}
{"_id": "q_13191", "text": "Implementation of attribute deletion\n\n ``del space.name`` by user script\n Called from ``StaticSpace.__delattr__``"}
{"_id": "q_13192", "text": "Implementation of cells deletion\n\n ``del space.name`` where name is a cells, or\n ``del space.cells['name']``"}
{"_id": "q_13193", "text": "Convert multiple cells to a frame.\n\n If args is an empty sequence, all values are included.\n If args is specified, cellsiter must have shareable parameters.\n\n Args:\n cellsiter: A mapping from cells names to CellsImpl objects.\n args: A sequence of arguments"}
{"_id": "q_13194", "text": "Convert a CellImpl into a Series.\n\n `args` must be a sequence of argkeys.\n\n `args` can be longer or shorter than the number of cell's parameters.\n If shorter, defaults are filled in if any, else an error is raised.\n If longer, redundant args are ignored."}
{"_id": "q_13195", "text": "Remove all nodes with `obj` and their descendants."}
{"_id": "q_13196", "text": "Return nodes with `obj`."}
{"_id": "q_13197", "text": "Rename self. Must be called only by its system."}
{"_id": "q_13198", "text": "Clear values and nodes of `obj` and their dependants."}
{"_id": "q_13199", "text": "Create or get a base space for a tuple of bases"}
{"_id": "q_13200", "text": "Check if C3 MRO is possible with given bases"}
{"_id": "q_13201", "text": "Get all the boards for this organisation. Returns a list of Board s.\n\n Returns:\n list(Board): The boards attached to this organisation"}
{"_id": "q_13202", "text": "Get all members attached to this organisation. Returns a list of\n Member objects\n\n Returns:\n list(Member): The members attached to this organisation"}
{"_id": "q_13203", "text": "Remove a member from the organisation. Returns JSON of all members if\n successful or raises an Unauthorised exception if not."}
{"_id": "q_13204", "text": "Add a member to the board using the id. Membership type can be\n normal or admin. Returns JSON of all members if successful or raises an\n Unauthorised exception if not."}
{"_id": "q_13205", "text": "Get information for this list. Returns a dictionary of values."}
{"_id": "q_13206", "text": "Returns a list of command names supported"}
{"_id": "q_13207", "text": "Returns a dictionary value"}
{"_id": "q_13208", "text": "Parses an environment config"}
{"_id": "q_13209", "text": "Adds configuration files to an existing archive"}
{"_id": "q_13210", "text": "Swaps cnames for an environment"}
{"_id": "q_13211", "text": "Uploads an application archive version to s3"}
{"_id": "q_13212", "text": "Returns whether or not the given app_name exists"}
{"_id": "q_13213", "text": "Creates a new environment"}
{"_id": "q_13214", "text": "Returns whether or not the given environment exists"}
{"_id": "q_13215", "text": "Returns the environments"}
{"_id": "q_13216", "text": "Creates an application version"}
{"_id": "q_13217", "text": "Deletes unused versions"}
{"_id": "q_13218", "text": "Describes events from the given environment"}
{"_id": "q_13219", "text": "Get all information for this Label. Returns a dictionary of values."}
{"_id": "q_13220", "text": "Update the current label. Returns a new Label object."}
{"_id": "q_13221", "text": "Returns a URL that needs to be opened in a browser to retrieve an\n access token."}
{"_id": "q_13222", "text": "adds arguments for the swap urls command"}
{"_id": "q_13223", "text": "Swaps old and new URLs.\n If old_environment was active, new_environment will become the active environment"}
{"_id": "q_13224", "text": "Get information for this card. Returns a dictionary of values."}
{"_id": "q_13225", "text": "Get board information for this card. Returns a Board object.\n\n Returns:\n Board: The board this card is attached to"}
{"_id": "q_13226", "text": "Get the checklists for this card. Returns a list of Checklist objects.\n\n Returns:\n list(Checklist): The checklists attached to this card"}
{"_id": "q_13227", "text": "Adds a comment to this card by the current user."}
{"_id": "q_13228", "text": "Adds an attachment to this card."}
{"_id": "q_13229", "text": "Add a checklist to this card. Returns a Checklist object."}
{"_id": "q_13230", "text": "Add a label to this card, from a dictionary."}
{"_id": "q_13231", "text": "Add an existing label to this card."}
{"_id": "q_13232", "text": "Add a member to this card. Returns a list of Member objects."}
{"_id": "q_13233", "text": "Get all cards this member is attached to. Return a list of Card\n objects.\n\n Returns:\n list(Card): Return all cards this member is attached to"}
{"_id": "q_13234", "text": "Get all organisations this member is attached to. Return a list of\n Organisation objects.\n\n Returns:\n list(Organisation): Return all organisations this member is\n attached to"}
{"_id": "q_13235", "text": "Create a new board. name is required in query_params. Returns a Board\n object.\n\n Returns:\n Board: Returns the created board"}
{"_id": "q_13236", "text": "Enable singledispatch for class methods.\n\n See http://stackoverflow.com/a/24602374/274318"}
{"_id": "q_13237", "text": "The init command"}
{"_id": "q_13238", "text": "Get all information for this board. Returns a dictionary of values."}
{"_id": "q_13239", "text": "Get the lists attached to this board. Returns a list of List objects.\n\n Returns:\n list(List): The lists attached to this board"}
{"_id": "q_13240", "text": "Get the labels attached to this board. Returns a list of Label\n objects.\n\n Returns:\n list(Label): The labels attached to this board"}
{"_id": "q_13241", "text": "Get a Card for a given card id. Returns a Card object.\n\n Returns:\n Card: The card with the given card_id"}
{"_id": "q_13242", "text": "Get the checklists for this board. Returns a list of Checklist objects."}
{"_id": "q_13243", "text": "Get the Organisation for this board. Returns Organisation object.\n\n Returns:\n list(Organisation): The organisation attached to this board"}
{"_id": "q_13244", "text": "Update this board's information. Returns a new board."}
{"_id": "q_13245", "text": "Create a label for a board. Returns a new Label object."}
{"_id": "q_13246", "text": "Waits for an environment to be healthy"}
{"_id": "q_13247", "text": "Get all information for this Checklist. Returns a dictionary of values."}
{"_id": "q_13248", "text": "Get card this checklist is on."}
{"_id": "q_13249", "text": "Update the current checklist. Returns a new Checklist object."}
{"_id": "q_13250", "text": "Deletes an item from this checklist."}
{"_id": "q_13251", "text": "Rename the current checklist item. Returns a new ChecklistItem object."}
{"_id": "q_13252", "text": "Set the state of the current checklist item. Returns a new ChecklistItem object."}
{"_id": "q_13253", "text": "Args for the init command"}
{"_id": "q_13254", "text": "Create Board object from a JSON object\n\n Returns:\n Board: The board from the given `board_json`."}
{"_id": "q_13255", "text": "Create List object from JSON object\n\n Returns:\n List: The list from the given `list_json`."}
{"_id": "q_13256", "text": "Create a Card object from JSON object\n\n Returns:\n Card: The card from the given `card_json`."}
{"_id": "q_13257", "text": "Create a Checklist object from JSON object\n\n Returns:\n Checklist: The checklist from the given `checklist_json`."}
{"_id": "q_13258", "text": "Create a Member object from JSON object\n\n Returns:\n Member: The member from the given `member_json`."}
{"_id": "q_13259", "text": "Get an organisation\n\n Returns:\n Organisation: The organisation with the given `id`"}
{"_id": "q_13260", "text": "Get a checklist\n\n Returns:\n Checklist: The checklist with the given `id`"}
{"_id": "q_13261", "text": "Get a member or your current member if `id` wasn't given.\n\n Returns:\n Member: The member with the given `id`, defaults to the\n logged in member."}
{"_id": "q_13262", "text": "Lists solution stacks"}
{"_id": "q_13263", "text": "Picks only a coda from a Hangul letter. It returns ``None`` if the\n given letter is not Hangul."}
{"_id": "q_13264", "text": "Get root domain from url.\n Will prune away query strings, url paths, protocol prefix and sub-domains\n Exceptions will be raised on invalid urls"}
{"_id": "q_13265", "text": "Write the password in the file."}
{"_id": "q_13266", "text": "Use an integer list to split the string\n contained in `text`.\n\n Arguments:\n ----------\n text : str, same length as locations.\n locations : list<int>, contains values\n 'SHOULD_SPLIT', 'UNDECIDED', and\n 'SHOULD_NOT_SPLIT'. Will create\n strings between each 'SHOULD_SPLIT'\n locations.\n Returns:\n --------\n Generator<str> : the substrings of text\n corresponding to the slices given\n in locations."}
{"_id": "q_13267", "text": "Regex that adds a 'SHOULD_SPLIT' marker at the end\n location of each matching group of the given regex,\n and adds a 'SHOULD_SPLIT' at the beginning of the\n matching group. Each character within the matching\n group will be marked as 'SHOULD_NOT_SPLIT'.\n\n Arguments\n ---------\n regex : re.Expression\n text : str, same length as split_locations\n split_locations : list<int>, split decisions."}
{"_id": "q_13268", "text": "Main command line interface."}
{"_id": "q_13269", "text": "Return the AES mode, or a list of valid AES modes, if mode == None"}
{"_id": "q_13270", "text": "Applicable for all platforms where the schemes that are integrated\n with your environment do not fit."}
{"_id": "q_13271", "text": "Send a CONNECT control packet."}
{"_id": "q_13272", "text": "Handles CONNACK packet from the server"}
{"_id": "q_13273", "text": "Encode a UTF-8 string into MQTT format.\n Returns a bytearray"}
{"_id": "q_13274", "text": "Decodes a UTF-8 string from an encoded MQTT bytearray.\n Returns the decoded string and remaining bytearray to be parsed"}
{"_id": "q_13275", "text": "Encodes a 16 bit unsigned integer into MQTT format.\n Returns a bytearray"}
{"_id": "q_13276", "text": "Encodes value into a multibyte sequence defined by MQTT protocol.\n Used to encode packet length fields."}
{"_id": "q_13277", "text": "Decodes a variable length value defined in the MQTT protocol.\n This value typically represents remaining field lengths"}
{"_id": "q_13278", "text": "Encode and store a CONNECT control packet. \n @raise e: C{ValueError} if any encoded topic string exceeds 65535 bytes.\n @raise e: C{ValueError} if encoded username string exceeds 65535 bytes."}
{"_id": "q_13279", "text": "Decode a CONNECT control packet."}
{"_id": "q_13280", "text": "Encode and store a CONNACK control packet."}
{"_id": "q_13281", "text": "Encode and store a SUBACK control packet."}
{"_id": "q_13282", "text": "Encode and store an UNSUBSCRIBE control packet\n @raise e: C{ValueError} if any encoded topic string exceeds 65535 bytes"}
{"_id": "q_13283", "text": "Decode an UNSUBACK control packet."}
{"_id": "q_13284", "text": "Encode and store an UNSUBACK control packet"}
{"_id": "q_13285", "text": "Encode and store a PUBLISH control packet.\n @raise e: C{ValueError} if encoded topic string exceeds 65535 bytes.\n @raise e: C{ValueError} if encoded packet size exceeds 268435455 bytes.\n @raise e: C{TypeError} if C{data} is not a string, bytearray, int, boolean or float."}
{"_id": "q_13286", "text": "Decode a PUBLISH control packet."}
{"_id": "q_13287", "text": "Decode a PUBREL control packet."}
{"_id": "q_13288", "text": "Return url for call method.\n\n :param method (optional): `str` method name.\n :returns: `str` URL."}
{"_id": "q_13289", "text": "Fetch a deposit identifier.\n\n :param record_uuid: Record UUID.\n :param data: Record content.\n :returns: A :class:`invenio_pidstore.fetchers.FetchedPID` that contains\n data['_deposit']['id'] as pid_value."}
{"_id": "q_13290", "text": "Mint a deposit identifier.\n\n A PID with the following characteristics is created:\n\n .. code-block:: python\n\n {\n \"object_type\": \"rec\",\n \"object_uuid\": record_uuid,\n \"pid_value\": \"<new-pid-value>\",\n \"pid_type\": \"depid\",\n }\n\n The following deposit meta information are updated:\n\n .. code-block:: python\n\n deposit['_deposit'] = {\n \"id\": \"<new-pid-value>\",\n \"status\": \"draft\",\n }\n\n :param record_uuid: Record UUID.\n :param data: Record content.\n :returns: A :class:`invenio_pidstore.models.PersistentIdentifier` object."}
{"_id": "q_13291", "text": "Create Invenio-Deposit-UI blueprint.\n\n See: :data:`invenio_deposit.config.DEPOSIT_RECORDS_UI_ENDPOINTS`.\n\n :param endpoints: List of endpoints configuration.\n :returns: The configured blueprint."}
{"_id": "q_13292", "text": "Default view method.\n\n Sends ``record_viewed`` signal and renders template."}
{"_id": "q_13293", "text": "Create a new deposit identifier.\n\n :param object_type: The object type (Default: ``None``)\n :param object_uuid: The object UUID (Default: ``None``)\n :param kwargs: It contains the pid value."}
{"_id": "q_13294", "text": "Base permission factory that check OAuth2 scope and can_method.\n\n :param can_method: Permission check function that accept a record in input\n and return a boolean.\n :param myscopes: List of scopes required to permit the access.\n :returns: A :class:`flask_principal.Permission` factory."}
{"_id": "q_13295", "text": "Refresh the list of blocks to the disk, collectively"}
{"_id": "q_13296", "text": "Create error handlers on blueprint."}
{"_id": "q_13297", "text": "Handle deposit action.\n\n After the action is executed, a\n :class:`invenio_deposit.signals.post_action` signal is sent.\n\n Permission required: `update_permission_factory`.\n\n :param pid: Pid object (from url).\n :param record: Record object resolved from the pid.\n :param action: The action to execute."}
{"_id": "q_13298", "text": "Get files.\n\n Permission required: `read_permission_factory`.\n\n :param pid: Pid object (from url).\n :param record: Record object resolved from the pid.\n :returns: The files."}
{"_id": "q_13299", "text": "Handle POST deposit files.\n\n Permission required: `update_permission_factory`.\n\n :param pid: Pid object (from url).\n :param record: Record object resolved from the pid."}
{"_id": "q_13300", "text": "Handle the sort of the files through the PUT deposit files.\n\n Expected input in body PUT:\n\n .. code-block:: javascript\n\n [\n {\n \"id\": 1\n },\n {\n \"id\": 2\n },\n ...\n ]\n\n Permission required: `update_permission_factory`.\n\n :param pid: Pid object (from url).\n :param record: Record object resolved from the pid.\n :returns: The files."}
{"_id": "q_13301", "text": "Handle the file rename through the PUT deposit file.\n\n Permission required: `update_permission_factory`.\n\n :param pid: Pid object (from url).\n :param record: Record object resolved from the pid.\n :param key: Unique identifier for the file in the deposit."}
{"_id": "q_13302", "text": "Load records."}
{"_id": "q_13303", "text": "Load default location."}
{"_id": "q_13304", "text": "Load deposit JSON schemas."}
{"_id": "q_13305", "text": "Load deposit schema forms."}
{"_id": "q_13306", "text": "Factory for record links generation.\n\n The dictionary is formed as:\n\n .. code-block:: python\n\n {\n 'files': '/url/to/files',\n 'publish': '/url/to/publish',\n 'edit': '/url/to/edit',\n 'discard': '/url/to/discard',\n ...\n }\n\n :param pid: The record PID object.\n :returns: A dictionary that contains all the links."}
{"_id": "q_13307", "text": "Load minter from PIDStore registry based on given value.\n\n :param value: Name of the minter.\n :returns: The minter."}
{"_id": "q_13308", "text": "Load schema from JSONSchema registry based on given value.\n\n :param value: Schema path, relative to the directory when it was\n registered.\n :returns: The schema absolute path."}
{"_id": "q_13309", "text": "Build a JSON Flask response using the given data.\n\n :param pid: The `invenio_pidstore.models.PersistentIdentifier` of the\n record.\n :param data: The record metadata.\n :returns: A Flask response with JSON data.\n :rtype: :py:class:`flask.Response`."}
{"_id": "q_13310", "text": "Serialize a object.\n\n :param obj: A :class:`invenio_files_rest.models.ObjectVersion` instance.\n :returns: A dictionary with the fields to serialize."}
{"_id": "q_13311", "text": "JSON Files Serializer.\n\n :param objs: A list of :class:`invenio_files_rest.models.ObjectVersion`\n instances.\n :param status: An HTTP Status. (Default: ``None``)\n :returns: A Flask response with JSON data.\n :rtype: :py:class:`flask.Response`."}
{"_id": "q_13312", "text": "Index the record after publishing.\n\n .. note:: if the record is not published, it doesn't index.\n\n :param sender: Who send the signal.\n :param action: Action executed by the sender. (Default: ``None``)\n :param pid: PID object. (Default: ``None``)\n :param deposit: Deposit object. (Default: ``None``)"}
{"_id": "q_13313", "text": "Decorator to update index.\n\n :param method: Function wrapped. (Default: ``None``)\n :param delete: If `True` delete the indexed record. (Default: ``None``)"}
{"_id": "q_13314", "text": "Preserve fields in deposit.\n\n :param method: Function to execute. (Default: ``None``)\n :param result: If `True` returns the result of method execution,\n otherwise `self`. (Default: ``True``)\n :param fields: List of fields to preserve (default: ``('_deposit',)``)."}
{"_id": "q_13315", "text": "Return an instance of deposit PID."}
{"_id": "q_13316", "text": "Convert deposit schema to a valid record schema."}
{"_id": "q_13317", "text": "Convert record schema to a valid deposit schema.\n\n :param record: The record used to build deposit schema.\n :returns: The absolute URL to the schema or `None`."}
{"_id": "q_13318", "text": "Return a tuple with PID and published record."}
{"_id": "q_13319", "text": "Merge changes with latest published version."}
{"_id": "q_13320", "text": "Store changes on current instance in database and index it."}
{"_id": "q_13321", "text": "Create a deposit.\n\n Initialize the follow information inside the deposit:\n\n .. code-block:: python\n\n deposit['_deposit'] = {\n 'id': pid_value,\n 'status': 'draft',\n 'owners': [user_id],\n 'created_by': user_id,\n }\n\n The deposit index is updated.\n\n :param data: Input dictionary to fill the deposit.\n :param id_: Default uuid for the deposit.\n :returns: The new created deposit."}
{"_id": "q_13322", "text": "Publish the deposit after editing."}
{"_id": "q_13323", "text": "Publish a deposit.\n\n If it's the first time:\n\n * it calls the minter and sets the following meta information inside\n the deposit:\n\n .. code-block:: python\n\n deposit['_deposit'] = {\n 'type': pid_type,\n 'value': pid_value,\n 'revision_id': 0,\n }\n\n * A dump of all information inside the deposit is done.\n\n * A snapshot of the files is done.\n\n Otherwise, publish the new edited version.\n In this case, if in the meanwhile someone already published a new\n version, it'll try to merge the changes with the latest version.\n\n .. note:: no need for indexing as it calls `self.commit()`.\n\n Status required: ``'draft'``.\n\n :param pid: Force the new pid value. (Default: ``None``)\n :param id_: Force the new uuid value as deposit id. (Default: ``None``)\n :returns: Returns itself."}
{"_id": "q_13324", "text": "Edit deposit.\n\n #. The signal :data:`invenio_records.signals.before_record_update`\n is sent before the edit execution.\n\n #. The following meta information are saved inside the deposit:\n\n .. code-block:: python\n\n deposit['_deposit']['pid'] = record.revision_id\n deposit['_deposit']['status'] = 'draft'\n deposit['$schema'] = deposit_schema_from_record_schema\n\n #. The signal :data:`invenio_records.signals.after_record_update` is\n sent after the edit execution.\n\n #. The deposit index is updated.\n\n Status required: `published`.\n\n .. note:: the process fails if the pid has status\n :attr:`invenio_pidstore.models.PIDStatus.REGISTERED`.\n\n :param pid: Force a pid object. (Default: ``None``)\n :returns: A new Deposit object."}
{"_id": "q_13325", "text": "Clear only drafts.\n\n Status required: ``'draft'``.\n\n Meta information inside `_deposit` are preserved."}
{"_id": "q_13326", "text": "Patch only drafts.\n\n Status required: ``'draft'``.\n\n Meta information inside `_deposit` are preserved."}
{"_id": "q_13327", "text": "List of Files inside the deposit.\n\n Add validation on ``sort_by`` method: if, at the time of files access,\n the record is not a ``'draft'`` then a\n :exc:`invenio_pidstore.errors.PIDInvalidAction` is raised."}
{"_id": "q_13328", "text": "Converts a reStructuredText string into its node"}
{"_id": "q_13329", "text": "Hook the directives when Sphinx asks for it."}
{"_id": "q_13330", "text": "API call to get a list of templates"}
{"_id": "q_13331", "text": "API call to get a specific template"}
{"_id": "q_13332", "text": "API call to create a new locale and version of a template"}
{"_id": "q_13333", "text": "API call to update a template version"}
{"_id": "q_13334", "text": "API call to get a list of snippets"}
{"_id": "q_13335", "text": "API call to get a specific Snippet"}
{"_id": "q_13336", "text": "API call to create a Snippet"}
{"_id": "q_13337", "text": "API call to send an email"}
{"_id": "q_13338", "text": "Return instances of all other tabs that are members of the tab's\n tab group."}
{"_id": "q_13339", "text": "Process and prepare tabs.\n\n This includes steps like updating references to the current tab,\n filtering out hidden tabs, sorting tabs etc...\n\n Args:\n tabs:\n The list of tabs to process.\n current_tab:\n The reference to the currently loaded tab.\n group_current_tab:\n The reference to the active tab in the current tab group. For\n parent tabs, this is different than for the current tab group.\n\n Returns:\n Processed list of tabs. Note that the method may have side effects."}
{"_id": "q_13340", "text": "Adds tab information to context.\n\n To retrieve a list of all group tab instances, use\n ``{{ tabs }}`` in your template.\n\n The id of the current tab is added as ``current_tab_id`` to the\n template context.\n\n If the current tab has a parent tab the parent's id is added to\n the template context as ``parent_tab_id``. Instances of all tabs\n of the parent level are added as ``parent_tabs`` to the context.\n\n If the current tab has children they are added to the template\n context as ``child_tabs``."}
{"_id": "q_13341", "text": "Convert a string into a valid python attribute name.\n This function is called to convert ASCII strings to something that can pass as\n python attribute name, to be used with namedtuples.\n\n >>> str(normalize_name('class'))\n 'class_'\n >>> str(normalize_name('a-name'))\n 'a_name'\n >>> str(normalize_name('a n\\u00e4me'))\n 'a_name'\n >>> str(normalize_name('Name'))\n 'Name'\n >>> str(normalize_name(''))\n '_'\n >>> str(normalize_name('1'))\n '_1'"}
{"_id": "q_13342", "text": "Function to format data for cluster fitting.\n\n Parameters\n ----------\n data : dict\n A dict of data, containing all elements of\n `analytes` as items.\n\n Returns\n -------\n A data array for initial cluster fitting."}
{"_id": "q_13343", "text": "Fit KMeans clustering algorithm to data.\n\n Parameters\n ----------\n data : array-like\n A dataset formatted by `classifier.fitting_data`.\n n_clusters : int\n The number of clusters in the data.\n **kwargs\n passed to `sklearn.cluster.KMeans`.\n\n Returns\n -------\n Fitted `sklearn.cluster.KMeans` object."}
{"_id": "q_13344", "text": "Translate cluster identity back to original data size.\n\n Parameters\n ----------\n size : int\n size of original dataset\n sampled : array-like\n integer array describing location of finite values\n in original data.\n clusters : array-like\n integer array of cluster identities\n\n Returns\n -------\n list of cluster identities the same length as original\n data. Where original data are non-finite, returns -2."}
{"_id": "q_13345", "text": "Sort clusters by the concentration of a particular analyte.\n\n Parameters\n ----------\n data : dict\n A dataset containing sort_by as a key.\n cs : array-like\n An array of clusters, the same length as values of data.\n sort_by : str\n analyte to sort the clusters by\n\n Returns\n -------\n array of clusters, sorted by mean value of sort_by analyte."}
{"_id": "q_13346", "text": "Returns the total number of data points in values of dict.\n\n Parameters\n ----------\n d : dict"}
{"_id": "q_13347", "text": "Returns total length of analysis."}
{"_id": "q_13348", "text": "Returns formatted element name.\n\n Parameters\n ----------\n s : str\n of format [A-Z][a-z]?[0-9]+\n\n Returns\n -------\n str\n LaTeX formatted string with superscript numbers."}
{"_id": "q_13349", "text": "Converts analytes in format 'Al27' to '27Al'.\n\n Parameters\n ----------\n s : str\n of format [0-9]{1,3}[A-z]{1,3}\n\n Returns\n -------\n str\n Name in format [A-z]{1,3}[0-9]{1,3}"}
{"_id": "q_13350", "text": "Consecutively numbers contiguous booleans in array.\n\n i.e. a boolean sequence, and resulting numbering\n T F T T T F T F F F T T F\n 0-1 1 1 - 2 ---3 3 -\n\n where ' - ' indicates False.\n\n Parameters\n ----------\n bool_array : array_like\n Array of booleans.\n nstart : int\n The number of the first boolean group."}
{"_id": "q_13351", "text": "Generate boolean array from list of limit tuples.\n\n Parameters\n ----------\n tuples : array_like\n [2, n] array of (start, end) values\n x : array_like\n x scale the tuples are mapped to\n\n Returns\n -------\n array_like\n boolean array, True where x is between each pair of tuples."}
{"_id": "q_13352", "text": "Returns rolling - window smooth of a.\n\n Function to efficiently calculate the rolling mean of a numpy\n array using 'stride_tricks' to split up a 1D array into an ndarray of\n sub - sections of the original array, of dimensions [len(a) - win, win].\n\n Parameters\n ----------\n a : array_like\n The 1D array to calculate the rolling gradient of.\n win : int\n The width of the rolling window.\n\n Returns\n -------\n array_like\n Gradient of a, assuming a constant integer x - scale."}
{"_id": "q_13353", "text": "Identify clusters using K - Means algorithm.\n\n Parameters\n ----------\n data : array_like\n array of size [n_samples, n_features].\n n_clusters : int\n The number of clusters expected in the data.\n\n Returns\n -------\n dict\n boolean array for each identified cluster."}
{"_id": "q_13354", "text": "Identify clusters using DBSCAN algorithm.\n\n Parameters\n ----------\n data : array_like\n array of size [n_samples, n_features].\n eps : float\n The minimum 'distance' points must be apart for them to be in the\n same cluster. Defaults to 0.3. Note: If the data are normalised\n (they should be for DBSCAN) this is in terms of total sample\n variance. Normalised data have a mean of 0 and a variance of 1.\n min_samples : int\n The minimum number of samples within distance `eps` required\n to be considered as an independent cluster.\n n_clusters : int\n The number of clusters expected. If specified, `eps` will be\n incrementally reduced until the expected number of clusters is\n found.\n maxiter : int\n The maximum number of iterations DBSCAN will run.\n\n Returns\n -------\n dict\n boolean array for each identified cluster and core samples."}
{"_id": "q_13355", "text": "Returns list of SRMS defined in the SRM database"}
{"_id": "q_13356", "text": "Read LAtools configuration file, and return parameters as dict."}
{"_id": "q_13357", "text": "Adds a new configuration to latools.cfg.\n\n Parameters\n ----------\n config_name : str\n The name of the new configuration. This should be descriptive\n (e.g. UC Davis Foram Group)\n srmfile : str (optional)\n The location of the srm file used for calibration.\n dataformat : str (optional)\n The location of the dataformat definition to use.\n base_on : str\n The name of the existing configuration to base the new one on.\n If either srmfile or dataformat are not specified, the new\n config will copy this information from the base_on config.\n make_default : bool\n Whether or not to make the new configuration the default\n for future analyses. Default = False.\n\n Returns\n -------\n None"}
{"_id": "q_13358", "text": "Change the default configuration."}
{"_id": "q_13359", "text": "Exclude all data after the first excluded portion.\n\n This makes sense for spot measurements where, because\n of the signal mixing inherent in LA-ICPMS, once a\n contaminant is ablated, it will always be present to\n some degree in signals from further down the ablation\n pit.\n\n Parameters\n ----------\n filt : boolean array\n threshold : int\n\n Returns\n -------\n filter : boolean array"}
{"_id": "q_13360", "text": "Plot a detailed autorange report for this sample."}
{"_id": "q_13361", "text": "Transform boolean arrays into list of limit pairs.\n\n Gets Time limits of signal/background boolean arrays and stores them as\n sigrng and bkgrng arrays. These arrays can be saved by 'save_ranges' in\n the analyse object."}
{"_id": "q_13362", "text": "Apply calibration to data.\n\n The `calib_dict` must be calculated at the `analyse` level,\n and passed to this calibrate function.\n\n Parameters\n ----------\n calib_dict : dict\n A dict of calibration values to apply to each analyte.\n\n Returns\n -------\n None"}
{"_id": "q_13363", "text": "Calculate sample statistics.\n\n Returns samples, analytes, and arrays of statistics\n of shape (samples, analytes). Statistics are calculated\n from the 'focus' data variable, so output depends on how\n the data have been processed.\n\n Parameters\n ----------\n analytes : array_like\n List of analytes to calculate the statistic on\n filt : bool or str\n The filter to apply to the data when calculating sample statistics.\n bool: True applies filter specified in filt.switches.\n str: logical string specifying a particular filter\n stat_fns : dict\n Dict of {name: function} pairs. Functions that take a single\n array_like input, and return a single statistic. Function should\n be able to cope with NaN values.\n eachtrace : bool\n True: per - ablation statistics\n False: whole sample statistics\n\n Returns\n -------\n None"}
{"_id": "q_13364", "text": "Function for calculating the ablation time for each\n ablation.\n\n Returns\n -------\n dict of times for each ablation."}
{"_id": "q_13365", "text": "Apply threshold filter.\n\n Generates threshold filters for the given analytes above and below\n the specified threshold.\n\n Two filters are created with prefixes '_above' and '_below'.\n '_above' keeps all the data above the threshold.\n '_below' keeps all the data below the threshold.\n\n i.e. to select data below the threshold value, you should turn the\n '_above' filter off.\n\n Parameters\n ----------\n analyte : TYPE\n Description of `analyte`.\n threshold : TYPE\n Description of `threshold`.\n\n Returns\n -------\n None"}
{"_id": "q_13366", "text": "Apply gradient threshold filter.\n\n Generates threshold filters for the given analytes above and below\n the specified threshold.\n\n Two filters are created with prefixes '_above' and '_below'.\n '_above' keeps all the data above the threshold.\n '_below' keeps all the data below the threshold.\n\n i.e. to select data below the threshold value, you should turn the\n '_above' filter off.\n\n Parameters\n ----------\n analyte : str\n Description of `analyte`.\n threshold : float\n Description of `threshold`.\n win : int\n Window used to calculate gradients (n points)\n recalc : bool\n Whether or not to re-calculate the gradients.\n\n Returns\n -------\n None"}
{"_id": "q_13367", "text": "Calculate local correlation between two analytes.\n\n Parameters\n ----------\n x_analyte, y_analyte : str\n The names of the x and y analytes to correlate.\n window : int, None\n The rolling window used when calculating the correlation.\n filt : bool\n Whether or not to apply existing filters to the data before\n calculating this filter.\n recalc : bool\n If True, the correlation is re-calculated, even if it is already present.\n\n Returns\n -------\n None"}
{"_id": "q_13368", "text": "Make new filter from combination of other filters.\n\n Parameters\n ----------\n name : str\n The name of the new filter. Should be unique.\n filt_str : str\n A logical combination of partial strings which will create\n the new filter. For example, 'Albelow & Mnbelow' will combine\n all filters that partially match 'Albelow' with those that\n partially match 'Mnbelow' using the 'AND' logical operator.\n\n Returns\n -------\n None"}
{"_id": "q_13369", "text": "Plot histograms of all items in dat.\n\n Parameters\n ----------\n dat : dict\n Data in {key: array} pairs.\n keys : array-like\n The keys in dat that you want to plot. If None,\n all are plotted.\n bins : int\n The number of bins in each histogram (default = 25)\n logy : bool\n If true, y axis is a log scale.\n cmap : dict\n The colours that the different items should be. If None,\n all are grey.\n\n Returns\n -------\n fig, axes"}
{"_id": "q_13370", "text": "Find an instance of the type class `TC` for type `G`.\n Iterates `G`'s parent classes, looking up instances for each,\n checking whether the instance is a subclass of the target type\n class `TC`."}
{"_id": "q_13371", "text": "Convenience factory function for csv reader.\n\n :param lines_or_file: Content to be read. Either a file handle, a file path or a list\\\n of strings.\n :param namedtuples: Yield namedtuples.\n :param dicts: Yield dicts.\n :param encoding: Encoding of the content.\n :param kw: Keyword parameters are passed through to csv.reader.\n :return: A generator over the rows."}
{"_id": "q_13372", "text": "Loads a DataFrame of all elements and isotopes.\n\n Scraped from https://www.webelements.com/\n\n Returns\n -------\n pandas DataFrame with columns (element, atomic_number, isotope, atomic_weight, percent)"}
{"_id": "q_13373", "text": "generate single escape sequence mapping."}
{"_id": "q_13374", "text": "Squash and reduce the input stack.\n Removes the elements of input that match predicate and only keeps the last\n match at the end of the stack."}
{"_id": "q_13375", "text": "Calculate gaussian weighted moving mean, SD and SE.\n\n Parameters\n ----------\n x : array-like\n The independent variable\n yarray : (n,m) array\n Where n = x.size, and m is the number of\n dependent variables to smooth.\n x_new : array-like\n The new x-scale to interpolate the data\n fwhm : int\n FWHM of the gaussian kernel.\n\n Returns\n -------\n (mean, std, se) : tuple"}
{"_id": "q_13376", "text": "Gaussian function.\n\n Parameters\n ----------\n x : array_like\n Independent variable.\n *p : parameters unpacked to A, mu, sigma\n A = amplitude, mu = centre, sigma = width\n\n Return\n ------\n array_like\n gaussian described by *p."}
{"_id": "q_13377", "text": "Calculate the standard error of a."}
{"_id": "q_13378", "text": "Despikes data with exponential decay and noise filters.\n\n Parameters\n ----------\n expdecay_despiker : bool\n Whether or not to apply the exponential decay filter.\n exponent : None or float\n The exponent for the exponential decay filter. If None,\n it is determined automatically using `find_expocoef`.\n tstep : None or float\n The time interval between measurements. If None, it is\n determined automatically from the Time variable.\n noise_despiker : bool\n Whether or not to apply the standard deviation spike filter.\n win : int\n The rolling window over which the spike filter calculates\n the trace statistics.\n nlim : float\n The number of standard deviations above the rolling mean\n that data are excluded.\n exponentplot : bool\n Whether or not to show a plot of the automatically determined\n exponential decay exponent.\n maxiter : int\n The max number of times that the filter is applied.\n focus_stage : str\n Which stage of analysis to apply processing to. \n Defaults to 'rawdata'. Can be one of:\n * 'rawdata': raw data, loaded from csv file.\n * 'despiked': despiked data.\n * 'signal'/'background': isolated signal and background data.\n Created by self.separate, after signal and background\n regions have been identified by self.autorange.\n * 'bkgsub': background subtracted data, created by \n self.bkg_correct\n * 'ratios': element ratio data, created by self.ratio.\n * 'calibrated': ratio data calibrated to standards, created by self.calibrate.\n\n Returns\n -------\n None"}
{"_id": "q_13379", "text": "Background calculation using a gaussian weighted mean.\n\n Parameters\n ----------\n analytes : str or iterable\n Which analyte or analytes to calculate.\n weight_fwhm : float\n The full-width-at-half-maximum of the gaussian used\n to calculate the weighted average.\n n_min : int\n Background regions with fewer than n_min points\n will not be included in the fit.\n cstep : float or None\n The interval between calculated background points.\n filter : bool\n If true, apply a rolling filter to the isolated background regions\n to exclude regions with anomalously high values. If True, two parameters\n alter the filter's behaviour:\n f_win : int\n The size of the rolling window\n f_n_lim : float\n The number of standard deviations above the rolling mean\n to set the threshold.\n focus_stage : str\n Which stage of analysis to apply processing to. \n Defaults to 'despiked' if present, or 'rawdata' if not. \n Can be one of:\n * 'rawdata': raw data, loaded from csv file.\n * 'despiked': despiked data.\n * 'signal'/'background': isolated signal and background data.\n Created by self.separate, after signal and background\n regions have been identified by self.autorange.\n * 'bkgsub': background subtracted data, created by \n self.bkg_correct\n * 'ratios': element ratio data, created by self.ratio.\n * 'calibrated': ratio data calibrated to standards, created by self.calibrate."}
{"_id": "q_13380", "text": "Background calculation using a 1D interpolation.\n\n scipy.interpolate.interp1D is used for interpolation.\n\n Parameters\n ----------\n analytes : str or iterable\n Which analyte or analytes to calculate.\n kind : str or int\n Integer specifying the order of the spline interpolation\n used, or string specifying a type of interpolation.\n Passed to `scipy.interpolate.interp1D`\n n_min : int\n Background regions with fewer than n_min points\n will not be included in the fit.\n cstep : float or None\n The interval between calculated background points.\n filter : bool\n If true, apply a rolling filter to the isolated background regions\n to exclude regions with anomalously high values. If True, two parameters\n alter the filter's behaviour:\n f_win : int\n The size of the rolling window\n f_n_lim : float\n The number of standard deviations above the rolling mean\n to set the threshold.\n focus_stage : str\n Which stage of analysis to apply processing to. \n Defaults to 'despiked' if present, or 'rawdata' if not. \n Can be one of:\n * 'rawdata': raw data, loaded from csv file.\n * 'despiked': despiked data.\n * 'signal'/'background': isolated signal and background data.\n Created by self.separate, after signal and background\n regions have been identified by self.autorange.\n * 'bkgsub': background subtracted data, created by \n self.bkg_correct\n * 'ratios': element ratio data, created by self.ratio.\n * 'calibrated': ratio data calibrated to standards, created by self.calibrate."}
{"_id": "q_13381", "text": "Subtract calculated background from data.\n\n Must run bkg_calc first!\n\n Parameters\n ----------\n analytes : str or iterable\n Which analyte(s) to subtract.\n errtype : str\n Which type of error to propagate. default is 'stderr'.\n focus_stage : str\n Which stage of analysis to apply processing to. \n Defaults to 'despiked' if present, or 'rawdata' if not. \n Can be one of:\n * 'rawdata': raw data, loaded from csv file.\n * 'despiked': despiked data.\n * 'signal'/'background': isolated signal and background data.\n Created by self.separate, after signal and background\n regions have been identified by self.autorange.\n * 'bkgsub': background subtracted data, created by \n self.bkg_correct\n * 'ratios': element ratio data, created by self.ratio.\n * 'calibrated': ratio data calibrated to standards, created by self.calibrate."}
{"_id": "q_13382", "text": "Calculates the ratio of all analytes to a single analyte.\n\n Parameters\n ----------\n internal_standard : str\n The name of the analyte to divide all other analytes\n by.\n\n Returns\n -------\n None"}
{"_id": "q_13383", "text": "Creates a subset of samples, which can be treated independently.\n\n Parameters\n ----------\n samples : str or array - like\n Name of sample, or list of sample names.\n name : (optional) str or number\n The name of the sample group. Defaults to n + 1, where n is\n the highest existing group number"}
{"_id": "q_13384", "text": "Calculate a gradient threshold filter to the data.\n\n Generates two filters above and below the threshold value for a\n given analyte.\n\n Parameters\n ----------\n analyte : str\n The analyte that the filter applies to.\n win : int\n The window over which to calculate the moving gradient\n percentiles : float or iterable of len=2\n The percentile values.\n filt : bool\n Whether or not to apply existing filters to the data before\n calculating this filter.\n samples : array_like or None\n Which samples to apply this filter to. If None, applies to all\n samples.\n subset : str or number\n The subset of samples (defined by make_subset) you want to apply\n the filter to.\n\n Returns\n -------\n None"}
{"_id": "q_13385", "text": "Create a clustering classifier based on all samples, or a subset.\n\n Parameters\n ----------\n name : str\n The name of the classifier.\n analytes : str or iterable\n Which analytes the clustering algorithm should consider.\n method : str\n Which clustering algorithm to use. Can be:\n\n 'meanshift'\n The `sklearn.cluster.MeanShift` algorithm.\n Automatically determines number of clusters\n in data based on the `bandwidth` of expected\n variation.\n 'kmeans'\n The `sklearn.cluster.KMeans` algorithm. Determines\n the characteristics of a known number of clusters\n within the data. Must provide `n_clusters` to specify\n the expected number of clusters.\n samples : iterable\n list of samples to consider. Overrides 'subset'.\n subset : str\n The subset of samples used to fit the classifier. Ignored if\n 'samples' is specified.\n sort_by : int\n Which analyte the resulting clusters should be sorted\n by - defaults to 0, which is the first analyte.\n **kwargs :\n method-specific keyword parameters - see below.\n Meanshift Parameters\n bandwidth : str or float\n The bandwidth (float) or bandwidth method ('scott' or 'silverman')\n used to estimate the data bandwidth.\n bin_seeding : bool\n Modifies the behaviour of the meanshift algorithm. Refer to\n sklearn.cluster.meanshift documentation.\n K - Means Parameters\n n_clusters : int\n The number of clusters expected in the data.\n\n Returns\n -------\n name : str"}
{"_id": "q_13386", "text": "Apply a clustering classifier based on all samples, or a subset.\n\n Parameters\n ----------\n name : str\n The name of the classifier to apply.\n subset : str\n The subset of samples to apply the classifier to.\n Returns\n -------\n name : str"}
{"_id": "q_13387", "text": "Applies a correlation filter to the data.\n\n Calculates a rolling correlation between every `window` points of\n two analytes, and excludes data where their Pearson's R value is\n above `r_threshold` and statistically significant.\n\n Data will be excluded where their absolute R value is greater than\n `r_threshold` AND the p - value associated with the correlation is\n less than `p_threshold`. i.e. only correlations that are statistically\n significant are considered.\n\n Parameters\n ----------\n x_analyte, y_analyte : str\n The names of the x and y analytes to correlate.\n window : int, None\n The rolling window used when calculating the correlation.\n r_threshold : float\n The correlation index above which to exclude data.\n Note: the absolute pearson R value is considered, so\n negative correlations below -`r_threshold` will also\n be excluded.\n p_threshold : float\n The significant level below which data are excluded.\n filt : bool\n Whether or not to apply existing filters to the data before\n calculating this filter.\n\n Returns\n -------\n None"}
{"_id": "q_13388", "text": "Turns data filters on for particular analytes and samples.\n\n Parameters\n ----------\n filt : optional, str or array_like\n Name, partial name or list of names of filters. Supports\n partial matching. i.e. if 'cluster' is specified, all\n filters with 'cluster' in the name are activated.\n Defaults to all filters.\n analyte : optional, str or array_like\n Name or list of names of analytes. Defaults to all analytes.\n samples : optional, array_like or None\n Which samples to apply this filter to. If None, applies to all\n samples.\n\n Returns\n -------\n None"}
{"_id": "q_13389", "text": "Turns data filters off for particular analytes and samples.\n\n Parameters\n ----------\n filt : optional, str or array_like\n Name, partial name or list of names of filters. Supports\n partial matching. i.e. if 'cluster' is specified, all\n filters with 'cluster' in the name are deactivated.\n Defaults to all filters.\n analyte : optional, str or array_like\n Name or list of names of analytes. Defaults to all analytes.\n samples : optional, array_like or None\n Which samples to apply this filter to. If None, applies to all\n samples.\n\n Returns\n -------\n None"}
{"_id": "q_13390", "text": "Prints the current status of filters for specified samples.\n\n Parameters\n ----------\n sample : str\n Which sample to print.\n subset : str\n Specify a subset\n stds : bool\n Whether or not to include standards."}
{"_id": "q_13391", "text": "Remove 'fragments' from the calculated filter\n\n Parameters\n ----------\n threshold : int\n Contiguous data regions that contain this number\n or fewer points are considered 'fragments'\n mode : str\n Specifies whether to 'include' or 'exclude' the identified\n fragments.\n filt : bool or filt string\n Which filter to apply the defragmenter to. Defaults to True\n samples : array_like or None\n Which samples to apply this filter to. If None, applies to all\n samples.\n subset : str or number\n The subset of samples (defined by make_subset) you want to apply\n the filter to.\n \n Returns\n -------\n None"}
{"_id": "q_13392", "text": "Plot a histogram of the gradients in all samples.\n\n Parameters\n ----------\n filt : str, dict or bool\n Either logical filter expression contained in a str,\n a dict of expressions specifying the filter string to\n use for each analyte or a boolean. Passed to `grab_filt`.\n bins : None or array-like\n The bins to use in the histogram\n samples : str or list\n which samples to get\n subset : str or int\n which subset to get\n recalc : bool\n Whether to re-calculate the gradients, or use existing gradients.\n\n Returns\n -------\n fig, ax"}
{"_id": "q_13393", "text": "Plot analyte gradients against each other.\n\n Parameters\n ----------\n analytes : optional, array_like or str\n The analyte(s) to plot. Defaults to all analytes.\n lognorm : bool\n Whether or not to log normalise the colour scale\n of the 2D histogram.\n bins : int\n The number of bins in the 2D histogram.\n filt : str, dict or bool\n Either logical filter expression contained in a str,\n a dict of expressions specifying the filter string to\n use for each analyte or a boolean. Passed to `grab_filt`.\n figsize : tuple\n Figure size (width, height) in inches.\n save : bool or str\n If True, plot is saved as 'crossplot.png', if str plot is\n saved as str.\n colourful : bool\n Whether or not the plot should be colourful :).\n mode : str\n 'hist2d' (default) or 'scatter'\n recalc : bool\n Whether to re-calculate the gradients, or use existing gradients.\n\n Returns\n -------\n (fig, axes)"}
{"_id": "q_13394", "text": "Plot histograms of analytes.\n\n Parameters\n ----------\n analytes : optional, array_like or str\n The analyte(s) to plot. Defaults to all analytes.\n bins : int\n The number of bins in each histogram (default = 25)\n logy : bool\n If true, y axis is a log scale.\n filt : str, dict or bool\n Either logical filter expression contained in a str,\n a dict of expressions specifying the filter string to\n use for each analyte or a boolean. Passed to `grab_filt`.\n colourful : bool\n If True, histograms are colourful :)\n\n Returns\n -------\n (fig, axes)"}
{"_id": "q_13395", "text": "Plot filter reports for all filters that contain ``filt_str``\n in the name."}
{"_id": "q_13396", "text": "Calculate sample statistics.\n\n Returns samples, analytes, and arrays of statistics\n of shape (samples, analytes). Statistics are calculated\n from the 'focus' data variable, so output depends on how\n the data have been processed.\n\n Included stat functions:\n\n * :func:`~latools.stat_fns.mean`: arithmetic mean\n * :func:`~latools.stat_fns.std`: arithmetic standard deviation\n * :func:`~latools.stat_fns.se`: arithmetic standard error\n * :func:`~latools.stat_fns.H15_mean`: Huber mean (outlier removal)\n * :func:`~latools.stat_fns.H15_std`: Huber standard deviation (outlier removal)\n * :func:`~latools.stat_fns.H15_se`: Huber standard error (outlier removal)\n\n Parameters\n ----------\n analytes : optional, array_like or str\n The analyte(s) to calculate statistics for. Defaults to\n all analytes.\n filt : str, dict or bool\n Either logical filter expression contained in a str,\n a dict of expressions specifying the filter string to\n use for each analyte or a boolean. Passed to `grab_filt`.\n stats : array_like\n list of functions or names (see above) or functions that\n take a single array_like input, and return a single statistic.\n Function should be able to cope with NaN values.\n eachtrace : bool\n Whether to calculate the statistics for each analysis\n spot individually, or to produce per - sample means.\n Default is True.\n\n Returns\n -------\n None\n Adds dict to analyse object containing samples, analytes and\n functions and data."}
{"_id": "q_13397", "text": "Return pandas dataframe of all sample statistics."}
{"_id": "q_13398", "text": "Used for exporting minimal dataset. DON'T USE."}
{"_id": "q_13399", "text": "Function to export raw data.\n\n Parameters\n ----------\n outdir : str\n directory to save the traces. Defaults to 'main-dir-name_export'.\n focus_stage : str\n The name of the analysis stage to export.\n\n * 'rawdata': raw data, loaded from csv file.\n * 'despiked': despiked data.\n * 'signal'/'background': isolated signal and background data.\n Created by self.separate, after signal and background\n regions have been identified by self.autorange.\n * 'bkgsub': background subtracted data, created by \n self.bkg_correct\n * 'ratios': element ratio data, created by self.ratio.\n * 'calibrated': ratio data calibrated to standards, created by self.calibrate.\n\n Defaults to the most recent stage of analysis.\n analytes : str or array - like\n Either a single analyte, or list of analytes to export.\n Defaults to all analytes.\n samples : str or array - like\n Either a single sample name, or list of samples to export.\n Defaults to all samples.\n filt : str, dict or bool\n Either logical filter expression contained in a str,\n a dict of expressions specifying the filter string to\n use for each analyte or a boolean. Passed to `grab_filt`."}
{"_id": "q_13400", "text": "Save analysis.lalog in specified location"}
{"_id": "q_13401", "text": "Exports analysis parameters, standard info and a minimal dataset,\n which can be imported by another user.\n\n Parameters\n ----------\n target_analytes : str or iterable\n Which analytes to include in the export. If specified, the export\n will contain these analytes, and all other analytes used during\n data processing (e.g. during filtering). If not specified, \n all analytes are exported.\n path : str\n Where to save the minimal export. \n If it ends with .zip, a zip file is created.\n If it's a folder, all data are exported to a folder."}
{"_id": "q_13402", "text": "Plot a fitted PCA, and all components."}
{"_id": "q_13403", "text": "Apply standard deviation filter to remove anomalous values.\n\n Parameters\n ----------\n win : int\n The window used to calculate rolling statistics.\n nlim : float\n The number of standard deviations above the rolling\n mean above which data are considered outliers.\n\n Returns\n -------\n None"}
{"_id": "q_13404", "text": "Apply exponential decay filter to remove physically impossible data based on instrumental washout.\n\n The filter is re-applied until no more points are removed, or maxiter is reached.\n\n Parameters\n ----------\n exponent : float\n Exponent used in filter\n tstep : float\n The time increment between data points.\n maxiter : int\n The maximum number of times the filter should be applied.\n\n Returns\n -------\n None"}
{"_id": "q_13405", "text": "Add filter.\n\n Parameters\n ----------\n name : str\n filter name\n filt : array_like\n boolean filter array\n info : str\n informative description of the filter\n params : tuple\n parameters used to make the filter\n\n Returns\n -------\n None"}
{"_id": "q_13406", "text": "Remove filter.\n\n Parameters\n ----------\n name : str\n name of the filter to remove\n setn : int or True\n int: number of set to remove\n True: remove all filters in set that 'name' belongs to\n\n Returns\n -------\n None"}
{"_id": "q_13407", "text": "Clear all filters."}
{"_id": "q_13408", "text": "Remove unused filters."}
{"_id": "q_13409", "text": "Identify a filter by fuzzy string matching.\n\n Partial ('fuzzy') matching performed by `fuzzywuzzy.fuzzy.ratio`\n\n Parameters\n ----------\n fuzzkey : str\n A string that partially matches one filter name more than the others.\n\n Returns\n -------\n The name of the most closely matched filter. : str"}
{"_id": "q_13410", "text": "Flexible access to specific filter using any key format.\n\n Parameters\n ----------\n f : str, dict or bool\n either logical filter expression, dict of expressions,\n or a boolean\n analyte : str\n name of analyte the filter is for.\n\n Returns\n -------\n array_like\n boolean filter"}
{"_id": "q_13411", "text": "Write an analysis log to a file.\n\n Parameters\n ----------\n log : list\n latools.analyse analysis log\n header : list\n File header lines.\n file_name : str\n Destination file. If no file extension\n specified, uses '.lalog'\n\n Returns\n -------\n None"}
{"_id": "q_13412", "text": "Reads an latools analysis.log file, and returns dicts of arguments.\n\n Parameters\n ----------\n log_file : str\n Path to an analysis.log file produced by latools.\n \n Returns\n -------\n runargs, paths : tuple\n Two dictionaries. runargs contains all the arguments required to run each step\n of analysis in the form (function_name, {'args': (), 'kwargs': {}}). paths contains\n the locations of the data directory and the SRM database used for analysis."}
{"_id": "q_13413", "text": "Append the item to the metadata."}
{"_id": "q_13414", "text": "Append the items to the metadata."}
{"_id": "q_13415", "text": "Construct a regular polygon.\n\n Parameters\n ----------\n center : array-like\n radius : float\n n_vertices : int\n start_angle : float, optional\n Where to put the first point, relative to `center`,\n in radians counter-clockwise starting from the horizontal axis.\n kwargs\n Other keyword arguments are passed to the |Shape| constructor."}
{"_id": "q_13416", "text": "Flip the shape in the y direction, in-place.\n\n Parameters\n ----------\n center : array-like, optional\n Point about which to flip.\n If not passed, the center of the shape will be used."}
{"_id": "q_13417", "text": "Flip the shape in an arbitrary direction.\n\n Parameters\n ----------\n angle : array-like\n The angle, in radians counter-clockwise from the horizontal axis,\n defining the angle about which to flip the shape (of a line through `center`).\n center : array-like, optional\n The point about which to flip.\n If not passed, the center of the shape will be used."}
{"_id": "q_13418", "text": "Draw the shape in the current OpenGL context."}
{"_id": "q_13419", "text": "Map the official Haystack timezone list to those recognised by pytz."}
{"_id": "q_13420", "text": "Retrieve the Haystack timezone"}
{"_id": "q_13421", "text": "Detect the version used from the row content, or validate against\n the version if given."}
{"_id": "q_13422", "text": "Assert that the grid version is equal to or above the given value.\n If no version is set, set the version."}
{"_id": "q_13423", "text": "Retrieve the official version nearest the one given."}
{"_id": "q_13424", "text": "Decorator that will try to login and redo an action before failing."}
{"_id": "q_13425", "text": "Example of printing the inbox."}
{"_id": "q_13426", "text": "Example of sending a message."}
{"_id": "q_13427", "text": "The string for creating a code example for the gallery"}
{"_id": "q_13428", "text": "The code example out of the notebook metadata"}
{"_id": "q_13429", "text": "The url on jupyter nbviewer for this notebook or None if unknown"}
{"_id": "q_13430", "text": "get the output file with the specified `ending`"}
{"_id": "q_13431", "text": "Create the python script from the notebook node"}
{"_id": "q_13432", "text": "Scales an image with the same aspect ratio centered in an\n image with a given max_width and max_height\n if in_fname == out_fname the image can only be scaled down"}
{"_id": "q_13433", "text": "Save the thumbnail image"}
{"_id": "q_13434", "text": "The integer of the thumbnail figure"}
{"_id": "q_13435", "text": "Makes parsing arguments a function."}
{"_id": "q_13436", "text": "Uploads selected file to the host, thanks to the fact that\n every pomf.se based site has pretty much the same architecture."}
{"_id": "q_13437", "text": "Creates a Swagger definition from a colander schema.\n\n :param schema_node:\n Colander schema to be transformed into a Swagger definition.\n :param base_name:\n Schema alternative title.\n\n :rtype: dict\n :returns: Swagger schema."}
{"_id": "q_13438", "text": "Creates a list of Swagger params from a colander request schema.\n\n :param schema_node:\n Request schema to be transformed into Swagger.\n :param validators:\n Validators used in colander with the schema.\n\n :rtype: list\n :returns: List of Swagger parameters."}
{"_id": "q_13439", "text": "Create a list of Swagger path params from a cornice service path.\n\n :type path: string\n :rtype: list"}
{"_id": "q_13440", "text": "Creates a Swagger response object from a dict of response schemas.\n\n :param schema_mapping:\n Dict with entries matching ``{status_code: response_schema}``.\n :rtype: dict\n :returns: Response schema."}
{"_id": "q_13441", "text": "Store a response schema and return a reference to it.\n\n :param schema:\n Swagger response definition.\n :param base_name:\n Name that should be used for the reference.\n\n :rtype: dict\n :returns: JSON pointer to the original response definition."}
{"_id": "q_13442", "text": "Generate a Swagger 2.0 documentation. Keyword arguments may be used\n to provide additional information to build methods, such as ignores.\n\n :param title:\n The name presented on the swagger document.\n :param version:\n The version of the API presented on the swagger document.\n :param base_path:\n The path that all requests to the API must refer to.\n :param info:\n Swagger info field.\n :param swagger:\n Extra fields that should be provided on the swagger documentation.\n\n :rtype: dict\n :returns: Full OpenAPI/Swagger compliant specification for the application."}
{"_id": "q_13443", "text": "Build the Swagger \"paths\" and \"tags\" attributes from cornice service\n definitions."}
{"_id": "q_13444", "text": "Extract path object and its parameters from service definitions.\n\n :param service:\n Cornice service to extract information from.\n\n :rtype: dict\n :returns: Path definition."}
{"_id": "q_13445", "text": "Convert node schema into a parameter object."}
{"_id": "q_13446", "text": "Merge b into a recursively, without overwriting values.\n\n :param base: the dict that will be altered.\n :param changes: changes to update base."}
{"_id": "q_13447", "text": "Get only the changed database fields."}
{"_id": "q_13448", "text": "When accessing the name of the field itself, the value\n in the current language will be returned. If it is not set,\n the value in the default language will be returned."}
{"_id": "q_13449", "text": "Post processors are functions that receive file objects,\n performs necessary operations and return the results as file objects."}
{"_id": "q_13450", "text": "Process the source image through the defined processors."}
{"_id": "q_13451", "text": "Return all thumbnails in a dict format."}
{"_id": "q_13452", "text": "Creates and return a thumbnail of a given size."}
{"_id": "q_13453", "text": "Deletes a thumbnail of a given size"}
{"_id": "q_13454", "text": "Returns a Thumbnail instance, or None if thumbnail does not yet exist."}
{"_id": "q_13455", "text": "Simulate an incoming message\n\n :type src: str\n :param src: Message source\n :type body: str | unicode\n :param body: Message body\n :rtype: IncomingMessage"}
{"_id": "q_13456", "text": "Register a virtual subscriber which receives messages to the matching number.\n\n :type number: str\n :param number: Subscriber phone number\n :type callback: callable\n :param callback: A callback(OutgoingMessage) which handles the messages directed to the subscriber.\n The message object is augmented with the .reply(str) method which allows to send a reply easily!\n :rtype: LoopbackProvider"}
{"_id": "q_13457", "text": "Register a provider on the gateway\n\n The first provider defined becomes the default one: used in case the routing function has no better idea.\n\n :type name: str\n :param name: Provider name that will be used to uniquely identify it\n :type Provider: type\n :param Provider: Provider class that inherits from `smsframework.IProvider`\n :param config: Provider configuration. Please refer to the Provider documentation.\n :rtype: IProvider\n :returns: The created provider"}
{"_id": "q_13458", "text": "Send a message object\n\n :type message: data.OutgoingMessage\n :param message: The message to send\n :rtype: data.OutgoingMessage\n :returns: The sent message with populated fields\n :raises AssertionError: wrong provider name encountered (returned by the router, or provided to OutgoingMessage)\n :raises MessageSendError: generic errors\n :raises AuthError: provider authentication failed\n :raises LimitsError: sending limits exceeded\n :raises CreditError: not enough money on the account"}
{"_id": "q_13459", "text": "Get a Flask blueprint for the named provider that handles incoming messages & status reports\n\n Note: this requires Flask microframework.\n\n :rtype: flask.blueprints.Blueprint\n :returns: Flask Blueprint, fully functional\n :raises KeyError: provider not found\n :raises NotImplementedError: Provider does not implement a receiver"}
{"_id": "q_13460", "text": "Get Flask blueprints for every provider that supports it\n\n Note: this requires Flask microframework.\n\n :rtype: dict\n :returns: A dict { provider-name: Blueprint }"}
{"_id": "q_13461", "text": "Incoming message callback\n\n Calls Gateway.onReceive event hook\n\n Providers are required to:\n * Cast phone numbers to digits-only\n * Support both ASCII and Unicode messages\n * Populate `message.msgid` and `message.meta` fields\n * If this method fails with an exception, the provider is required to respond with an error to the service\n\n :type message: IncomingMessage\n :param message: The received message\n :rtype: IncomingMessage"}
{"_id": "q_13462", "text": "Incoming status callback\n\n Calls Gateway.onStatus event hook\n\n Providers are required to:\n * Cast phone numbers to digits-only\n * Use proper MessageStatus subclasses\n * Populate `status.msgid` and `status.meta` fields\n * If this method fails with an exception, the provider is required to respond with an error to the service\n\n :type status: MessageStatus\n :param status: The received status\n :rtype: MessageStatus"}
{"_id": "q_13463", "text": "Create a viewset method for the provided `transition_name`"}
{"_id": "q_13464", "text": "Find all transitions defined on `model`, then create a corresponding\n viewset action method for each and apply it to `Mixin`. Finally, return\n `Mixin`"}
{"_id": "q_13465", "text": "View wrapper for JsonEx responses. Catches exceptions as well"}
{"_id": "q_13466", "text": "Forward an object to clients.\n\n :param obj: The object to be forwarded\n :type obj: smsframework.data.IncomingMessage|smsframework.data.MessageStatus\n :raises Exception: if any of the clients failed"}
{"_id": "q_13467", "text": "Initializer for Sphinx extension API.\n\n See http://www.sphinx-doc.org/en/stable/extdev/index.html#dev-extensions."}
{"_id": "q_13468", "text": "Signed transaction that is compatible with `w3.eth.sendRawTransaction`.\n Not used because the `pyEthereum` implementation of Transaction was found to be more\n robust regarding invalid signatures"}
{"_id": "q_13469", "text": "Estimate tx gas using web3"}
{"_id": "q_13470", "text": "Return a dict of stats."}
{"_id": "q_13471", "text": "Return a dict of md status defined in the line."}
{"_id": "q_13472", "text": "Return a dict of components in the line.\n\n key: device name (ex: 'sdc1')\n value: device role number"}
{"_id": "q_13473", "text": "Returns True if state was successfully changed from idle to scheduled."}
{"_id": "q_13474", "text": "Get statistics."}
{"_id": "q_13475", "text": "Search for the oldest event timestamp."}
{"_id": "q_13476", "text": "Get last aggregation date."}
{"_id": "q_13477", "text": "Format range filter datetime to the closest aggregation interval."}
{"_id": "q_13478", "text": "Aggregate and return dictionary to be indexed in ES."}
{"_id": "q_13479", "text": "Calculate statistics aggregations."}
{"_id": "q_13480", "text": "Delete aggregation documents."}
{"_id": "q_13481", "text": "Appends towrite to the write queue\n\n >>> await test.write(b\"HELLO\")\n # Returns without wait time\n >>> await test.write(b\"HELLO\", await_blocking = True)\n # Returns when the buffer is flushed\n\n :param towrite: Write buffer\n :param await_blocking: wait for everything to be written"}
{"_id": "q_13482", "text": "Return value on success, or raise exception on failure."}
{"_id": "q_13483", "text": "Process stats events."}
{"_id": "q_13484", "text": "Delete computed aggregations."}
{"_id": "q_13485", "text": "Load events configuration."}
{"_id": "q_13486", "text": "Load queries configuration."}
{"_id": "q_13487", "text": "Consume all pending events."}
{"_id": "q_13488", "text": "Send a message to this actor. Asynchronous fire-and-forget.\n\n :param message: The message to send.\n :type message: Any\n\n :param sender: The sender of the message. If provided it will be made\n available to the receiving actor via the :attr:`Actor.sender` attribute.\n :type sender: :class:`Actor`"}
{"_id": "q_13489", "text": "Get the anonymization salt based on the event timestamp's day."}
{"_id": "q_13490", "text": "Lookup country for IP address."}
{"_id": "q_13491", "text": "User information.\n\n .. note::\n\n **Privacy note** A user's IP address, user agent string, and user id\n (if logged in) are sent to a message queue, where they are stored for about\n 5 minutes. The information is used to:\n\n - Detect robot visits from the user agent string.\n - Generate an anonymized visitor id (using a random salt per day).\n - Detect the user's host country based on the IP address.\n\n The information is then discarded."}
{"_id": "q_13492", "text": "Aggregate indexed events."}
{"_id": "q_13493", "text": "Run the actual request"}
{"_id": "q_13494", "text": "Preprocess an event by anonymizing user information.\n\n The anonymization is done by removing fields that can uniquely identify a\n user, such as the user's ID, session ID, IP address and User Agent, and\n hashing them to produce a ``visitor_id`` and ``unique_session_id``. To\n further secure the method, a randomly generated 32-byte salt is used, that\n expires after 24 hours and is discarded. The salt values are stored in\n Redis (or whichever backend Invenio-Cache uses). The ``unique_session_id``\n is calculated in the same way as the ``visitor_id``, with the only\n difference that it also takes into account the hour of the event. All of\n these rules effectively mean that a user can have a unique ``visitor_id``\n for each day and unique ``unique_session_id`` for each hour of a day.\n\n This session ID generation process was designed according to the `Project\n COUNTER Code of Practice <https://www.projectcounter.org/code-of-\n practice-sections/general-information/>`_.\n\n In addition, the country of the user is extracted from the IP\n address as an ISO 3166-1 alpha-2 two-letter country code (e.g. \"CH\" for\n Switzerland)."}
{"_id": "q_13495", "text": "Generate event id, optimized for ES."}
{"_id": "q_13496", "text": "Reads one line\n\n >>> # Keeps waiting for a linefeed in case there is none in the buffer\n >>> await test.readline()\n\n :returns: bytes forming a line"}
{"_id": "q_13497", "text": "Register sample events."}
{"_id": "q_13498", "text": "Register queries."}
{"_id": "q_13499", "text": "Extract date from string if necessary.\n\n :returns: the extracted date."}
{"_id": "q_13500", "text": "Run the query."}
{"_id": "q_13501", "text": "Build a file-download event."}
{"_id": "q_13502", "text": "Verifies and sends message.\n\n :param message: Message instance.\n :param envelope_from: Email address to be used in MAIL FROM command."}
{"_id": "q_13503", "text": "Checks for bad headers i.e. newlines in subject, sender or recipients."}
{"_id": "q_13504", "text": "Checks if the user is rooted."}
{"_id": "q_13505", "text": "Register resources with the ResourceManager."}
{"_id": "q_13506", "text": "Raises an exception if value for ``key`` is empty."}
{"_id": "q_13507", "text": "Teardown a Resource or Middleware."}
{"_id": "q_13508", "text": "Hook to setup this service with a specific DataManager.\n\n Will recursively setup sub-services."}
{"_id": "q_13509", "text": "Build a DataFrame to show the overlap between different sub-graphs.\n\n Options:\n 1. Total number of edges overlap (intersection)\n 2. Percentage overlap (tanimoto similarity)\n\n :param graph: A BEL graph\n :param annotation: The annotation to group by and compare. Defaults to 'Subgraph'\n :return: {subgraph: set of edges}, {(subgraph 1, subgraph2): set of intersecting edges},\n {(subgraph 1, subgraph2): set of unioned edges}, {(subgraph 1, subgraph2): tanimoto similarity}"}
{"_id": "q_13510", "text": "Calculate the subgraph similarity tanimoto similarity in nodes passing the given filter.\n\n Provides an alternate view on subgraph similarity, from a more node-centric view"}
{"_id": "q_13511", "text": "Remove nodes with the given function and namespace.\n\n This might be useful to exclude information learned about distant species, such as excluding all information\n from MGI and RGD in diseases where mice and rats don't give much insight to the human disease mechanism."}
{"_id": "q_13512", "text": "Preprocess the excel sheet\n\n :param filepath: filepath of the excel data\n :return: df: pandas dataframe with excel data\n :rtype: pandas.DataFrame"}
{"_id": "q_13513", "text": "Find pairs of nodes that have mutual causal edges that are regulating each other such that ``A -> B`` and\n ``B -| A``.\n\n :return: A set of pairs of nodes with mutual causal edges"}
{"_id": "q_13514", "text": "Find pairs of nodes that have mutual causal edges that are increasing each other such that ``A -> B`` and\n ``B -> A``.\n\n :return: A set of pairs of nodes with mutual causal edges"}
{"_id": "q_13515", "text": "Extract an undirected graph of only correlative relationships."}
{"_id": "q_13516", "text": "Return a set of all triangles pointed by the given node."}
{"_id": "q_13517", "text": "Yield all triples of nodes A, B, C such that ``A pos B``, ``A pos C``, and ``B neg C``.\n\n :return: An iterator over triples of unstable graphs, where the second two are negative"}
{"_id": "q_13518", "text": "Summarize the stability of the graph."}
{"_id": "q_13519", "text": "Flattens the complex or composite abundance."}
{"_id": "q_13520", "text": "Flatten list abundances."}
{"_id": "q_13521", "text": "Expand all list abundances to simple subject-predicate-object networks."}
{"_id": "q_13522", "text": "Helper to deal with cartesian expansion in unqualified edges."}
{"_id": "q_13523", "text": "Return nodes that are both in reactants and reactions in a reaction."}
{"_id": "q_13524", "text": "Prepare a citation data dictionary from a graph.\n\n :return: A dictionary of dictionaries {citation type: {(source, target): citation reference}}"}
{"_id": "q_13525", "text": "Counts the citations in a graph based on a given filter\n\n :param graph: A BEL graph\n :param dict annotations: The annotation filters to use\n :return: A counter from {(citation type, citation reference): frequency}"}
{"_id": "q_13526", "text": "Count the number of publications of each author to the given graph."}
{"_id": "q_13527", "text": "Group the author counters by sub-graphs induced by the annotation.\n\n :param graph: A BEL graph\n :param annotation: The annotation to use to group the graph\n :return: A dictionary of Counters {subgraph name: Counter from {author: frequency}}"}
{"_id": "q_13528", "text": "Get a dictionary from the given PubMed identifiers to the sets of all evidence strings associated with each\n in the graph.\n\n :param graph: A BEL graph\n :param pmids: An iterable of PubMed identifiers, as strings. Is consumed and converted to a set.\n :return: A dictionary of {pmid: set of all evidence strings}\n :rtype: dict"}
{"_id": "q_13529", "text": "Complete the Counter timeline.\n\n :param Counter year_counter: counter dict for each year\n :return: complete timeline"}
{"_id": "q_13530", "text": "Count the confidences in the graph."}
{"_id": "q_13531", "text": "Overwrite all PubMed citations with values from NCBI's eUtils lookup service.\n\n :return: A set of PMIDs for which the eUtils service crashed"}
{"_id": "q_13532", "text": "Update the context of a subgraph from the universe of all knowledge."}
{"_id": "q_13533", "text": "Adds a highlight tag to the given nodes.\n\n :param graph: A BEL graph\n :param nodes: The nodes to add a highlight tag on\n :param color: The color to highlight (use something that works with CSS)"}
{"_id": "q_13534", "text": "Returns if the given node is highlighted.\n\n :param graph: A BEL graph\n :param node: A BEL node\n :type node: tuple\n :return: Does the node contain highlight information?\n :rtype: bool"}
{"_id": "q_13535", "text": "Removes the highlight from the given nodes, or all nodes if none given.\n\n :param graph: A BEL graph\n :param nodes: The list of nodes to un-highlight"}
{"_id": "q_13536", "text": "Remove the highlight from the given edges, or all edges if none given.\n\n :param graph: A BEL graph\n :param edges: The edges (4-tuple of u,v,k,d) to remove the highlight from)\n :type edges: iter[tuple]"}
{"_id": "q_13537", "text": "Get the out-edges to the given node that are causal.\n\n :return: A set of (source, target) pairs where the source is the given node"}
{"_id": "q_13538", "text": "Return a set of all nodes that have an in-degree of 0.\n\n This likely means that it is an external perturbagen and is not known to have any causal origin from within the\n biological system. These nodes are useful to identify because they generally don't provide any mechanistic insight."}
{"_id": "q_13539", "text": "Get a modifications count dictionary."}
{"_id": "q_13540", "text": "Remove all values that are zero."}
{"_id": "q_13541", "text": "Collapse all of the given functions' variants' edges to their parents, in-place."}
{"_id": "q_13542", "text": "Collapse all edges passing the given edge predicates."}
{"_id": "q_13543", "text": "Collapse pairs of nodes with the given namespaces that have orthology relationships.\n\n :param graph: A BEL Graph\n :param victim_namespace: The namespace(s) of the node to collapse\n :param survivor_namespace: The namespace of the node to keep\n\n To collapse all MGI nodes to their HGNC orthologs, use:\n >>> collapse_orthologies_by_namespace('MGI', 'HGNC')\n\n\n To collapse collapse both MGI and RGD nodes to their HGNC orthologs, use:\n >>> collapse_orthologies_by_namespace(['MGI', 'RGD'], 'HGNC')"}
{"_id": "q_13544", "text": "Collapse all equivalence edges away from Entrez. Assumes well formed, 2-way equivalencies."}
{"_id": "q_13545", "text": "Collapse consistent edges together.\n\n .. warning:: This operation doesn't preserve evidences or other annotations"}
{"_id": "q_13546", "text": "Collapse all nodes with the same name, merging namespaces by picking first alphabetical one."}
{"_id": "q_13547", "text": "Output the HBP knowledge graph to the desktop"}
{"_id": "q_13548", "text": "Return if the node is an upstream leaf.\n\n An upstream leaf is defined as a node that has no in-edges, and exactly 1 out-edge."}
{"_id": "q_13549", "text": "Check if the node is both a source and also has an annotation.\n\n :param graph: A BEL graph\n :param node: A BEL node\n :param key: The key in the node data dictionary representing the experimental data"}
{"_id": "q_13550", "text": "Get nodes on the periphery of the sub-graph that do not have a annotation for the given key.\n\n :param graph: A BEL graph\n :param key: The key in the node data dictionary representing the experimental data\n :return: An iterator over BEL nodes that are unannotated and on the periphery of this subgraph"}
{"_id": "q_13551", "text": "Prune unannotated nodes on the periphery of the sub-graph.\n\n :param graph: A BEL graph\n :param key: The key in the node data dictionary representing the experimental data. Defaults to\n :data:`pybel_tools.constants.WEIGHT`."}
{"_id": "q_13552", "text": "Remove all leaves and source nodes that don't have weights.\n\n Is a thin wrapper around :func:`remove_unweighted_leaves` and :func:`remove_unweighted_sources`\n\n :param graph: A BEL graph\n :param key: The key in the node data dictionary representing the experimental data. Defaults to\n :data:`pybel_tools.constants.WEIGHT`.\n\n Equivalent to:\n\n >>> remove_unweighted_leaves(graph)\n >>> remove_unweighted_sources(graph)"}
{"_id": "q_13553", "text": "Generate a mechanistic sub-graph upstream of the given node.\n\n :param graph: A BEL graph\n :param node: A BEL node\n :param key: The key in the node data dictionary representing the experimental data.\n :return: A sub-graph grown around the target BEL node"}
{"_id": "q_13554", "text": "Preprocess the graph, stratify by the given annotation, then run the NeuroMMSig algorithm on each.\n\n :param graph: A BEL graph\n :param genes: A list of gene nodes\n :param annotation: The annotation to use to stratify the graph to subgraphs\n :param ora_weight: The relative weight of the over-enrichment analysis score from\n :py:func:`neurommsig_gene_ora`. Defaults to 1.0.\n :param hub_weight: The relative weight of the hub analysis score from :py:func:`neurommsig_hubs`.\n Defaults to 1.0.\n :param top_percent: The percentage of top genes to use as hubs. Defaults to 5% (0.05).\n :param topology_weight: The relative weight of the topological analysis score from\n :py:func:`neurommsig_topology`. Defaults to 1.0.\n :param preprocess: If true, preprocess the graph.\n :return: A dictionary from {annotation value: NeuroMMSig composite score}\n\n Pre-processing steps:\n\n 1. Infer the central dogma with :func:``\n 2. Collapse all proteins, RNAs and miRNAs to genes with :func:``\n 3. Collapse variants to genes with :func:``"}
{"_id": "q_13555", "text": "Takes a graph stratification and runs neurommsig on each\n\n :param subgraphs: A pre-stratified set of graphs\n :param genes: A list of gene nodes\n :param ora_weight: The relative weight of the over-enrichment analysis score from\n :py:func:`neurommsig_gene_ora`. Defaults to 1.0.\n :param hub_weight: The relative weight of the hub analysis score from :py:func:`neurommsig_hubs`.\n Defaults to 1.0.\n :param top_percent: The percentage of top genes to use as hubs. Defaults to 5% (0.05).\n :param topology_weight: The relative weight of the topological analysis score from\n :py:func:`neurommsig_topology`. Defaults to 1.0.\n :return: A dictionary from {annotation value: NeuroMMSig composite score}\n\n Pre-processing steps:\n\n 1. Infer the central dogma with :func:``\n 2. Collapse all proteins, RNAs and miRNAs to genes with :func:``\n 3. Collapse variants to genes with :func:``"}
{"_id": "q_13556", "text": "Calculate the composite NeuroMMSig Score for a given list of genes.\n\n :param graph: A BEL graph\n :param genes: A list of gene nodes\n :param ora_weight: The relative weight of the over-enrichment analysis score from\n :py:func:`neurommsig_gene_ora`. Defaults to 1.0.\n :param hub_weight: The relative weight of the hub analysis score from :py:func:`neurommsig_hubs`.\n Defaults to 1.0.\n :param top_percent: The percentage of top genes to use as hubs. Defaults to 5% (0.05).\n :param topology_weight: The relative weight of the topological analysis score from\n :py:func:`neurommsig_topology`. Defaults to 1.0.\n :return: The NeuroMMSig composite score"}
{"_id": "q_13557", "text": "Perform a number of runs\n\n The number of runs is the number of seeds\n\n convolution_factors_tasks_iterator needs to be an iterator\n\n We shield the convolution factors tasks from jug value/result mechanism\n by supplying an iterator to the list of tasks for lazy evaluation\n http://github.com/luispedro/jug/blob/43f0d80a78f418fd3aa2b8705eaf7c4a5175fff7/jug/task.py#L100\n http://github.com/luispedro/jug/blob/43f0d80a78f418fd3aa2b8705eaf7c4a5175fff7/jug/task.py#L455"}
{"_id": "q_13558", "text": "Get the set of possible successor edges peripheral to the sub-graph.\n\n The source nodes in this iterable are all inside the sub-graph, while the targets are outside."}
{"_id": "q_13559", "text": "Get the set of possible predecessor edges peripheral to the sub-graph.\n\n The target nodes in this iterable are all inside the sub-graph, while the sources are outside."}
{"_id": "q_13560", "text": "Count the target nodes in an edge iterator with keys and data.\n\n :return: A counter of target nodes in the iterable"}
{"_id": "q_13561", "text": "Gets all edges from a given subgraph whose source and target nodes pass all of the given filters\n\n :param pybel.BELGraph graph: A BEL graph\n :param str annotation: The annotation to search\n :param str value: The annotation value to search by\n :param source_filter: Optional filter for source nodes (graph, node) -> bool\n :param target_filter: Optional filter for target nodes (graph, node) -> bool\n :return: An iterable of (source node, target node, key, data) for all edges that match the annotation/value and\n node filters\n :rtype: iter[tuple]"}
{"_id": "q_13562", "text": "Get a summary dictionary of all peripheral nodes to a given sub-graph.\n\n :return: A dictionary of {external node: {'successor': {internal node: list of (key, dict)},\n 'predecessor': {internal node: list of (key, dict)}}}\n :rtype: dict\n\n For example, it might be useful to quantify the number of predecessors and successors:\n\n >>> from pybel.struct.filters import exclude_pathology_filter\n >>> value = 'Blood vessel dilation subgraph'\n >>> sg = get_subgraph_by_annotation_value(graph, annotation='Subgraph', value=value)\n >>> p = get_subgraph_peripheral_nodes(graph, sg, node_predicates=exclude_pathology_filter)\n >>> for node in sorted(p, key=lambda n: len(set(p[n]['successor']) | set(p[n]['predecessor'])), reverse=True):\n >>> if 1 == len(p[value][node]['successor']) or 1 == len(p[value][node]['predecessor']):\n >>> continue\n >>> print(node,\n >>> len(p[node]['successor']),\n >>> len(p[node]['predecessor']),\n >>> len(set(p[node]['successor']) | set(p[node]['predecessor'])))"}
{"_id": "q_13563", "text": "Adds all of the reactants and products of reactions to the graph."}
{"_id": "q_13564", "text": "Add the reference nodes for all variants of the given function.\n\n :param graph: The target BEL graph to enrich\n :param func: The function by which the subject of each triple is filtered. Defaults to the set of protein, rna,\n mirna, and gene."}
{"_id": "q_13565", "text": "Enrich the sub-graph with the unqualified edges from the graph.\n\n The reason you might want to do this is you induce a sub-graph from the original graph based on an annotation\n filter, but the unqualified edges that don't have annotations that most likely connect elements within your graph\n are not included.\n\n .. seealso::\n\n This function thinly wraps the successive application of the following functions:\n\n - :func:`enrich_complexes`\n - :func:`enrich_composites`\n - :func:`enrich_reactions`\n - :func:`enrich_variants`\n\n Equivalent to:\n\n >>> enrich_complexes(graph)\n >>> enrich_composites(graph)\n >>> enrich_reactions(graph)\n >>> enrich_variants(graph)"}
{"_id": "q_13566", "text": "Edges between entities in the sub-graph that pass the given filters.\n\n :param universe: The full graph\n :param graph: A sub-graph to find the upstream information\n :param edge_predicates: Optional list of edge filter functions (graph, node, node, key, data) -> bool"}
{"_id": "q_13567", "text": "Add causal edges between entities in the sub-graph.\n\n Is an extremely thin wrapper around :func:`expand_internal`.\n\n :param universe: A BEL graph representing the universe of all knowledge\n :param graph: The target BEL graph to enrich with causal relations between contained nodes\n\n Equivalent to:\n\n >>> from pybel_tools.mutation import expand_internal\n >>> from pybel.struct.filters.edge_predicates import is_causal_relation\n >>> expand_internal(universe, graph, edge_predicates=is_causal_relation)"}
{"_id": "q_13568", "text": "Return the dict of the sets of all incorrect names from the given namespace in the graph.\n\n :return: The set of all incorrect names from the given namespace in the graph"}
{"_id": "q_13569", "text": "Group the errors together for analysis of the most frequent error.\n\n :return: A dictionary of {error string: list of line numbers}"}
{"_id": "q_13570", "text": "Count the number of elements in each value of the dictionary."}
{"_id": "q_13571", "text": "What percentage of x is contained within y?\n\n :param set x: A set\n :param set y: Another set\n :return: The percentage of x contained within y"}
{"_id": "q_13572", "text": "Calculate the tanimoto set similarity using the minimum size.\n\n :param set x: A set\n :param set y: Another set\n :return: The similarity between"}
{"_id": "q_13573", "text": "A convenience function for plotting a horizontal bar plot from a Counter"}
{"_id": "q_13574", "text": "Adds an edge while preserving negative keys, and paying no respect to positive ones\n\n :param pybel.BELGraph graph: A BEL Graph\n :param tuple u: The source BEL node\n :param tuple v: The target BEL node\n :param int key: The edge key. If less than zero, corresponds to an unqualified edge, else is disregarded\n :param dict attr_dict: The edge data dictionary\n :param dict attr: Edge data to assign via keyword arguments"}
{"_id": "q_13575", "text": "Prepares C3 JSON for making a bar chart from a Counter\n\n :param data: A dictionary of {str: int} to display as bar chart\n :param y_axis_label: The Y axis label\n :param x_axis_label: X axis internal label. Should be left as default 'x')\n :return: A JSON dictionary for making a C3 bar chart"}
{"_id": "q_13576", "text": "Calculate the betweenness centrality over nodes in the graph.\n\n Tries to do it with a certain number of samples, but then tries a complete approach if it fails."}
{"_id": "q_13577", "text": "Get a canonical representation of the ordered collection by finding its minimum circulation with the\n given sort key"}
{"_id": "q_13578", "text": "Check if a pair of nodes has any contradictions in their causal relationships.\n\n Assumes both nodes are in the graph."}
{"_id": "q_13579", "text": "Return if the set of BEL relations contains a contradiction."}
{"_id": "q_13580", "text": "r'''\n Compute the average number of runs that have a spanning cluster\n\n Helper function for :func:`microcanonical_averages`\n\n Parameters\n ----------\n\n has_spanning_cluster : 1-D :py:class:`numpy.ndarray` of bool\n Each entry is the ``has_spanning_cluster`` field of the output of\n :func:`sample_states`:\n An entry is ``True`` if there is a spanning cluster in that respective\n run, and ``False`` otherwise.\n\n alpha : float\n Significance level.\n\n Returns\n -------\n\n ret : dict\n Spanning cluster statistics\n\n ret['spanning_cluster'] : float\n The average relative number (Binomial proportion) of runs that have a\n spanning cluster.\n This is the Bayesian point estimate of the posterior mean, with a\n uniform prior.\n\n ret['spanning_cluster_ci'] : 1-D :py:class:`numpy.ndarray` of float, size 2\n The lower and upper bounds of the Binomial proportion confidence\n interval with uniform prior.\n\n See Also\n --------\n\n sample_states : spanning cluster detection\n\n microcanonical_averages : spanning cluster statistics\n\n Notes\n -----\n\n Averages and confidence intervals for Binomial proportions\n\n As Cameron [8]_ puts it, the normal approximation to the confidence\n interval for a Binomial proportion :math:`p` \"suffers a *systematic*\n decline in performance (...) towards extreme values of :math:`p` near\n :math:`0` and :math:`1`, generating binomial [confidence intervals]\n with effective coverage far below the desired level.\" (see also\n References [6]_ and [7]_).\n\n A different approach to quantifying uncertainty is Bayesian inference.\n [5]_\n For :math:`n` independent Bernoulli trials with common success\n probability :math:`p`, the *likelihood* to have :math:`k` successes\n given :math:`p` is the binomial distribution\n\n .. math::\n\n P(k|p) = \\binom{n}{k} p^k (1-p)^{n-k} \\equiv B(a,b),\n\n where :math:`B(a, b)` is the *Beta distribution* with parameters\n :math:`a = k + 1` and :math:`b = n - k + 1`.\n Assuming a uniform prior :math:`P(p) = 1`, the *posterior* is [5]_\n\n .. math::\n\n P(p|k) = P(k|p)=B(a,b).\n\n A point estimate is the posterior mean\n\n .. math::\n\n \\bar{p} = \\frac{k+1}{n+2}\n\n with the :math:`1 - \\alpha` credible interval :math:`(p_l, p_u)` given\n by\n\n .. math::\n\n \\int_0^{p_l} dp B(a,b) = \\int_{p_u}^1 dp B(a,b) = \\frac{\\alpha}{2}.\n\n References\n ----------\n\n .. [5] Wasserman, L. All of Statistics (Springer New York, 2004),\n `doi:10.1007/978-0-387-21736-9 <http://dx.doi.org/10.1007/978-0-387-21736-9>`_.\n\n .. [6] DasGupta, A., Cai, T. T. & Brown, L. D. Interval Estimation for a\n Binomial Proportion. Statistical Science 16, 101-133 (2001).\n `doi:10.1214/ss/1009213286 <http://dx.doi.org/10.1214/ss/1009213286>`_.\n\n .. [7] Agresti, A. & Coull, B. A. Approximate is Better than \"Exact\" for\n Interval Estimation of Binomial Proportions. The American Statistician\n 52, 119-126 (1998),\n `doi:10.2307/2685469 <http://dx.doi.org/10.2307/2685469>`_.\n\n .. [8] Cameron, E. On the Estimation of Confidence Intervals for Binomial\n Population Proportions in Astronomy: The Simplicity and Superiority of\n the Bayesian Approach. Publications of the Astronomical Society of\n Australia 28, 128-139 (2011),\n `doi:10.1071/as10046 <http://dx.doi.org/10.1071/as10046>`_."}
{"_id": "q_13581", "text": "r'''\n Generate successive microcanonical percolation ensemble averages\n\n This is a :ref:`generator function <python:tut-generators>` to successively\n add one edge at a time from the graph to the percolation model for a number\n of independent runs in parallel.\n At each iteration, it calculates and returns the averaged cluster\n statistics.\n\n Parameters\n ----------\n graph : networkx.Graph\n The substrate graph on which percolation is to take place\n\n runs : int, optional\n Number of independent runs.\n Defaults to ``40``.\n\n spanning_cluster : bool, optional\n Defaults to ``True``.\n\n model : str, optional\n The percolation model (either ``'bond'`` or ``'site'``).\n Defaults to ``'bond'``.\n\n .. note:: Other models than ``'bond'`` are not supported yet.\n\n alpha: float, optional\n Significance level.\n Defaults to 1 sigma of the normal distribution.\n ``1 - alpha`` is the confidence level.\n\n copy_result : bool, optional\n Whether to return a copy or a reference to the result dictionary.\n Defaults to ``True``.\n\n Yields\n ------\n ret : dict\n Cluster statistics\n\n ret['n'] : int\n Number of occupied bonds\n\n ret['N'] : int\n Total number of sites\n\n ret['M'] : int\n Total number of bonds\n\n ret['spanning_cluster'] : float\n The average number (Binomial proportion) of runs that have a spanning\n cluster.\n This is the Bayesian point estimate of the posterior mean, with a\n uniform prior.\n Only exists if `spanning_cluster` is set to ``True``.\n\n ret['spanning_cluster_ci'] : 1-D :py:class:`numpy.ndarray` of float, size 2\n The lower and upper bounds of the Binomial proportion confidence\n interval with uniform prior.\n Only exists if `spanning_cluster` is set to ``True``.\n\n ret['max_cluster_size'] : float\n Average size of the largest cluster (absolute number of sites)\n\n ret['max_cluster_size_ci'] : 1-D :py:class:`numpy.ndarray` of float, size 2\n Lower and upper bounds of the normal confidence interval of the average\n size of the largest cluster (absolute number of sites)\n\n ret['moments'] : 1-D :py:class:`numpy.ndarray` of float, size 5\n The ``k``-th entry is the average ``k``-th raw moment of the (absolute)\n cluster size distribution, with ``k`` ranging from ``0`` to ``4``.\n\n ret['moments_ci'] : 2-D :py:class:`numpy.ndarray` of float, shape (5,2)\n ``ret['moments_ci'][k]`` are the lower and upper bounds of the normal\n confidence interval of the average ``k``-th raw moment of the\n (absolute) cluster size distribution, with ``k`` ranging from ``0`` to\n ``4``.\n\n Raises\n ------\n ValueError\n If `runs` is not a positive integer\n\n ValueError\n If `alpha` is not a float in the interval (0, 1)\n\n See also\n --------\n\n sample_states\n\n percolate.percolate._microcanonical_average_spanning_cluster\n\n percolate.percolate._microcanonical_average_max_cluster_size\n\n Notes\n -----\n Iterating through this generator corresponds to several parallel runs of\n the Newman-Ziff algorithm.\n Each iteration yields a microcanonical percolation ensemble for the number\n :math:`n` of occupied bonds. [9]_\n The first iteration yields the trivial microcanonical percolation ensemble\n with :math:`n = 0` occupied bonds.\n\n Spanning cluster\n\n .. seealso:: :py:func:`sample_states`\n\n Raw moments of the cluster size distribution\n\n .. seealso:: :py:func:`sample_states`\n\n\n References\n ----------\n .. [9] Newman, M. E. J. & Ziff, R. M. Fast monte carlo algorithm for site\n or bond percolation. Physical Review E 64, 016706+ (2001),\n `doi:10.1103/physreve.64.016706 <http://dx.doi.org/10.1103/physreve.64.016706>`_."}
{"_id": "q_13582", "text": "Generate a linear chain with auxiliary nodes for spanning cluster detection\n\n Parameters\n ----------\n\n length : int\n Number of nodes in the chain, excluding the auxiliary nodes.\n\n Returns\n -------\n\n networkx.Graph\n A linear chain graph with auxiliary nodes for spanning cluster detection\n\n See Also\n --------\n\n sample_states : spanning cluster detection"}
{"_id": "q_13583", "text": "Compile microcanonical averages over all iteration steps into single arrays\n\n Helper function to aggregate the microcanonical averages over all iteration\n steps into single arrays for further processing\n\n Parameters\n ----------\n\n microcanonical_averages : iterable\n Typically, this is the :func:`microcanonical_averages` generator\n\n Returns\n -------\n\n ret : dict\n Aggregated cluster statistics\n\n ret['N'] : int\n Total number of sites\n\n ret['M'] : int\n Total number of bonds\n\n ret['spanning_cluster'] : 1-D :py:class:`numpy.ndarray` of float\n The percolation probability:\n The normalized average number of runs that have a spanning cluster.\n\n ret['spanning_cluster_ci'] : 2-D :py:class:`numpy.ndarray` of float, size 2\n The lower and upper bounds of the percolation probability.\n\n ret['max_cluster_size'] : 1-D :py:class:`numpy.ndarray` of float\n The percolation strength:\n Average relative size of the largest cluster\n\n ret['max_cluster_size_ci'] : 2-D :py:class:`numpy.ndarray` of float\n Lower and upper bounds of the normal confidence interval of the\n percolation strength.\n\n ret['moments'] : 2-D :py:class:`numpy.ndarray` of float, shape (5, M + 1)\n Average raw moments of the (relative) cluster size distribution.\n\n ret['moments_ci'] : 3-D :py:class:`numpy.ndarray` of float, shape (5, M + 1, 2)\n Lower and upper bounds of the normal confidence interval of the raw\n moments of the (relative) cluster size distribution.\n\n See Also\n --------\n\n microcanonical_averages"}
{"_id": "q_13584", "text": "Compute the binomial PMF according to Newman and Ziff\n\n Helper function for :func:`canonical_averages`\n\n See Also\n --------\n\n canonical_averages\n\n Notes\n -----\n\n See Newman & Ziff, Equation (10) [10]_\n\n References\n ----------\n\n .. [10] Newman, M. E. J. & Ziff, R. M. Fast monte carlo algorithm for site\n or bond percolation. Physical Review E 64, 016706+ (2001),\n `doi:10.1103/physreve.64.016706 <http://dx.doi.org/10.1103/physreve.64.016706>`_."}
{"_id": "q_13585", "text": "Compute the canonical cluster statistics from microcanonical statistics\n\n This is according to Newman and Ziff, Equation (2).\n Note that we also simply average the bounds of the confidence intervals\n according to this formula.\n\n Parameters\n ----------\n\n ps : iterable of float\n Each entry is a probability for which to form the canonical ensemble\n and compute the weighted statistics from the microcanonical statistics\n\n microcanonical_averages_arrays\n Typically the output of :func:`microcanonical_averages_arrays`\n\n Returns\n -------\n\n ret : dict\n Canonical ensemble cluster statistics\n\n ret['ps'] : iterable of float\n The parameter `ps`\n\n ret['N'] : int\n Total number of sites\n\n ret['M'] : int\n Total number of bonds\n\n ret['spanning_cluster'] : 1-D :py:class:`numpy.ndarray` of float\n The percolation probability:\n The normalized average number of runs that have a spanning cluster.\n\n ret['spanning_cluster_ci'] : 2-D :py:class:`numpy.ndarray` of float, size 2\n The lower and upper bounds of the percolation probability.\n\n ret['max_cluster_size'] : 1-D :py:class:`numpy.ndarray` of float\n The percolation strength:\n Average relative size of the largest cluster\n\n ret['max_cluster_size_ci'] : 2-D :py:class:`numpy.ndarray` of float\n Lower and upper bounds of the normal confidence interval of the\n percolation strength.\n\n ret['moments'] : 2-D :py:class:`numpy.ndarray` of float, shape (5, M + 1)\n Average raw moments of the (relative) cluster size distribution.\n\n ret['moments_ci'] : 3-D :py:class:`numpy.ndarray` of float, shape (5, M + 1, 2)\n Lower and upper bounds of the normal confidence interval of the raw\n moments of the (relative) cluster size distribution.\n\n See Also\n --------\n\n microcanonical_averages\n\n microcanonical_averages_arrays"}
{"_id": "q_13586", "text": "Helper function to compute percolation statistics\n\n See Also\n --------\n\n canonical_averages\n\n microcanonical_averages\n\n sample_states"}
{"_id": "q_13587", "text": "Calculate the final effect of the root node to the sink node in the path.\n\n :param pybel.BELGraph graph: A BEL graph\n :param list path: Path from root to sink node\n :param dict relationship_dict: dictionary with relationship effects\n :rtype: Effect"}
{"_id": "q_13588", "text": "Return the highest ranked edge from a multiedge.\n\n :param dict edges: dictionary with all edges between two nodes\n :param dict edge_ranking: A dictionary of {relationship: score}\n :return: Highest ranked edge\n :rtype: tuple: (edge id, relation, score given ranking)"}
{"_id": "q_13589", "text": "Group the nodes occurring in edges by the given annotation, with a node filter applied.\n\n :param graph: A BEL graph\n :param node_predicates: A predicate or list of predicates (graph, node) -> bool\n :param annotation: The annotation to use for grouping\n :return: A dictionary of {annotation value: set of nodes}"}
{"_id": "q_13590", "text": "Make an expand function that's bound to the manager."}
{"_id": "q_13591", "text": "Make a delete function that's bound to the manager."}
{"_id": "q_13592", "text": "Build an adjacency matrix for each KEGG relationship and return in a dictionary.\n\n :param nodes: A set of HGNC gene symbols\n :return: Dictionary of adjacency matrix for each relationship"}
{"_id": "q_13593", "text": "Populate the adjacency matrix."}
{"_id": "q_13594", "text": "Export a SPIA data dictionary into an Excel sheet at the given path.\n\n .. note::\n\n # The R import should add the values:\n # [\"nodes\"] from the columns\n # [\"title\"] from the name of the file\n # [\"NumberOfReactions\"] set to \"0\""}
{"_id": "q_13595", "text": "Export a SPIA data dictionary into a directory as several TSV documents."}
{"_id": "q_13596", "text": "Load and pre-process a differential gene expression data.\n\n :param path: The path to the CSV\n :param gene_symbol_column: The header of the gene symbol column in the data frame\n :param logfc_column: The header of the log-fold-change column in the data frame\n :param aggregator: A function that aggregates a list of differential gene expression values. Defaults to\n :func:`numpy.median`. Could also use: :func:`numpy.mean`, :func:`numpy.average`,\n :func:`numpy.min`, or :func:`numpy.max`\n :return: A dictionary of {gene symbol: log fold change}"}
{"_id": "q_13597", "text": "Merges namespaces from multiple locations to one.\n\n :param iter input_locations: An iterable of URLs or file paths pointing to BEL namespaces.\n :param str output_path: The path to the file to write the merged namespace\n :param str namespace_name: The namespace name\n :param str namespace_keyword: Preferred BEL Keyword, maximum length of 8\n :param str namespace_domain: One of: :data:`pybel.constants.NAMESPACE_DOMAIN_BIOPROCESS`,\n :data:`pybel.constants.NAMESPACE_DOMAIN_CHEMICAL`,\n :data:`pybel.constants.NAMESPACE_DOMAIN_GENE`, or\n :data:`pybel.constants.NAMESPACE_DOMAIN_OTHER`\n :param str author_name: The namespace's authors\n :param str citation_name: The name of the citation\n :param str namespace_query_url: HTTP URL to query for details on namespace values (must be valid URL)\n :param str namespace_description: Namespace description\n :param str namespace_species: Comma-separated list of species taxonomy id's\n :param str namespace_version: Namespace version\n :param str namespace_created: Namespace public timestamp, ISO 8601 datetime\n :param str author_contact: Namespace author's contact info/email address\n :param str author_copyright: Namespace's copyright/license information\n :param str citation_description: Citation description\n :param str citation_url: URL to more citation information\n :param str citation_version: Citation version\n :param str citation_date: Citation publish timestamp, ISO 8601 Date\n :param bool case_sensitive: Should this config file be interpreted as case-sensitive?\n :param str delimiter: The delimiter between names and labels in this config file\n :param bool cacheable: Should this config file be cached?\n :param functions: The encoding for the elements in this namespace\n :type functions: iterable of characters\n :param str value_prefix: a prefix for each name\n :param sort_key: A function to sort the values with :func:`sorted`\n :param bool check_keywords: Should all the keywords be the same? Defaults to ``True``"}
{"_id": "q_13598", "text": "Run the reverse causal reasoning algorithm on a graph.\n\n Steps:\n\n 1. Get all downstream controlled things into map (that have at least 4 downstream things)\n 2. calculate population of all things that are downstream controlled\n\n .. note:: Assumes all nodes have been pre-tagged with data\n\n :param pybel.BELGraph graph:\n :param str tag: The key for the nodes' data dictionaries that corresponds to the integer value for its differential\n expression."}
{"_id": "q_13599", "text": "Exports all names and missing names from the given namespace to its own BEL Namespace files in the given\n directory.\n\n Could be useful during quick and dirty curation, where planned namespace building is not a priority.\n\n :param pybel.BELGraph graph: A BEL graph\n :param str namespace: The namespace to process\n :param str directory: The path to the directory where to output the namespace. Defaults to the current working\n directory returned by :func:`os.getcwd`\n :param bool cacheable: Should the namespace be cacheable? Defaults to ``False`` because, in general, this operation\n will probably be used for evil, and users won't want to reload their entire cache after each\n iteration of curation."}
{"_id": "q_13600", "text": "Helps remove extraneous whitespace from the lines of a file\n\n :param file in_file: A readable file or file-like\n :param file out_file: A writable file or file-like"}
{"_id": "q_13601", "text": "Adds a linted version of each document in the source directory to the target directory\n\n :param str source: Path to directory to lint\n :param str target: Path to directory to output"}
{"_id": "q_13602", "text": "Builds a skeleton for gene summaries\n\n :param entrez_ids: A list of Entrez Gene identifiers to query the PubMed service\n :return: An iterator over statement lines for NCBI Entrez Gene summaries"}
{"_id": "q_13603", "text": "Write a boilerplate BEL document, with standard document metadata, definitions.\n\n :param name: The unique name for this BEL document\n :param contact: The email address of the maintainer\n :param description: A description of the contents of this document\n :param authors: The authors of this document\n :param version: The version. Defaults to current date in format ``YYYYMMDD``.\n :param copyright: Copyright information about this document\n :param licenses: The license applied to this document\n :param disclaimer: The disclaimer for this document\n :param namespace_url: an optional dictionary of {str name: str URL} of namespaces\n :param namespace_patterns: An optional dictionary of {str name: str regex} namespaces\n :param annotation_url: An optional dictionary of {str name: str URL} of annotations\n :param annotation_patterns: An optional dictionary of {str name: str regex} of regex annotations\n :param annotation_list: An optional dictionary of {str name: set of names} of list annotations\n :param pmids: A list of PubMed identifiers to auto-populate with citation and abstract\n :param entrez_ids: A list of Entrez identifiers to autopopulate the gene summary as evidence\n :param file: A writable file or file-like. If None, defaults to :data:`sys.stdout`"}
{"_id": "q_13604", "text": "Get a sub-graph induced over all nodes matching the query string.\n\n :param graph: A BEL Graph\n :param query: A query string or iterable of query strings for node names\n\n Thinly wraps :func:`search_node_names` and :func:`get_subgraph_by_induction`."}
{"_id": "q_13605", "text": "Get the giant component of a graph."}
{"_id": "q_13606", "text": "Get a random graph by inducing over a percentage of the original nodes.\n\n :param graph: A BEL graph\n :param percentage: The percentage of edges to keep"}
{"_id": "q_13607", "text": "Get a random graph by keeping a certain percentage of original edges.\n\n :param graph: A BEL graph\n :param percentage: What percentage of edges to take"}
{"_id": "q_13608", "text": "Shuffle the node's data.\n\n Useful for permutation testing.\n\n :param graph: A BEL graph\n :param key: The node data dictionary key\n :param percentage: What percentage of possible swaps to make"}
{"_id": "q_13609", "text": "Check if all edges between two nodes have the same relation.\n\n :param pybel.BELGraph graph: A BEL Graph\n :param tuple u: The source BEL node\n :param tuple v: The target BEL node\n :return: If all edges from the source to target node have the same relation\n :rtype: bool"}
{"_id": "q_13610", "text": "Rewire a graph's edges' target nodes.\n\n - For BEL graphs, assumes edge consistency (all edges between two given nodes have the same relation)\n - Doesn't make self-edges\n\n :param pybel.BELGraph graph: A BEL graph\n :param float rewiring_probability: The probability of rewiring (between 0 and 1)\n :return: A rewired BEL graph"}
{"_id": "q_13611", "text": "Check if the source and target nodes are the same."}
{"_id": "q_13612", "text": "Check if the degradation of source causes activity of target."}
{"_id": "q_13613", "text": "Print a summary of the number of edges passing a given set of filters."}
{"_id": "q_13614", "text": "Pass for nodes that have the given namespace."}
{"_id": "q_13615", "text": "Pass for nodes that have one of the given namespaces."}
{"_id": "q_13616", "text": "Assign if a value is greater than or less than a cutoff."}
{"_id": "q_13617", "text": "Returns the results of concordance analysis on each subgraph, stratified by the given annotation.\n\n :param pybel.BELGraph graph: A BEL graph\n :param str annotation: The annotation to group by.\n :param str key: The node data dictionary key storing the logFC\n :param float cutoff: The optional logFC cutoff for significance\n :param int permutations: The number of random permutations to test. Defaults to 500\n :param float percentage: The percentage of the graph's edges to maintain. Defaults to 0.9\n :param bool use_ambiguous: Compare to ambiguous edges as well\n :rtype: dict[str,tuple]"}
{"_id": "q_13618", "text": "Remove all edges between node pairs with inconsistent edges.\n\n This is the all-or-nothing approach. It would be better to do more careful investigation of the evidences during\n curation."}
{"_id": "q_13619", "text": "Gets all walks under a given length starting at a given node\n\n :param networkx.Graph graph: A graph\n :param node: Starting node\n :param int length: The length of walks to get\n :return: A list of paths\n :rtype: list[tuple]"}
{"_id": "q_13620", "text": "Build a database of scores for NeuroMMSig annotated graphs.\n\n 1. Get all networks that use the Subgraph annotation\n 2. run on each"}
{"_id": "q_13621", "text": "Calculate the scores over precomputed candidate mechanisms.\n\n :param subgraphs: A dictionary of keys to their corresponding subgraphs\n :param key: The key in the node data dictionary representing the experimental data. Defaults to\n :data:`pybel_tools.constants.WEIGHT`.\n :param tag: The key for the nodes' data dictionaries where the scores will be put. Defaults to 'score'\n :param default_score: The initial score for all nodes. This number can go up or down.\n :param runs: The number of times to run the heat diffusion workflow. Defaults to 100.\n :param use_tqdm: Should there be a progress bar for runners?\n :return: A dictionary of keys to results tuples\n\n Example Usage:\n\n >>> import pandas as pd\n >>> from pybel_tools.generation import generate_bioprocess_mechanisms\n >>> from pybel_tools.analysis.heat import calculate_average_scores_on_subgraphs\n >>> # load graph and data\n >>> graph = ...\n >>> candidate_mechanisms = generate_bioprocess_mechanisms(graph)\n >>> scores = calculate_average_scores_on_subgraphs(candidate_mechanisms)\n >>> pd.DataFrame.from_items(scores.items(), orient='index', columns=RESULT_LABELS)"}
{"_id": "q_13622", "text": "Generate candidate mechanisms and run the heat diffusion workflow.\n\n :param graph: A BEL graph\n :param node: The BEL node that is the focus of this analysis\n :param key: The key in the node data dictionary representing the experimental data. Defaults to\n :data:`pybel_tools.constants.WEIGHT`.\n :param tag: The key for the nodes' data dictionaries where the scores will be put. Defaults to 'score'\n :param default_score: The initial score for all nodes. This number can go up or down.\n :param runs: The number of times to run the heat diffusion workflow. Defaults to 100.\n :param minimum_nodes: The minimum number of nodes a sub-graph needs to try running heat diffusion\n :return: A list of runners"}
{"_id": "q_13623", "text": "Get the average score over multiple runs.\n\n This function is very simple, and can be copied to do more interesting statistics over the :class:`Runner`\n instances. To iterate over the runners themselves, see :func:`workflow`\n\n :param graph: A BEL graph\n :param node: The BEL node that is the focus of this analysis\n :param key: The key in the node data dictionary representing the experimental data. Defaults to\n :data:`pybel_tools.constants.WEIGHT`.\n :param tag: The key for the nodes' data dictionaries where the scores will be put. Defaults to 'score'\n :param default_score: The initial score for all nodes. This number can go up or down.\n :param runs: The number of times to run the heat diffusion workflow. Defaults to 100.\n :param aggregator: A function that aggregates a list of scores. Defaults to :func:`numpy.average`.\n Could also use: :func:`numpy.mean`, :func:`numpy.median`, :func:`numpy.min`, :func:`numpy.max`\n :return: The average score for the target node"}
{"_id": "q_13624", "text": "Run the heat diffusion workflow and get runners for every possible candidate mechanism\n\n 1. Get all biological processes\n 2. Get candidate mechanism induced two level back from each biological process\n 3. Heat diffusion workflow for each candidate mechanism for multiple runs\n 4. Return all runner results\n\n :param graph: A BEL graph\n :param key: The key in the node data dictionary representing the experimental data. Defaults to\n :data:`pybel_tools.constants.WEIGHT`.\n :param tag: The key for the nodes' data dictionaries where the scores will be put. Defaults to 'score'\n :param default_score: The initial score for all nodes. This number can go up or down.\n :param runs: The number of times to run the heat diffusion workflow. Defaults to 100.\n :return: A dictionary of {node: list of runners}"}
{"_id": "q_13625", "text": "Run the heat diffusion workflow to get average score for every possible candidate mechanism.\n\n 1. Get all biological processes\n 2. Get candidate mechanism induced two level back from each biological process\n 3. Heat diffusion workflow on each candidate mechanism for multiple runs\n 4. Report average scores for each candidate mechanism\n\n :param graph: A BEL graph\n :param key: The key in the node data dictionary representing the experimental data. Defaults to\n :data:`pybel_tools.constants.WEIGHT`.\n :param tag: The key for the nodes' data dictionaries where the scores will be put. Defaults to 'score'\n :param default_score: The initial score for all nodes. This number can go up or down.\n :param runs: The number of times to run the heat diffusion workflow. Defaults to 100.\n :param aggregator: A function that aggregates a list of scores. Defaults to :func:`numpy.average`.\n Could also use: :func:`numpy.mean`, :func:`numpy.median`, :func:`numpy.min`, :func:`numpy.max`\n :return: A dictionary of {node: upstream causal subgraph}"}
{"_id": "q_13626", "text": "For each sub-graph induced over the edges matching the annotation, calculate the average score\n for all of the contained biological processes\n\n Assumes you haven't done anything yet\n\n 1. Generates biological process upstream candidate mechanistic sub-graphs with\n :func:`generate_bioprocess_mechanisms`\n 2. Calculates scores for each sub-graph with :func:`calculate_average_scores_on_sub-graphs`\n 3. Overlays data with pbt.integration.overlay_data\n 4. Calculates averages with pbt.selection.group_nodes.average_node_annotation\n\n :param graph: A BEL graph\n :param annotation: A BEL annotation\n :param key: The key in the node data dictionary representing the experimental data. Defaults to\n :data:`pybel_tools.constants.WEIGHT`.\n :param runs: The number of times to run the heat diffusion workflow. Defaults to 100.\n :param use_tqdm: Should there be a progress bar for runners?\n :return: A dictionary from {str annotation value: tuple scores}\n\n Example Usage:\n\n >>> import pybel\n >>> from pybel_tools.integration import overlay_data\n >>> from pybel_tools.analysis.heat import calculate_average_score_by_annotation\n >>> graph = pybel.from_path(...)\n >>> scores = calculate_average_score_by_annotation(graph, 'subgraph')"}
{"_id": "q_13627", "text": "Return an iterable over all nodes that are leaves.\n\n A node is a leaf if either:\n\n - it doesn't have any predecessors, OR\n - all of its predecessors have a score in their data dictionaries"}
{"_id": "q_13628", "text": "Iterate over all nodes without a score."}
{"_id": "q_13629", "text": "Remove random edges until there is at least one leaf node."}
{"_id": "q_13630", "text": "Calculate the score for all leaves.\n\n :return: The set of leaf nodes that were scored"}
{"_id": "q_13631", "text": "Determines if the algorithm is complete by checking if the target node of this analysis has been scored\n yet. Because the algorithm removes edges when it gets stuck until it is un-stuck, it is always guaranteed to\n finish.\n\n :return: Is the algorithm done running?"}
{"_id": "q_13632", "text": "Return the final score for the target node.\n\n :return: The final score for the target node"}
{"_id": "q_13633", "text": "Calculate the new score of the given node."}
{"_id": "q_13634", "text": "Return the numpy structured array data type for sample states\n\n Helper function\n\n Parameters\n ----------\n spanning_cluster : bool, optional\n Whether to detect a spanning cluster or not.\n Defaults to ``True``.\n\n Returns\n -------\n ret : list of pairs of str\n A list of tuples of field names and data types to be used as ``dtype``\n argument in numpy ndarray constructors\n\n See Also\n --------\n http://docs.scipy.org/doc/numpy/user/basics.rec.html\n canonical_statistics_dtype"}
{"_id": "q_13635", "text": "The NumPy Structured Array type for canonical statistics\n\n Helper function\n\n Parameters\n ----------\n spanning_cluster : bool, optional\n Whether to detect a spanning cluster or not.\n Defaults to ``True``.\n\n Returns\n -------\n ret : list of pairs of str\n A list of tuples of field names and data types to be used as ``dtype``\n argument in numpy ndarray constructors\n\n See Also\n --------\n http://docs.scipy.org/doc/numpy/user/basics.rec.html\n microcanoncial_statistics_dtype\n canonical_averages_dtype"}
{"_id": "q_13636", "text": "The NumPy Structured Array type for canonical averages over several\n runs\n\n Helper function\n\n Parameters\n ----------\n spanning_cluster : bool, optional\n Whether to detect a spanning cluster or not.\n Defaults to ``True``.\n\n Returns\n -------\n ret : list of pairs of str\n A list of tuples of field names and data types to be used as ``dtype``\n argument in numpy ndarray constructors\n\n See Also\n --------\n http://docs.scipy.org/doc/numpy/user/basics.rec.html\n canonical_statistics_dtype\n finalized_canonical_averages_dtype"}
{"_id": "q_13637", "text": "Initialize the canonical averages from a single-run cluster statistics\n\n Parameters\n ----------\n canonical_statistics : 1-D structured ndarray\n Typically contains the canonical statistics for a range of values\n of the occupation probability ``p``.\n The dtype is the result of `canonical_statistics_dtype`.\n\n Returns\n -------\n ret : structured ndarray\n The dype is the result of `canonical_averages_dtype`.\n\n ret['number_of_runs'] : 1-D ndarray of int\n Equals ``1`` (initial run).\n\n ret['percolation_probability_mean'] : 1-D array of float\n Equals ``canonical_statistics['percolation_probability']``\n (if ``percolation_probability`` is present)\n\n ret['percolation_probability_m2'] : 1-D array of float\n Each entry is ``0.0``\n\n ret['max_cluster_size_mean'] : 1-D array of float\n Equals ``canonical_statistics['max_cluster_size']``\n\n ret['max_cluster_size_m2'] : 1-D array of float\n Each entry is ``0.0``\n\n ret['moments_mean'] : 2-D array of float\n Equals ``canonical_statistics['moments']``\n\n ret['moments_m2'] : 2-D array of float\n Each entry is ``0.0``\n\n See Also\n --------\n canonical_averages_dtype\n bond_canonical_statistics"}
{"_id": "q_13638", "text": "Reduce the canonical averages over several runs\n\n This is a \"true\" reducer.\n It is associative and commutative.\n\n This is a wrapper around `simoa.stats.online_variance`.\n\n Parameters\n ----------\n row_a, row_b : structured ndarrays\n Output of this function, or initial input from\n `bond_initialize_canonical_averages`\n\n Returns\n -------\n ret : structured ndarray\n Array is of dtype as returned by `canonical_averages_dtype`\n\n See Also\n --------\n bond_initialize_canonical_averages\n canonical_averages_dtype\n simoa.stats.online_variance"}
{"_id": "q_13639", "text": "The NumPy Structured Array type for finalized canonical averages over\n several runs\n\n Helper function\n\n Parameters\n ----------\n spanning_cluster : bool, optional\n Whether to detect a spanning cluster or not.\n Defaults to ``True``.\n\n Returns\n -------\n ret : list of pairs of str\n A list of tuples of field names and data types to be used as ``dtype``\n argument in numpy ndarray constructors\n\n See Also\n --------\n http://docs.scipy.org/doc/numpy/user/basics.rec.html\n canonical_averages_dtype"}
{"_id": "q_13640", "text": "Print a summary of the number of nodes passing a given set of filters.\n\n :param graph: A BEL graph\n :param node_filters: A node filter or list/tuple of node filters"}
{"_id": "q_13641", "text": "Build a filter that only passes on nodes in the given list.\n\n :param nodes: An iterable of BEL nodes"}
{"_id": "q_13642", "text": "Build a filter function for matching the given BEL function with the given namespace or namespaces.\n\n :param func: A BEL function\n :param namespace: The namespace to search by"}
{"_id": "q_13643", "text": "Build a filter that passes only on nodes that have the given key in their data dictionary.\n\n :param key: A key for the node's data dictionary"}
{"_id": "q_13644", "text": "Get a mapping from variants of the given node to all of its upstream controllers."}
{"_id": "q_13645", "text": "Make a dict that accumulates the values for each key in an iterator of doubles."}
{"_id": "q_13646", "text": "Return a histogram of the different types of relations present in a graph.\n\n Note: this operation only counts each type of edge once for each pair of nodes"}
{"_id": "q_13647", "text": "Count in how many edges each annotation appears in a graph\n\n :param graph: A BEL graph\n :param annotation: The annotation to count\n :return: A Counter from {annotation value: frequency}"}
{"_id": "q_13648", "text": "Add the same edge, but in the opposite direction if not already present.\n\n :type graph: pybel.BELGraph\n :type u: tuple\n :type v: tuple\n :type k: int"}
{"_id": "q_13649", "text": "Build a template BEL document with the given PubMed identifiers."}
{"_id": "q_13650", "text": "The group index with respect to wavelength.\n\n Args:\n wavelength (float, list, None): The wavelength(s) the group\n index will be evaluated at.\n\n Returns:\n float, list: The group index at the target wavelength(s)."}
{"_id": "q_13651", "text": "Helpful function to evaluate Cauchy equations.\n\n Args:\n wavelength (float, list, None): The wavelength(s) the\n Cauchy equation will be evaluated at.\n coefficients (list): A list of the coefficients of\n the Cauchy equation.\n\n Returns:\n float, list: The refractive index at the target wavelength(s)."}
{"_id": "q_13652", "text": "Login on backend with username and password\n\n :return: None"}
{"_id": "q_13653", "text": "Log into the backend and get the token\n\n generate parameter may have following values:\n - enabled: require current token (default)\n - force: force new token generation\n - disabled\n\n if login is:\n - accepted, returns True\n - refused, returns False\n\n In case of any error, raises a BackendException\n\n :param username: login name\n :type username: str\n :param password: password\n :type password: str\n :param generate: Can have these values: enabled | force | disabled\n :type generate: str\n :param proxies: dict of proxy (http and / or https)\n :type proxies: dict\n :return: return True if authentication is successfull, otherwise False\n :rtype: bool"}
{"_id": "q_13654", "text": "Connect to alignak backend and retrieve all available child endpoints of root\n\n If connection is successful, returns a list of all the resources available in the backend:\n Each resource is identified with its title and provides its endpoint relative to backend\n root endpoint.::\n\n [\n {u'href': u'loghost', u'title': u'loghost'},\n {u'href': u'escalation', u'title': u'escalation'},\n ...\n ]\n\n\n If an error occurs a BackendException is raised.\n\n If an exception occurs, it is raised to caller.\n\n :return: list of available resources\n :rtype: list"}
{"_id": "q_13655", "text": "Get all items in the specified endpoint of alignak backend\n\n If an error occurs, a BackendException is raised.\n\n If the max_results parameter is not specified in parameters, it is set to\n BACKEND_PAGINATION_LIMIT (backend maximum value) to limit requests number.\n\n This method builds a response that always contains: _items and _status::\n\n {\n u'_items': [\n ...\n ],\n u'_status': u'OK'\n }\n\n :param endpoint: endpoint (API URL) relative from root endpoint\n :type endpoint: str\n :param params: list of parameters for the backend API\n :type params: dict\n :return: dict of properties\n :rtype: dict"}
{"_id": "q_13656", "text": "Method to delete an item or all items\n\n headers['If-Match'] must contain the _etag identifier of the element to delete\n\n :param endpoint: endpoint (API URL)\n :type endpoint: str\n :param headers: headers (example: Content-Type)\n :type headers: dict\n :return: response (deletion information)\n :rtype: dict"}
{"_id": "q_13657", "text": "Get count of rows in table object.\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob. Or menu heirarchy\n @type object_name: string\n\n @return: Number of rows.\n @rtype: integer"}
{"_id": "q_13658", "text": "Select multiple row\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob. \n @type object_name: string\n @param row_text_list: Row list with matching text to select\n @type row_text: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13659", "text": "Select row partial match\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob. \n @type object_name: string\n @param row_text: Row text to select\n @type row_text: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13660", "text": "Select row index\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob. \n @type object_name: string\n @param row_index: Row index to select\n @type row_index: integer\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13661", "text": "Get cell value\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob. \n @type object_name: string\n @param row_index: Row index to get\n @type row_index: integer\n @param column: Column index to get, default value 0\n @type column: integer\n\n @return: cell value on success.\n @rtype: string"}
{"_id": "q_13662", "text": "Get table row index matching given text\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob. \n @type object_name: string\n @param row_text: Row text to select\n @type row_text: string\n\n @return: row index matching the text on success.\n @rtype: integer"}
{"_id": "q_13663", "text": "Start memory and CPU monitoring, with the time interval between\n each process scan\n\n @param process_name: Process name, ex: firefox-bin.\n @type process_name: string\n @param interval: Time interval between each process scan\n @type interval: double\n\n @return: 1 on success\n @rtype: integer"}
{"_id": "q_13664", "text": "Stop memory and CPU monitoring\n\n @param process_name: Process name, ex: firefox-bin.\n @type process_name: string\n\n @return: 1 on success\n @rtype: integer"}
{"_id": "q_13665", "text": "get CPU stat for the give process name\n\n @param process_name: Process name, ex: firefox-bin.\n @type process_name: string\n\n @return: cpu stat list on success, else empty list\n If same process name, running multiple instance,\n get the stat of all the process CPU usage\n @rtype: list"}
{"_id": "q_13666", "text": "get memory stat\n\n @param process_name: Process name, ex: firefox-bin.\n @type process_name: string\n\n @return: memory stat list on success, else empty list\n If same process name, running multiple instance,\n get the stat of all the process memory usage\n @rtype: list"}
{"_id": "q_13667", "text": "Get object property value.\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n @param prop: property name.\n @type prop: string\n\n @return: property\n @rtype: string"}
{"_id": "q_13668", "text": "Gets the list of object available in the window, which matches\n component name or role name or both.\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param child_name: Child name to search for.\n @type child_name: string\n @param role: role name to search for, or an empty string for wildcard.\n @type role: string\n @param parent: parent name to search for, or an empty string for wildcard.\n @type role: string\n @return: list of matched children names\n @rtype: list"}
{"_id": "q_13669", "text": "Launch application.\n\n @param cmd: Command line string to execute.\n @type cmd: string\n @param args: Arguments to the application\n @type args: list\n @param delay: Delay after the application is launched\n @type delay: int\n @param env: GNOME accessibility environment to be set or not\n @type env: int\n @param lang: Application language to be used\n @type lang: string\n\n @return: 1 on success\n @rtype: integer\n\n @raise LdtpServerException: When command fails"}
{"_id": "q_13670", "text": "Activate window.\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13671", "text": "Click item.\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13672", "text": "Get object size\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob. Or menu heirarchy\n @type object_name: string\n\n @return: x, y, width, height on success.\n @rtype: list"}
{"_id": "q_13673", "text": "Wait till a window or component exists.\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n @param guiTimeOut: Wait timeout in seconds\n @type guiTimeOut: integer\n @param state: Object state used only when object_name is provided.\n @type object_name: string\n\n @return: 1 if GUI was found, 0 if not.\n @rtype: integer"}
{"_id": "q_13674", "text": "Check whether an object state is enabled or not\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n\n @return: 1 on success 0 on failure.\n @rtype: integer"}
{"_id": "q_13675", "text": "Check item.\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13676", "text": "Verify check item.\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n\n @return: 1 on success 0 on failure.\n @rtype: integer"}
{"_id": "q_13677", "text": "Get access key of given object\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob. Or menu heirarchy\n @type object_name: string\n\n @return: access key in string format on success, else LdtpExecutionError on failure.\n @rtype: string"}
{"_id": "q_13678", "text": "Returns True if path1 and path2 refer to the same file."}
{"_id": "q_13679", "text": "Method to test if the general pasteboard is empty or not with respect\n to the type of object you want.\n\n Parameters: datatype (defaults to strings)\n Returns: Boolean True (empty) / False (has contents); Raises\n exception (passes any raised up)"}
{"_id": "q_13680", "text": "Match given string, by escaping regex characters"}
{"_id": "q_13681", "text": "Verify a menu item is enabled\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob. Or menu heirarchy\n @type object_name: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13682", "text": "Verify a menu item is checked\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to look for, either full name,\n LDTP's name convention, or a Unix glob. Or menu heirarchy\n @type object_name: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13683", "text": "Get a random app that has windows.\n\n Raise a ValueError exception if no GUI applications are found."}
{"_id": "q_13684", "text": "Launch the application with the specified bundle ID"}
{"_id": "q_13685", "text": "Launch app with a given bundle path.\n\n Return True if succeed."}
{"_id": "q_13686", "text": "Terminate app with a given bundle ID.\n Requires 10.6.\n\n Return True if succeed."}
{"_id": "q_13687", "text": "Private method to queue events to run.\n\n Each event in queue is a tuple (event call, args to event call)."}
{"_id": "q_13688", "text": "Add keypress to queue.\n\n Parameters: key character or constant referring to a non-alpha-numeric\n key (e.g. RETURN or TAB)\n modifiers\n global or app specific\n Returns: None or raise ValueError exception."}
{"_id": "q_13689", "text": "Send one character with no modifiers.\n\n Parameters: key character or constant referring to a non-alpha-numeric\n key (e.g. RETURN or TAB)\n modifier flags,\n global or app specific\n Returns: None or raise ValueError exception"}
{"_id": "q_13690", "text": "Private method to handle generic mouse button clicking.\n\n Parameters: coord (x, y) to click, mouseButton (e.g.,\n kCGMouseButtonLeft), modFlags set (int)\n Optional: clickCount (default 1; set to 2 for double-click; 3 for\n triple-click on host)\n Returns: None"}
{"_id": "q_13691", "text": "Perform the specified action."}
{"_id": "q_13692", "text": "Generator which yields all AXChildren of the object."}
{"_id": "q_13693", "text": "Generator which recursively yields all AXChildren of the object."}
{"_id": "q_13694", "text": "Perform _match but on another object, not self."}
{"_id": "q_13695", "text": "Generator which yields matches on AXChildren."}
{"_id": "q_13696", "text": "Generator which yields matches on AXChildren and their children."}
{"_id": "q_13697", "text": "Return a list of all children that match the specified criteria."}
{"_id": "q_13698", "text": "Return the bundle ID of the application."}
{"_id": "q_13699", "text": "Return the specified item in a pop up menu."}
{"_id": "q_13700", "text": "Drag the left mouse button without modifiers pressed.\n\n Parameters: coordinates to click on screen (tuple (x, y))\n dest coordinates to drag to (tuple (x, y))\n interval to send event of btn down, drag and up\n Returns: None"}
{"_id": "q_13701", "text": "Click the right mouse button with modifiers pressed.\n\n Parameters: coordinates to click; modifiers (list)\n Returns: None"}
{"_id": "q_13702", "text": "Click the left mouse button and drag object.\n\n Parameters: stopCoord, the position of dragging stopped\n strCoord, the position of dragging started\n (0,0) will get current position\n speed is mouse moving speed, 0 to unlimited\n Returns: None"}
{"_id": "q_13703", "text": "Triple-click primary mouse button.\n\n Parameters: coordinates to click (assume primary is left button)\n Returns: None"}
{"_id": "q_13704", "text": "Generic wait for a UI event that matches the specified\n criteria to occur.\n\n For customization of the callback, use keyword args labeled\n 'callback', 'args', and 'kwargs' for the callback fn, callback args,\n and callback kwargs, respectively. Also note that on return,\n the observer-returned UI element will be included in the first\n argument if 'args' are given. Note also that if the UI element is\n destroyed, callback should not use it, otherwise the function will\n hang."}
{"_id": "q_13705", "text": "Convenience method to wait for focused window to change\n\n Returns: Boolean"}
{"_id": "q_13706", "text": "Create a junction at link_name pointing to source."}
{"_id": "q_13707", "text": "Remove registered callback on window create\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n\n @return: 1 if registration was successful, 0 if not.\n @rtype: integer"}
{"_id": "q_13708", "text": "Stop the current event loop if possible\n returns True if it expects that it was successful, False otherwise"}
{"_id": "q_13709", "text": "Main entry point. Parse command line options and start up a server."}
{"_id": "q_13710", "text": "Server Bind. Forces reuse of port."}
{"_id": "q_13711", "text": "Logs the message in the root logger with the log level\n @param message: Message to be logged\n @type message: string\n @param level: Log level, defaul DEBUG\n @type level: integer\n \n @return: 1 on success and 0 on error\n @rtype: integer"}
{"_id": "q_13712", "text": "Stop logging.\n \n @return: 1 on success and 0 on error\n @rtype: integer"}
{"_id": "q_13713", "text": "Captures screenshot of the whole desktop or given window\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param x: x co-ordinate value\n @type x: integer\n @param y: y co-ordinate value\n @type y: integer\n @param width: width co-ordinate value\n @type width: integer\n @param height: height co-ordinate value\n @type height: integer\n\n @return: screenshot filename\n @rtype: string"}
{"_id": "q_13714", "text": "On window create, call the function with given arguments\n\n @param window_name: Window name to look for, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param fn_name: Callback function\n @type fn_name: function\n @param *args: arguments to be passed to the callback function\n @type *args: var args\n\n @return: 1 if registration was successful, 0 if not.\n @rtype: integer"}
{"_id": "q_13715", "text": "Register at-spi event\n\n @param event_name: Event name in at-spi format.\n @type event_name: string\n @param fn_name: Callback function\n @type fn_name: function\n @param *args: arguments to be passed to the callback function\n @type *args: var args\n\n @return: 1 if registration was successful, 0 if not.\n @rtype: integer"}
{"_id": "q_13716", "text": "Register keystroke events\n\n @param keys: key to listen\n @type keys: string\n @param modifiers: control / alt combination using gtk MODIFIERS\n @type modifiers: int\n @param fn_name: Callback function\n @type fn_name: function\n @param *args: arguments to be passed to the callback function\n @type *args: var args\n\n @return: 1 if registration was successful, 0 if not.\n @rtype: integer"}
{"_id": "q_13717", "text": "Verify scrollbar is horizontal\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13718", "text": "Set max value\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13719", "text": "Set min value\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13720", "text": "Press scrollbar down with number of iterations\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n @param interations: iterations to perform on slider increase\n @type iterations: integer\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13721", "text": "Press scrollbar up with number of iterations\n\n @param window_name: Window name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type window_name: string\n @param object_name: Object name to type in, either full name,\n LDTP's name convention, or a Unix glob.\n @type object_name: string\n @param interations: iterations to perform on slider increase\n @type iterations: integer\n\n @return: 1 on success.\n @rtype: integer"}
{"_id": "q_13722", "text": "Get playlist information of a user-generated Google Music playlist.\n\n\t\tParameters:\n\t\t\tplaylist (str): Name or ID of Google Music playlist. Names are case-sensitive.\n\t\t\t\tGoogle allows multiple playlists with the same name.\n\t\t\t\tIf multiple playlists have the same name, the first one encountered is used.\n\n\t\tReturns:\n\t\t\tdict: The playlist dict as returned by Mobileclient.get_all_user_playlist_contents."}
{"_id": "q_13723", "text": "Create song list from a user-generated Google Music playlist.\n\n\t\tParameters:\n\t\t\tplaylist (str): Name or ID of Google Music playlist. Names are case-sensitive.\n\t\t\t\tGoogle allows multiple playlists with the same name.\n\t\t\t\tIf multiple playlists have the same name, the first one encountered is used.\n\n\t\t\tinclude_filters (list): A list of ``(field, pattern)`` tuples.\n\t\t\t\tFields are any valid Google Music metadata field available to the Musicmanager client.\n\t\t\t\tPatterns are Python regex patterns.\n\t\t\t\tGoogle Music songs are filtered out if the given metadata field values don't match any of the given patterns.\n\n\t\t\texclude_filters (list): A list of ``(field, pattern)`` tuples.\n\t\t\t\tFields are any valid Google Music metadata field available to the Musicmanager client.\n\t\t\t\tPatterns are Python regex patterns.\n\t\t\t\tGoogle Music songs are filtered out if the given metadata field values match any of the given patterns.\n\n\t\t\tall_includes (bool): If ``True``, all include_filters criteria must match to include a song.\n\n\t\t\tall_excludes (bool): If ``True``, all exclude_filters criteria must match to exclude a song.\n\n\t\tReturns:\n\t\t\tA list of Google Music song dicts in the playlist matching criteria and\n\t\t\ta list of Google Music song dicts in the playlist filtered out using filter criteria."}
{"_id": "q_13724", "text": "Sets command name and formatting for subsequent calls to logger"}
{"_id": "q_13725", "text": "Suppress default exit behavior"}
{"_id": "q_13726", "text": "Recognizes and claims MuTect VCFs form the set of all input VCFs.\n\n Each defined caller has a chance to evaluate and claim all the incoming\n files as something that it can process.\n\n Args:\n file_readers: the collection of currently unclaimed files\n\n Returns:\n A tuple of unclaimed readers and MuTectVcfReaders."}
{"_id": "q_13727", "text": "Returns a standardized column header.\n\n MuTect sample headers include the name of input alignment, which is\n nice, but doesn't match up with the sample names reported in Strelka\n or VarScan. To fix this, we replace with NORMAL and TUMOR using the\n MuTect metadata command line to replace them correctly."}
{"_id": "q_13728", "text": "Recognizes and claims VarScan VCFs form the set of all input VCFs.\n\n Each defined caller has a chance to evaluate and claim all the incoming\n files as something that it can process. Since VarScan can claim\n high-confidence files as well, this process is significantly more\n complex than for other callers.\n\n Args:\n file_readers: the collection of currently unclaimed files\n\n Returns:\n A tuple of unclaimed readers and VarScanVcfReaders."}
{"_id": "q_13729", "text": "Derive mean and stdev.\n\n Adapted from online variance algorithm from Knuth, The Art of Computer \n Programming, volume 2\n\n Returns: mean and stdev when len(values) > 1, otherwise (None, None)\n Values rounded to _MAX_PRECISION to ameliorate discrepancies between\n python versions."}
{"_id": "q_13730", "text": "Send a JSON request.\n\n Returns True if everything went well, otherwise it returns the status\n code of the response."}
{"_id": "q_13731", "text": "Return a list of registered projects.\n\n :param limit: Number of returned items, default 100\n :type limit: integer\n :param offset: Offset for the query, default 0\n :type offset: integer\n :param last_id: id of the last project, used for pagination. If provided, offset is ignored\n :type last_id: integer\n :rtype: list\n :returns: A list of PYBOSSA Projects"}
{"_id": "q_13732", "text": "Return a PYBOSSA Project for the project_id.\n\n :param project_id: PYBOSSA Project ID\n :type project_id: integer\n :rtype: PYBOSSA Project\n :returns: A PYBOSSA Project object"}
{"_id": "q_13733", "text": "Update a project instance.\n\n :param project: PYBOSSA project\n :type project: PYBOSSA Project\n :returns: True -- the response status code"}
{"_id": "q_13734", "text": "Delete a Project with id = project_id.\n\n :param project_id: PYBOSSA Project ID\n :type project_id: integer\n :returns: True -- the response status code"}
{"_id": "q_13735", "text": "Return a list of registered categories.\n\n :param limit: Number of returned items, default 20\n :type limit: integer\n :param offset: Offset for the query, default 0\n :type offset: integer\n :param last_id: id of the last category, used for pagination. If provided, offset is ignored\n :type last_id: integer\n :rtype: list\n :returns: A list of PYBOSSA Categories"}
{"_id": "q_13736", "text": "Return a PYBOSSA Category for the category_id.\n\n :param category_id: PYBOSSA Category ID\n :type category_id: integer\n :rtype: PYBOSSA Category\n :returns: A PYBOSSA Category object"}
{"_id": "q_13737", "text": "Create a Category.\n\n :param name: PYBOSSA Category Name\n :type name: string\n :param description: PYBOSSA Category description\n :type decription: string\n :returns: True -- the response status code"}
{"_id": "q_13738", "text": "Update a Category instance.\n\n :param category: PYBOSSA Category\n :type category: PYBOSSA Category\n :returns: True -- the response status code"}
{"_id": "q_13739", "text": "Delete a Category with id = category_id.\n\n :param category_id: PYBOSSA Category ID\n :type category_id: integer\n :returns: True -- the response status code"}
{"_id": "q_13740", "text": "Return a list of tasks for a given project ID.\n\n :param project_id: PYBOSSA Project ID\n :type project_id: integer\n :param limit: Number of returned items, default 100\n :type limit: integer\n :param offset: Offset for the query, default 0\n :param last_id: id of the last task, used for pagination. If provided, offset is ignored\n :type last_id: integer\n :type offset: integer\n :returns: True -- the response status code"}
{"_id": "q_13741", "text": "Return a list of matched tasks for a given project ID.\n\n :param project_id: PYBOSSA Project ID\n :type project_id: integer\n :param kwargs: PYBOSSA Task members\n :type info: dict\n :rtype: list\n :returns: A list of tasks that match the kwargs"}
{"_id": "q_13742", "text": "Update a task for a given task ID.\n\n :param task: PYBOSSA task"}
{"_id": "q_13743", "text": "Delete a task for a given task ID.\n\n :param task: PYBOSSA task"}
{"_id": "q_13744", "text": "Delete the given taskrun.\n\n :param task: PYBOSSA task"}
{"_id": "q_13745", "text": "Return a list of matched results for a given project ID.\n\n :param project_id: PYBOSSA Project ID\n :type project_id: integer\n :param kwargs: PYBOSSA Results members\n :type info: dict\n :rtype: list\n :returns: A list of results that match the kwargs"}
{"_id": "q_13746", "text": "Update a result for a given result ID.\n\n :param result: PYBOSSA result"}
{"_id": "q_13747", "text": "Return the object without the forbidden attributes."}
{"_id": "q_13748", "text": "Create a helping material for a given project ID.\n\n :param project_id: PYBOSSA Project ID\n :type project_id: integer\n :param info: PYBOSSA Helping Material info JSON field\n :type info: dict\n :param media_url: URL for a media file (image, video or audio)\n :type media_url: string\n :param file_path: File path to the local image, video or sound to upload. \n :type file_path: string\n :returns: True -- the response status code"}
{"_id": "q_13749", "text": "Return a list of helping materials for a given project ID.\n\n :param project_id: PYBOSSA Project ID\n :type project_id: integer\n :param limit: Number of returned items, default 100\n :type limit: integer\n :param offset: Offset for the query, default 0\n :param last_id: id of the last helping material, used for pagination. If provided, offset is ignored\n :type last_id: integer\n :type offset: integer\n :returns: True -- the response status code"}
{"_id": "q_13750", "text": "Return a list of matched helping materials for a given project ID.\n\n :param project_id: PYBOSSA Project ID\n :type project_id: integer\n :param kwargs: PYBOSSA HelpingMaterial members\n :type info: dict\n :rtype: list\n :returns: A list of helping materials that match the kwargs"}
{"_id": "q_13751", "text": "Update a helping material for a given helping material ID.\n\n :param helpingmaterial: PYBOSSA helping material"}
{"_id": "q_13752", "text": "Authenticate the gmusicapi Musicmanager instance.\n\n\t\tParameters:\n\t\t\toauth_filename (str): The filename of the oauth credentials file to use/create for login.\n\t\t\t\tDefault: ``oauth``\n\n\t\t\tuploader_id (str): A unique id as a MAC address (e.g. ``'00:11:22:33:AA:BB'``).\n\t\t\t\tThis should only be provided in cases where the default (host MAC address incremented by 1) won't work.\n\n\t\tReturns:\n\t\t\t``True`` on successful login, ``False`` on unsuccessful login."}
{"_id": "q_13753", "text": "Download Google Music songs.\n\n\t\tParameters:\n\t\t\tsongs (list or dict): Google Music song dict(s).\n\n\t\t\ttemplate (str): A filepath which can include template patterns.\n\n\t\tReturns:\n\t\t\tA list of result dictionaries.\n\t\t\t::\n\n\t\t\t\t[\n\t\t\t\t\t{'result': 'downloaded', 'id': song_id, 'filepath': downloaded[song_id]}, # downloaded\n\t\t\t\t\t{'result': 'error', 'id': song_id, 'message': error[song_id]} # error\n\t\t\t\t]"}
{"_id": "q_13754", "text": "Split data into lines where lines are separated by LINE_TERMINATORS.\n\n :param data: Any chunk of binary data.\n :return: List of lines without any characters at LINE_TERMINATORS."}
{"_id": "q_13755", "text": "Return line terminator data begins with or None."}
{"_id": "q_13756", "text": "Return line terminator data ends with or None."}
{"_id": "q_13757", "text": "Seek next line relative to the current file position.\n\n :return: Position of the line or -1 if next line was not found."}
{"_id": "q_13758", "text": "Seek previous line relative to the current file position.\n\n :return: Position of the line or -1 if previous line was not found."}
{"_id": "q_13759", "text": "Return the last lines of the file."}
{"_id": "q_13760", "text": "Return the top lines of the file."}
{"_id": "q_13761", "text": "Convert Unix path from Cygwin to Windows path."}
{"_id": "q_13762", "text": "Get mutagen metadata dict from a file."}
{"_id": "q_13763", "text": "Compare two song collections to find missing songs.\n\n\tParameters:\n\t\tsrc_songs (list): Google Music song dicts or filepaths of local songs.\n\n\t\tdest_songs (list): Google Music song dicts or filepaths of local songs.\n\n\tReturns:\n\t\tA list of Google Music song dicts or local song filepaths from source missing in destination."}
{"_id": "q_13764", "text": "Exclude file paths based on regex patterns.\n\n\tParameters:\n\t\tfilepaths (list or str): Filepath(s) to check.\n\n\t\texclude_patterns (list): Python regex patterns to check filepaths against.\n\n\tReturns:\n\t\tA list of filepaths to include and a list of filepaths to exclude."}
{"_id": "q_13765", "text": "Check a song metadata field value for a pattern."}
{"_id": "q_13766", "text": "Check a song metadata dict against a set of metadata filters."}
{"_id": "q_13767", "text": "Match a Google Music song dict against a set of metadata filters.\n\n\tParameters:\n\t\tsongs (list): Google Music song dicts to filter.\n\n\t\tinclude_filters (list): A list of ``(field, pattern)`` tuples.\n\t\t\tFields are any valid Google Music metadata field available to the Musicmanager client.\n\t\t\tPatterns are Python regex patterns.\n\t\t\tGoogle Music songs are filtered out if the given metadata field values don't match any of the given patterns.\n\n\t\texclude_filters (list): A list of ``(field, pattern)`` tuples.\n\t\t\tFields are any valid Google Music metadata field available to the Musicmanager client.\n\t\t\tPatterns are Python regex patterns.\n\t\t\tGoogle Music songs are filtered out if the given metadata field values match any of the given patterns.\n\n\t\tall_includes (bool): If ``True``, all include_filters criteria must match to include a song.\n\n\t\tall_excludes (bool): If ``True``, all exclude_filters criteria must match to exclude a song.\n\n\tReturns:\n\t\tA list of Google Music song dicts matching criteria and\n\t\ta list of Google Music song dicts filtered out using filter criteria.\n\t\t::\n\n\t\t\t(matched, filtered)"}
{"_id": "q_13768", "text": "Match a local file against a set of metadata filters.\n\n\tParameters:\n\t\tfilepaths (list): Filepaths to filter.\n\n\t\tinclude_filters (list): A list of ``(field, pattern)`` tuples.\n\t\t\tFields are any valid mutagen metadata fields.\n\t\t\tPatterns are Python regex patterns.\n\t\t\tLocal songs are filtered out if the given metadata field values don't match any of the given patterns.\n\n\t\texclude_filters (list): A list of ``(field, pattern)`` tuples.\n\t\t\tFields are any valid mutagen metadata fields.\n\t\t\tPatterns are Python regex patterns.\n\t\t\tLocal songs are filtered out if the given metadata field values match any of the given patterns.\n\n\t\tall_includes (bool): If ``True``, all include_filters criteria must match to include a song.\n\n\t\tall_excludes (bool): If ``True``, all exclude_filters criteria must match to exclude a song.\n\n\tReturns:\n\t\tA list of local song filepaths matching criteria and\n\t\ta list of local song filepaths filtered out using filter criteria.\n\t\tInvalid music files are also filtered out.\n\t\t::\n\n\t\t\t(matched, filtered)"}
{"_id": "q_13769", "text": "Generate a filename for a song based on metadata.\n\n\tParameters:\n\t\tmetadata (dict): A metadata dict.\n\n\tReturns:\n\t\tA filename."}
{"_id": "q_13770", "text": "Create directory structure and file name based on metadata template.\n\n\tParameters:\n\t\ttemplate (str): A filepath which can include template patterns as defined by :param template_patterns:.\n\n\t\tmetadata (dict): A metadata dict.\n\n\t\ttemplate_patterns (dict): A dict of ``pattern: field`` pairs used to replace patterns with metadata field values.\n\t\t\tDefault: :const TEMPLATE_PATTERNS:\n\n\tReturns:\n\t\tA filepath."}
{"_id": "q_13771", "text": "Alternative constructor that parses VcfRecord from VCF string.\n\n Aspire to parse/represent the data such that it could be reliably\n round-tripped. (This nicety means INFO fields and FORMAT tags should be\n treated as ordered to avoid shuffling.)\n\n Args:\n vcf_line: the VCF variant record as a string; tab separated fields,\n trailing newlines are ignored. Must have at least 8 fixed fields\n (through INFO)\n sample_names: a list of sample name strings; these should match\n the VCF header column\n Returns:\n A mutable VcfRecord."}
{"_id": "q_13772", "text": "Returns string representation of format field."}
{"_id": "q_13773", "text": "Appends a new format tag-value for all samples.\n\n Args:\n tag_name: string tag name; must not already exist\n new_sample\n\n Raises:\n KeyError: if tag_name to be added already exists"}
{"_id": "q_13774", "text": "Replaces null or blank filter or adds filter to existing list."}
{"_id": "q_13775", "text": "Extract an alphabetically sorted list of elements from the compounds of\n the material.\n\n :returns: An alphabetically sorted list of elements."}
{"_id": "q_13776", "text": "Get the masses of elements in the package.\n\n :returns: [kg] An array of element masses. The sequence of the elements\n in the result corresponds with the sequence of elements in the\n element list of the material."}
{"_id": "q_13777", "text": "Calculate the density at the specified temperature.\n\n :param T: [K] temperature\n\n :returns: [kg/m3] density\n\n The **state parameter contains the keyword argument(s) specified above\\\n that are used to describe the state of the material."}
{"_id": "q_13778", "text": "Calculate the density at the specified temperature, pressure, and\n composition.\n\n :param T: [K] temperature\n :param P: [Pa] pressure\n :param x: [mole fraction] dictionary of compounds and mole fractions\n\n :returns: [kg/m3] density\n\n The **state parameter contains the keyword argument(s) specified above\\\n that are used to describe the state of the material."}
{"_id": "q_13779", "text": "Returns the categories available to the user. Specify `products` if\n you want to restrict to just the categories that hold the specified\n products, otherwise it'll do all."}
{"_id": "q_13780", "text": "Produces an appropriate _ProductsForm subclass for the given render\n type."}
{"_id": "q_13781", "text": "Creates a StaffProductsForm that restricts the available products to\n those that are available to a user."}
{"_id": "q_13782", "text": "Adds an error to the given product's field"}
{"_id": "q_13783", "text": "Set the parent path and the path from the new parent path.\n\n :param value: The path to the object's parent"}
{"_id": "q_13784", "text": "Create a sub account in the account.\n\n :param name: The account name.\n :param description: The account description.\n :param number: The account number.\n\n :returns: The created account."}
{"_id": "q_13785", "text": "Remove an account from the account's sub accounts.\n\n :param name: The name of the account to remove."}
{"_id": "q_13786", "text": "Retrieves a child account.\n This could be a descendant nested at any level.\n\n :param account_name: The name of the account to retrieve.\n\n :returns: The child account, if found, else None."}
{"_id": "q_13787", "text": "Create an account in the general ledger structure.\n\n :param name: The account name.\n :param number: The account number.\n :param account_type: The account type.\n\n :returns: The created account."}
{"_id": "q_13788", "text": "Returns the account and all of it's sub accounts.\n\n :param account: The account.\n :param result: The list to add all the accounts to."}
{"_id": "q_13789", "text": "Validates whether the accounts in a list of account names exists.\n\n :param names: The names of the accounts.\n\n :returns: The descendants of the account."}
{"_id": "q_13790", "text": "Create a transaction in the general ledger.\n\n :param name: The transaction's name.\n :param description: The transaction's description.\n :param tx_date: The date of the transaction.\n :param cr_account: The transaction's credit account's name.\n :param dt_account: The transaction's debit account's name.\n :param source: The name of source the transaction originated from.\n :param amount: The transaction amount.\n\n :returns: The created transaction."}
{"_id": "q_13791", "text": "Calculate a path relative to the specified module file.\n\n :param module_file_path: The file path to the module."}
{"_id": "q_13792", "text": "Get the date from a value that could be a date object or a string.\n\n :param date: The date object or string.\n\n :returns: The date object."}
{"_id": "q_13793", "text": "Calculate the local Nusselt number.\n\n :param L: [m] characteristic length of the heat transfer surface\n :param theta: [\u00b0] angle of the surface with the vertical\n :param Ts: [K] heat transfer surface temperature\n :param Tf: [K] bulk fluid temperature\n\n :returns: float"}
{"_id": "q_13794", "text": "Calculate the average Nusselt number.\n\n :param L: [m] characteristic length of the heat transfer surface\n :param theta: [\u00b0] angle of the surface with the vertical\n :param Ts: [K] heat transfer surface temperature\n :param **statef: [K] bulk fluid temperature\n\n :returns: float"}
{"_id": "q_13795", "text": "Calculate the local heat transfer coefficient.\n\n :param L: [m] characteristic length of the heat transfer surface\n :param theta: [\u00b0] angle of the surface with the vertical\n :param Ts: [K] heat transfer surface temperature\n :param Tf: [K] bulk fluid temperature\n\n :returns: [W/m2/K] float"}
{"_id": "q_13796", "text": "Calculate the average heat transfer coefficient.\n\n :param L: [m] characteristic length of the heat transfer surface\n :param theta: [\u00b0] angle of the surface with the vertical\n :param Ts: [K] heat transfer surface temperature\n :param Tf: [K] bulk fluid temperature\n\n :returns: [W/m2/K] float"}
{"_id": "q_13797", "text": "Generate URL on the modularized endpoints and url parameters"}
{"_id": "q_13798", "text": "Creates a form for specifying fields from a model to display."}
{"_id": "q_13799", "text": "Sends an e-mail to the given address.\n\n to: The address\n kind: the ID for an e-mail kind; it should point to a subdirectory of\n self.template_prefix containing subject.txt and message.html, which\n are django templates for the subject and HTML message respectively.\n\n context: a context for rendering the e-mail."}
{"_id": "q_13800", "text": "Start processing an OSM diff stream and yield one changeset at a time to\n the caller."}
{"_id": "q_13801", "text": "Parse a file-like containing OSM XML into memory and return an object with\n the nodes, ways, and relations it contains."}
{"_id": "q_13802", "text": "Parses the global OSM Notes feed and yields as much Note information as possible."}
{"_id": "q_13803", "text": "Calculate the enthalpy at the specified temperature and composition\n using equation 9 in Merrick1983b.\n\n :param T: [K] temperature\n :param y_C: Carbon mass fraction\n :param y_H: Hydrogen mass fraction\n :param y_O: Oxygen mass fraction\n :param y_N: Nitrogen mass fraction\n :param y_S: Sulphur mass fraction\n\n :returns: [J/kg] enthalpy\n\n The **state parameter contains the keyword argument(s) specified above\n that are used to describe the state of the material."}
{"_id": "q_13804", "text": "Returns true if the condition passes the filter"}
{"_id": "q_13805", "text": "Returns all of the items from queryset where the user has a\n product from a category invoking that item's condition in one of their\n carts."}
{"_id": "q_13806", "text": "Returns all of the items from queryset where the date falls into\n any specified range, but not yet where the stock limit is not yet\n reached."}
{"_id": "q_13807", "text": "Returns all of the items from queryset which are enabled by a user\n being a presenter or copresenter of a non-cancelled proposal."}
{"_id": "q_13808", "text": "Returns all of the items from conditions which are enabled by a\n user being member of a Django Auth Group."}
{"_id": "q_13809", "text": "Create an entity and add it to the model.\n\n :param name: The entity name.\n :param gl_structure: The entity's general ledger structure.\n :param description: The entity description.\n\n :returns: The created entity."}
{"_id": "q_13810", "text": "Remove an entity from the model.\n\n :param name: The name of the entity to remove."}
{"_id": "q_13811", "text": "Prepare the model for execution."}
{"_id": "q_13812", "text": "Execute the model."}
{"_id": "q_13813", "text": "Create a MaterialStream based on the specified parameters.\n\n :param assay: Name of the assay to be used to create the stream.\n :param mfr: Stream mass flow rate. [kg/h]\n :param P: Stream pressure. [atm]\n :param T: Stream temperature. [\u00b0C]\n :param normalise: Indicates whether the assay must be normalised\n before creating the Stream.\n\n :returns: MaterialStream object."}
{"_id": "q_13814", "text": "Calculate the enthalpy of the package at the specified temperature, in\n case the material is coal.\n\n :param T: [\u00b0C] temperature\n\n :returns: [kWh] enthalpy"}
{"_id": "q_13815", "text": "Set the temperature of the package to the specified value, and\n recalculate it's enthalpy.\n\n :param T: Temperature. [\u00b0C]"}
{"_id": "q_13816", "text": "Create a complete copy of the package.\n\n :returns: A new MaterialPackage object."}
{"_id": "q_13817", "text": "Set all the compound masses in the package to zero.\n Set the pressure to 1, the temperature to 25 and the enthalpy to zero."}
{"_id": "q_13818", "text": "Determine the mass of the specified compound in the package.\n\n :param compound: Formula and phase of a compound, e.g. \"Fe2O3[S1]\".\n\n :returns: Mass. [kg]"}
{"_id": "q_13819", "text": "Determine the mole amounts of all the compounds.\n\n :returns: List of amounts. [kmol]"}
{"_id": "q_13820", "text": "Determine the mole amount of the specified compound.\n\n :returns: Amount. [kmol]"}
{"_id": "q_13821", "text": "Determine the sum of mole amounts of all the compounds.\n\n :returns: Amount. [kmol]"}
{"_id": "q_13822", "text": "Determine the masses of elements in the package and return as a\n dictionary.\n\n :returns: Dictionary of element symbols and masses. [kg]"}
{"_id": "q_13823", "text": "Determine the mass of the specified elements in the package.\n\n :returns: Masses. [kg]"}
{"_id": "q_13824", "text": "Extract 'other' from this package, modifying this package and\n returning the extracted material as a new package.\n\n :param other: Can be one of the following:\n\n * float: A mass equal to other is extracted from self. Self is\n reduced by other and the extracted package is returned as\n a new package.\n * tuple (compound, mass): The other tuple specifies the mass\n of a compound to be extracted. It is extracted from self and\n the extracted mass is returned as a new package.\n * string: The 'other' string specifies the compound to be\n extracted. All of the mass of that compound will be removed\n from self and a new package created with it.\n * Material: The 'other' material specifies the list of\n compounds to extract.\n\n\n :returns: New MaterialPackage object."}
{"_id": "q_13825", "text": "Calculate the temperature of the stream given the specified\n enthalpy flow rate using a secant algorithm.\n\n :param H: Enthalpy flow rate. [kWh/h]\n\n :returns: Temperature. [\u00b0C]"}
{"_id": "q_13826", "text": "Set the enthalpy flow rate of the stream to the specified value, and\n recalculate it's temperature.\n\n :param H: The new enthalpy flow rate value. [kWh/h]"}
{"_id": "q_13827", "text": "Set the temperature of the stream to the specified value, and\n recalculate it's enthalpy.\n\n :param T: Temperature. [\u00b0C]"}
{"_id": "q_13828", "text": "Set the higher heating value of the stream to the specified value, and\n recalculate the formation enthalpy of the daf coal.\n\n :param HHV: MJ/kg coal, higher heating value"}
{"_id": "q_13829", "text": "Create a complete copy of the stream.\n\n :returns: A new MaterialStream object."}
{"_id": "q_13830", "text": "Set all the compound mass flow rates in the stream to zero.\n Set the pressure to 1, the temperature to 25 and the enthalpy to zero."}
{"_id": "q_13831", "text": "Determine the mass flow rate of the specified compound in the stream.\n\n :param compound: Formula and phase of a compound, e.g. \"Fe2O3[S1]\".\n\n :returns: Mass flow rate. [kg/h]"}
{"_id": "q_13832", "text": "Determine the amount flow rate of the specified compound.\n\n :returns: Amount flow rate. [kmol/h]"}
{"_id": "q_13833", "text": "Determine the sum of amount flow rates of all the compounds.\n\n :returns: Amount flow rate. [kmol/h]"}
{"_id": "q_13834", "text": "Determine the mass flow rates of elements in the stream and return as\n a dictionary.\n\n :returns: Dictionary of element symbols and mass flow rates. [kg/h]"}
{"_id": "q_13835", "text": "Determine the mass flow rate of the specified elements in the stream.\n\n :returns: Mass flow rates. [kg/h]"}
{"_id": "q_13836", "text": "Extract 'other' from this stream, modifying this stream and returning\n the extracted material as a new stream.\n\n :param other: Can be one of the following:\n\n * float: A mass flow rate equal to other is extracted from self. Self\n is reduced by other and the extracted stream is returned as\n a new stream.\n * tuple (compound, mass): The other tuple specifies the mass flow\n rate of a compound to be extracted. It is extracted from self and\n the extracted mass flow rate is returned as a new stream.\n * string: The 'other' string specifies the compound to be\n extracted. All of the mass flow rate of that compound will be\n removed from self and a new stream created with it.\n * Material: The 'other' material specifies the list of\n compounds to extract.\n\n\n :returns: New MaterialStream object."}
{"_id": "q_13837", "text": "Calculate the Grashof number.\n\n :param L: [m] heat transfer surface characteristic length.\n :param Ts: [K] heat transfer surface temperature.\n :param Tf: [K] bulk fluid temperature.\n :param beta: [1/K] fluid coefficient of thermal expansion.\n :param nu: [m2/s] fluid kinematic viscosity.\n\n :returns: float\n\n .. math::\n \\\\mathrm{Gr} = \\\\frac{g \\\\beta (Ts - Tinf ) L^3}{\\\\nu ^2}\n\n Characteristic dimensions:\n * vertical plate: vertical length\n * pipe: diameter\n * bluff body: diameter"}
{"_id": "q_13838", "text": "Calculate the Nusselt number.\n\n :param L: [m] heat transfer surface characteristic length.\n :param h: [W/K/m2] convective heat transfer coefficient.\n :param k: [W/K/m] fluid thermal conductivity.\n\n :returns: float"}
{"_id": "q_13839", "text": "Calculate the Sherwood number.\n\n :param L: [m] mass transfer surface characteristic length.\n :param h: [m/s] mass transfer coefficient.\n :param D: [m2/s] fluid mass diffusivity.\n\n :returns: float"}
{"_id": "q_13840", "text": "Decorator that makes the wrapped function raise ValidationError\n if we're doing something that could modify the cart.\n\n It also wraps the execution of this function in a database transaction,\n and marks the boundaries of a cart operations batch."}
{"_id": "q_13841", "text": "Returns the user's current cart, or creates a new cart\n if there isn't one ready yet."}
{"_id": "q_13842", "text": "Updates the cart's time last updated value, which is used to\n determine whether the cart has reserved the items and discounts it\n holds."}
{"_id": "q_13843", "text": "Applies the voucher with the given code to this cart."}
{"_id": "q_13844", "text": "Calculates all of the discounts available for this product."}
{"_id": "q_13845", "text": "Applies the best discounts on the given product, from the given\n discounts."}
{"_id": "q_13846", "text": "Create a model object from the data set for the property specified by\n the supplied symbol, using the specified polynomial degree.\n\n :param dataset: a DataSet object\n :param symbol: the symbol of the property to be described, e.g. 'rho'\n :param degree: the polynomial degree to use\n\n :returns: a new PolynomialModelT object"}
{"_id": "q_13847", "text": "Calculate the material physical property at the specified temperature\n in the units specified by the object's 'property_units' property.\n\n :param T: [K] temperature\n\n :returns: physical property value"}
{"_id": "q_13848", "text": "Returns the data rows for the table."}
{"_id": "q_13849", "text": "Renders the reports based on data.content_type's value.\n\n Arguments:\n data (ReportViewRequestData): The report data. data.content_type\n is used to determine how the reports are rendered.\n\n Returns:\n HTTPResponse: The rendered version of the report."}
{"_id": "q_13850", "text": "Lists all of the reports currently available."}
{"_id": "q_13851", "text": "Summarises the items sold and discounts granted for a given set of\n products, or products from categories."}
{"_id": "q_13852", "text": "Summarises paid items and payments."}
{"_id": "q_13853", "text": "Shows the history of payments into the system"}
{"_id": "q_13854", "text": "Summarises the usage of a given discount."}
{"_id": "q_13855", "text": "Shows each product line item from invoices, including their date and\n purchashing customer."}
{"_id": "q_13856", "text": "Shows the number of paid invoices containing given products or\n categories per day."}
{"_id": "q_13857", "text": "Shows all of the invoices in the system."}
{"_id": "q_13858", "text": "Shows registration status for speakers with a given proposal kind."}
{"_id": "q_13859", "text": "Create a sub component in the business component.\n\n :param name: The new component's name.\n :param description: The new component's description.\n\n :returns: The created component."}
{"_id": "q_13860", "text": "Remove a sub component from the component.\n\n :param name: The name of the component to remove."}
{"_id": "q_13861", "text": "Retrieve a child component given its name.\n\n :param name: The name of the component.\n\n :returns: The component."}
{"_id": "q_13862", "text": "Execute the entity at the current clock cycle.\n\n :param clock: The clock containing the current execution time and\n period information."}
{"_id": "q_13863", "text": "Update group counts with multiplier\n\n This is for handling atom counts on groups like (OH)2\n\n :param groups: iterable of Group/Element\n :param multiplier: the number to multiply by"}
{"_id": "q_13864", "text": "Calculate the mole fractions from the specified compound masses.\n\n :param masses: [kg] dictionary, e.g. {'SiO2': 3.0, 'FeO': 1.5}\n\n :returns: [mole fractions] dictionary"}
{"_id": "q_13865", "text": "Calculate the masses from the specified compound amounts.\n\n :param masses: [kmol] dictionary, e.g. {'SiO2': 3.0, 'FeO': 1.5}\n\n :returns: [kg] dictionary"}
{"_id": "q_13866", "text": "Calculate the mole fractions from the specified compound amounts.\n\n :param amounts: [kmol] dictionary, e.g. {'SiO2': 3.0, 'FeO': 1.5}\n\n :returns: [mass fractions] dictionary"}
{"_id": "q_13867", "text": "Convert the specified mass of the source compound to the target using\n element as basis.\n\n :param mass: Mass of from_compound. [kg]\n :param source: Formula and phase of the original compound, e.g.\n 'Fe2O3[S1]'.\n :param target: Formula and phase of the target compound, e.g. 'Fe[S1]'.\n :param element: Element to use as basis for the conversion, e.g. 'Fe' or\n 'O'.\n\n :returns: Mass of target. [kg]"}
{"_id": "q_13868", "text": "Determine the molar mass of a chemical compound.\n\n The molar mass is usually the mass of one mole of the substance, but here\n it is the mass of 1000 moles, since the mass unit used in auxi is kg.\n\n :param compound: Formula of a chemical compound, e.g. 'Fe2O3'.\n\n :returns: Molar mass. [kg/kmol]"}
{"_id": "q_13869", "text": "Determine the stoichiometry coefficient of an element in a chemical\n compound.\n\n :param compound: Formula of a chemical compound, e.g. 'SiO2'.\n :param element: Element, e.g. 'Si'.\n\n :returns: Stoichiometry coefficient."}
{"_id": "q_13870", "text": "Determine the stoichiometry coefficients of the specified elements in\n the specified chemical compound.\n\n :param compound: Formula of a chemical compound, e.g. 'SiO2'.\n :param elements: List of elements, e.g. ['Si', 'O', 'C'].\n\n :returns: List of stoichiometry coefficients."}
{"_id": "q_13871", "text": "Add another psd material package to this material package.\n\n :param other: The other material package."}
{"_id": "q_13872", "text": "Calculates the sum of unclaimed credit from this user's credit notes.\n\n Returns:\n Decimal: the sum of the values of unclaimed credit notes for the\n current user."}
{"_id": "q_13873", "text": "If the current user is unregistered, returns True if there are no\n products in the TICKET_PRODUCT_CATEGORY that are available to that user.\n\n If there *are* products available, the return False.\n\n If the current user *is* registered, then return None (it's not a\n pertinent question for people who already have a ticket)."}
{"_id": "q_13874", "text": "Goes through the registration process in order, making sure user sees\n all valid categories.\n\n The user must be logged in to see this view.\n\n Parameter:\n page_number:\n 1) Profile form (and e-mail address?)\n 2) Ticket type\n 3) Remaining products\n 4) Mark registration as complete\n\n Returns:\n render: Renders ``registrasion/guided_registration.html``,\n with the following data::\n\n {\n \"current_step\": int(), # The current step in the\n # registration\n \"sections\": sections, # A list of\n # GuidedRegistrationSections\n \"title\": str(), # The title of the page\n \"total_steps\": int(), # The total number of steps\n }"}
{"_id": "q_13875", "text": "View for editing an attendee's profile\n\n The user must be logged in to edit their profile.\n\n Returns:\n redirect or render:\n In the case of a ``POST`` request, it'll redirect to ``dashboard``,\n or otherwise, it will render ``registrasion/profile_form.html``\n with data::\n\n {\n \"form\": form, # Instance of ATTENDEE_PROFILE_FORM.\n }"}
{"_id": "q_13876", "text": "Handles a voucher form in the given request. Returns the voucher\n form instance, and whether the voucher code was handled."}
{"_id": "q_13877", "text": "Redirects to an invoice for the attendee that matches the given access\n code, if any.\n\n If the attendee has multiple invoices, we use the following tie-break:\n\n - If there's an unpaid invoice, show that, otherwise\n - If there's a paid invoice, show the most recent one, otherwise\n - Show the most recent invoid of all\n\n Arguments:\n\n access_code (castable to int): The access code for the user whose\n invoice you want to see.\n\n Returns:\n redirect:\n Redirect to the selected invoice for that user.\n\n Raises:\n Http404: If the user has no invoices."}
{"_id": "q_13878", "text": "Displays an invoice.\n\n This view is not authenticated, but it will only allow access to either:\n the user the invoice belongs to; staff; or a request made with the correct\n access code.\n\n Arguments:\n\n invoice_id (castable to int): The invoice_id for the invoice you want\n to view.\n\n access_code (Optional[str]): The access code for the user who owns\n this invoice.\n\n Returns:\n render:\n Renders ``registrasion/invoice.html``, with the following\n data::\n\n {\n \"invoice\": models.commerce.Invoice(),\n }\n\n Raises:\n Http404: if the current user cannot view this invoice and the correct\n access_code is not provided."}
{"_id": "q_13879", "text": "Marks an invoice as refunded and requests a credit note for the\n full amount paid against the invoice.\n\n This view requires a login, and the logged in user must be staff.\n\n Arguments:\n invoice_id (castable to int): The ID of the invoice to refund.\n\n Returns:\n redirect:\n Redirects to ``invoice``."}
{"_id": "q_13880", "text": "Displays a credit note.\n\n If ``request`` is a ``POST`` request, forms for applying or refunding\n a credit note will be processed.\n\n This view requires a login, and the logged in user must be staff.\n\n Arguments:\n note_id (castable to int): The ID of the credit note to view.\n\n Returns:\n render or redirect:\n If the \"apply to invoice\" form is correctly processed, redirect to\n that invoice, otherwise, render ``registration/credit_note.html``\n with the following data::\n\n {\n \"credit_note\": models.commerce.CreditNote(),\n \"apply_form\": form, # A form for applying credit note\n # to an invoice.\n \"refund_form\": form, # A form for applying a *manual*\n # refund of the credit note.\n \"cancellation_fee_form\" : form, # A form for generating an\n # invoice with a\n # cancellation fee\n }"}
{"_id": "q_13881", "text": "Allows staff to amend a user's current registration cart, and etc etc."}
{"_id": "q_13882", "text": "Allows staff to extend the reservation on a given user's cart."}
{"_id": "q_13883", "text": "Allows staff to send emails to users based on their invoice status."}
{"_id": "q_13884", "text": "Either displays a form containing a list of users with badges to\n render, or returns a .zip file containing their badges."}
{"_id": "q_13885", "text": "Renders a single user's badge."}
{"_id": "q_13886", "text": "Annotates the queryset with a usage count for that discount claus\n by the given user."}
{"_id": "q_13887", "text": "Returns a list of all of the products that are available per\n flag conditions from the given categories."}
{"_id": "q_13888", "text": "Split a compound's combined formula and phase into separate strings for\n the formula and phase.\n\n :param compound_string: Formula and phase of a chemical compound, e.g.\n 'SiO2[S1]'.\n\n :returns: Formula of chemical compound.\n :returns: Phase of chemical compound."}
{"_id": "q_13889", "text": "Writes a compound to an auxi file at the specified directory.\n\n :param dir: The directory.\n :param compound: The compound."}
{"_id": "q_13890", "text": "Load all the thermochemical data factsage files located at a path.\n\n :param path: Path at which the data files are located."}
{"_id": "q_13891", "text": "Load all the thermochemical data auxi files located at a path.\n\n :param path: Path at which the data files are located."}
{"_id": "q_13892", "text": "List all compounds that are currently loaded in the thermo module, and\n their phases."}
{"_id": "q_13893", "text": "Calculate the heat capacity of the compound for the specified temperature\n and mass.\n\n :param compound_string: Formula and phase of chemical compound, e.g.\n 'Fe2O3[S1]'.\n :param T: [\u00b0C] temperature\n :param mass: [kg]\n\n :returns: [kWh/K] Heat capacity."}
{"_id": "q_13894", "text": "Calculate the heat capacity of the compound phase.\n\n :param T: [K] temperature\n\n :returns: [J/mol/K] Heat capacity."}
{"_id": "q_13895", "text": "Calculate the portion of entropy of the compound phase covered by this\n Cp record.\n\n :param T: [K] temperature\n\n :returns: Entropy. [J/mol/K]"}
{"_id": "q_13896", "text": "Calculate the enthalpy of the compound phase at the specified\n temperature.\n\n :param T: [K] temperature\n\n :returns: [J/mol] The enthalpy of the compound phase."}
{"_id": "q_13897", "text": "Calculate the phase's magnetic contribution to enthalpy at the\n specified temperature.\n\n :param T: [K] temperature\n\n :returns: [J/mol] The magnetic enthalpy of the compound phase.\n\n Dinsdale, A. T. (1991). SGTE data for pure elements. Calphad, 15(4),\n 317\u2013425. http://doi.org/10.1016/0364-5916(91)90030-N"}
{"_id": "q_13898", "text": "Calculate the entropy of the compound phase at the specified\n temperature.\n\n :param T: [K] temperature\n\n :returns: [J/mol/K] The entropy of the compound phase."}
{"_id": "q_13899", "text": "Calculate the phase's magnetic contribution to entropy at the\n specified temperature.\n\n :param T: [K] temperature\n\n :returns: [J/mol/K] The magnetic entropy of the compound phase.\n\n Dinsdale, A. T. (1991). SGTE data for pure elements. Calphad, 15(4),\n 317\u2013425. http://doi.org/10.1016/0364-5916(91)90030-N"}
{"_id": "q_13900", "text": "Calculate the phase's magnetic contribution to Gibbs energy at the\n specified temperature.\n\n :param T: [K] temperature\n\n :returns: [J/mol] The magnetic Gibbs energy of the compound phase.\n\n Dinsdale, A. T. (1991). SGTE data for pure elements. Calphad, 15(4),\n 317\u2013425. http://doi.org/10.1016/0364-5916(91)90030-N"}
{"_id": "q_13901", "text": "Calculate the enthalpy of a phase of the compound at a specified\n temperature.\n\n :param phase: A phase of the compound, e.g. 'S', 'L', 'G'.\n :param T: [K] temperature\n\n :returns: [J/mol] Enthalpy."}
{"_id": "q_13902", "text": "Applies the total value of this credit note to the specified\n invoice. If this credit note overpays the invoice, a new credit note\n containing the residual value will be created.\n\n Raises ValidationError if the given invoice is not allowed to be\n paid."}
{"_id": "q_13903", "text": "Generates an invoice with a cancellation fee, and applies\n credit to the invoice.\n\n percentage (Decimal): The percentage of the credit note to turn into\n a cancellation fee. Must be 0 <= percentage <= 100."}
{"_id": "q_13904", "text": "Create a polynomial model to describe the specified property based on the\n specified data set, and save it to a .json file.\n\n :param name: material name.\n :param symbol: property symbol.\n :param degree: polynomial degree.\n :param ds: the source data set.\n :param dss: dictionary of all datasets."}
{"_id": "q_13905", "text": "Generates an access code for users' payments as well as their\n fulfilment code for check-in.\n The access code will 4 characters long, which allows for 1,500,625\n unique codes, which really should be enough for anyone."}
{"_id": "q_13906", "text": "Generates an invoice for the given cart."}
{"_id": "q_13907", "text": "Applies the user's credit notes to the given invoice on creation."}
{"_id": "q_13908", "text": "Returns true if the accessing user is allowed to view this invoice,\n or if the given access code matches this invoice's user's access code."}
{"_id": "q_13909", "text": "Refreshes the underlying invoice and cart objects."}
{"_id": "q_13910", "text": "Passes cleanly if we're allowed to pay, otherwise raise\n a ValidationError."}
{"_id": "q_13911", "text": "Updates the status of this invoice based upon the total\n payments."}
{"_id": "q_13912", "text": "Marks the invoice as paid, and updates the attached cart if\n necessary."}
{"_id": "q_13913", "text": "Returns true if there is no cart, or if the revision of this\n invoice matches the current revision of the cart."}
{"_id": "q_13914", "text": "Voids this invoice if the attached cart is no longer valid because\n the cart revision has changed, or the reservations have expired."}
{"_id": "q_13915", "text": "Voids the invoice if it is valid to do so."}
{"_id": "q_13916", "text": "Refunds the invoice by generating a CreditNote for the value of\n all of the payments against the cart.\n\n The invoice is marked as refunded, and the underlying cart is marked\n as released."}
{"_id": "q_13917", "text": "Convert an RGB color representation to a HEX color representation.\n\n (r, g, b) :: r -> [0, 255]\n g -> [0, 255]\n b -> [0, 255]\n\n :param rgb: A tuple of three numeric values corresponding to the red, green, and blue value.\n :return: HEX representation of the input RGB value.\n :rtype: str"}
{"_id": "q_13918", "text": "Convert an RGB color representation to a YIQ color representation.\n\n (r, g, b) :: r -> [0, 255]\n g -> [0, 255]\n b -> [0, 255]\n\n :param rgb: A tuple of three numeric values corresponding to the red, green, and blue value.\n :return: YIQ representation of the input RGB value.\n :rtype: tuple"}
{"_id": "q_13919", "text": "Convert an RGB color representation to an HSV color representation.\n\n (r, g, b) :: r -> [0, 255]\n g -> [0, 255]\n b -> [0, 255]\n\n :param rgb: A tuple of three numeric values corresponding to the red, green, and blue value.\n :return: HSV representation of the input RGB value.\n :rtype: tuple"}
{"_id": "q_13920", "text": "Convert a HEX color representation to an RGB color representation.\n\n hex :: hex -> [000000, FFFFFF]\n\n :param _hex: The 3- or 6-char hexadecimal string representing the color value.\n :return: RGB representation of the input HEX value.\n :rtype: tuple"}
{"_id": "q_13921", "text": "Convert a YIQ color representation to an RGB color representation.\n\n (y, i, q) :: y -> [0, 1]\n i -> [-0.5957, 0.5957]\n q -> [-0.5226, 0.5226]\n\n :param yiq: A tuple of three numeric values corresponding to the luma and chrominance.\n :return: RGB representation of the input YIQ value.\n :rtype: tuple"}
{"_id": "q_13922", "text": "Convert an HSV color representation to an RGB color representation.\n\n (h, s, v) :: h -> [0, 360)\n s -> [0, 1]\n v -> [0, 1]\n\n :param hsv: A tuple of three numeric values corresponding to the hue, saturation, and value.\n :return: RGB representation of the input HSV value.\n :rtype: tuple"}
{"_id": "q_13923", "text": "Given a start color, end color, and a number of steps, returns a list of colors which represent a 'scale' between\n the start and end color.\n\n :param start_color: The color starting the run\n :param end_color: The color ending the run\n :param step_count: The number of colors to have between the start and end color\n :param inclusive: Flag determining whether to include start and end values in run (default True)\n :param to_color: Flag indicating return values should be Color objects (default True)\n :return: List of colors between the start and end color\n :rtype: list"}
{"_id": "q_13924", "text": "Get key value, return default if key doesn't exist"}
{"_id": "q_13925", "text": "Print file fields to standard output."}
{"_id": "q_13926", "text": "Download a file.\n\n :param field: file field to download\n :type field: string\n :rtype: a file handle"}
{"_id": "q_13927", "text": "POST JSON data object to server"}
{"_id": "q_13928", "text": "Upload files and data objects.\n\n :param project_id: ObjectId of Genesis project\n :type project_id: string\n :param processor_name: Processor object name\n :type processor_name: string\n :param fields: Processor field-value pairs\n :type fields: args\n :rtype: HTTP Response object"}
{"_id": "q_13929", "text": "Download files of data objects.\n\n :param data_objects: Data object ids\n :type data_objects: list of UUID strings\n :param field: Download field name\n :type field: string\n :rtype: generator of requests.Response objects"}
{"_id": "q_13930", "text": "Gets the subclasses of a class."}
{"_id": "q_13931", "text": "Returns repository and project."}
{"_id": "q_13932", "text": "for each variant, yields evidence and associated phenotypes, both current and suggested"}
{"_id": "q_13933", "text": "Search the cache for variants matching provided coordinates using the corresponding search mode.\n\n :param coordinate_query: A civic CoordinateQuery object\n start: the genomic start coordinate of the query\n stop: the genomic end coordinate of the query\n chr: the GRCh37 chromosome of the query (e.g. \"7\", \"X\")\n alt: the alternate allele at the coordinate [optional]\n\n :param search_mode: ['any', 'include_smaller', 'include_larger', 'exact']\n any: any overlap between a query and a variant is a match\n include_smaller: variants must fit within the coordinates of the query\n include_larger: variants must encompass the coordinates of the query\n exact: variants must match coordinates precisely, as well as alternate\n allele, if provided\n search_mode is 'exact' by default\n\n :return: Returns a list of variant hashes matching the coordinates and search_mode"}
{"_id": "q_13934", "text": "An interator to search the cache for variants matching the set of sorted coordinates and yield\n matches corresponding to the search mode.\n\n :param sorted_queries: A list of civic CoordinateQuery objects, sorted by coordinate.\n start: the genomic start coordinate of the query\n stop: the genomic end coordinate of the query\n chr: the GRCh37 chromosome of the query (e.g. \"7\", \"X\")\n alt: the alternate allele at the coordinate [optional]\n\n :param search_mode: ['any', 'include_smaller', 'include_larger', 'exact']\n any: any overlap between a query and a variant is a match\n include_smaller: variants must fit within the coordinates of the query\n include_larger: variants must encompass the coordinates of the query\n exact: variants must match coordinates precisely, as well as alternate\n allele, if provided\n search_mode is 'exact' by default\n\n :yield: Yields (query, match) tuples for each identified match"}
{"_id": "q_13935", "text": "Returns a unique list of seq"}
{"_id": "q_13936", "text": "Connects to Github and Asana and authenticates via OAuth."}
{"_id": "q_13937", "text": "Given a list of values and names, accepts the index value or name."}
{"_id": "q_13938", "text": "Returns issue data from local data.\n\n Args:\n issue:\n `int`. Github issue number.\n namespace:\n `str`. Namespace for storing this issue."}
{"_id": "q_13939", "text": "Retrieves a task from asana."}
{"_id": "q_13940", "text": "Save data."}
{"_id": "q_13941", "text": "Applies a setting value to a key, if the value is not `None`.\n\n Returns without prompting if either of the following:\n * `value` is not `None`\n * already present in the dictionary\n\n Args:\n prompt:\n May either be a string to prompt via `raw_input` or a\n method (callable) that returns the value.\n\n on_load:\n lambda. Value is passed through here after loaded.\n\n on_save:\n lambda. Value is saved as this value."}
{"_id": "q_13942", "text": "Decorator for retrying tasks with special cases."}
{"_id": "q_13943", "text": "Waits until queue is empty."}
{"_id": "q_13944", "text": "Creates a task"}
{"_id": "q_13945", "text": "Returns formatting for the tasks section of asana."}
{"_id": "q_13946", "text": "Creates a missing task."}
{"_id": "q_13947", "text": "Return a list of data types."}
{"_id": "q_13948", "text": "Send string to module level log\n\n Args:\n logstr (str): string to print.\n priority (int): priority, supports 3 (default) and 4 (special)."}
{"_id": "q_13949", "text": "Required initialization call, wraps pyserial constructor."}
{"_id": "q_13950", "text": "Optional polling loop control\n\n Args:\n max_waits (int): waits\n wait_sleep (int): ms per wait"}
{"_id": "q_13951", "text": "Use the serial block definitions in V3 and V4 to create one field list."}
{"_id": "q_13952", "text": "Simple since Time_Stamp query returned as JSON records.\n\n Args:\n timestamp (int): Epoch time in seconds.\n meter (str): 12 character meter address to query\n\n Returns:\n str: JSON rendered read records."}
{"_id": "q_13953", "text": "Simple wrap to calc legacy PF value\n\n Args:\n pf: meter power factor reading\n\n Returns:\n int: legacy push pf"}
{"_id": "q_13954", "text": "Move data from raw tuple into scaled and conveted values.\n\n Args:\n contents (tuple): Breakout of passed block from unpackStruct().\n def_buf (): Read buffer destination.\n kwh_scale (int): :class:`~ekmmeters.ScaleKWH` as int, from Field.kWhScale`\n\n Returns:\n bool: True on completion."}
{"_id": "q_13955", "text": "Internal read CRC wrapper.\n\n Args:\n raw_read (str): Bytes with implicit string cast from serial read\n def_buf (SerialBlock): Populated read buffer.\n\n Returns:\n bool: True if passed CRC equals calculated CRC."}
{"_id": "q_13956", "text": "Get the months tariff SerialBlock for meter.\n\n Args:\n direction (int): A :class:`~ekmmeters.ReadMonths` value.\n\n Returns:\n SerialBlock: Requested months tariffs buffer."}
{"_id": "q_13957", "text": "Serial call to set CT ratio for attached inductive pickup.\n\n Args:\n new_ct (int): A :class:`~ekmmeters.CTRatio` value, a legal amperage setting.\n password (str): Optional password.\n\n Returns:\n bool: True on completion with ACK."}
{"_id": "q_13958", "text": "Assign one schedule tariff period to meter bufffer.\n\n Args:\n schedule (int): A :class:`~ekmmeters.Schedules` value or in range(Extents.Schedules).\n tariff (int): :class:`~ekmmeters.Tariffs` value or in range(Extents.Tariffs).\n hour (int): Hour from 0-23.\n minute (int): Minute from 0-59.\n tariff (int): Rate value.\n\n Returns:\n bool: True on completed assignment."}
{"_id": "q_13959", "text": "Serial call to read schedule tariffs buffer\n\n Args:\n tableset (int): :class:`~ekmmeters.ReadSchedules` buffer to return.\n\n Returns:\n bool: True on completion and ACK."}
{"_id": "q_13960", "text": "Read a single schedule tariff from meter object buffer.\n\n Args:\n schedule (int): A :class:`~ekmmeters.Schedules` value or in range(Extent.Schedules).\n tariff (int): A :class:`~ekmmeters.Tariffs` value or in range(Extent.Tariffs).\n\n Returns:\n bool: True on completion."}
{"_id": "q_13961", "text": "Extract the tariff for a single month from the meter object buffer.\n\n Args:\n month (int): A :class:`~ekmmeters.Months` value or range(Extents.Months).\n\n Returns:\n tuple: The eight tariff period totals for month. The return tuple breaks out as follows:\n\n ================= ======================================\n kWh_Tariff_1 kWh for tariff period 1 over month.\n kWh_Tariff_2 kWh for tariff period 2 over month\n kWh_Tariff_3 kWh for tariff period 3 over month\n kWh_Tariff_4 kWh for tariff period 4 over month\n kWh_Tot Total kWh over requested month\n Rev_kWh_Tariff_1 Rev kWh for tariff period 1 over month\n Rev_kWh_Tariff_3 Rev kWh for tariff period 2 over month\n Rev_kWh_Tariff_3 Rev kWh for tariff period 3 over month\n Rev_kWh_Tariff_4 Rev kWh for tariff period 4 over month\n Rev_kWh_Tot Total Rev kWh over requested month\n ================= ======================================"}
{"_id": "q_13962", "text": "Serial call to read holiday dates into meter object buffer.\n\n Returns:\n bool: True on completion."}
{"_id": "q_13963", "text": "Recommended call to read all meter settings at once.\n\n Returns:\n bool: True if all subsequent serial calls completed with ACK."}
{"_id": "q_13964", "text": "Internal method to set the command result string.\n\n Args:\n msg (str): Message built during command."}
{"_id": "q_13965", "text": "Fire update method in all attached observers in order of attachment."}
{"_id": "q_13966", "text": "Combined A and B read for V4 meter.\n\n Args:\n send_terminator (bool): Send termination string at end of read.\n\n Returns:\n bool: True on completion."}
{"_id": "q_13967", "text": "Issue an A read on V4 meter.\n\n Returns:\n bool: True if CRC match at end of call."}
{"_id": "q_13968", "text": "Munge A and B reads into single serial block with only unique fields."}
{"_id": "q_13969", "text": "Write calculated fields for read buffer."}
{"_id": "q_13970", "text": "Single call wrapper for LCD set.\"\n\n Wraps :func:`~ekmmeters.V4Meter.setLcd` and associated init and add methods.\n\n Args:\n display_list (list): List composed of :class:`~ekmmeters.LCDItems`\n password (str): Optional password.\n\n Returns:\n bool: Passthrough from :func:`~ekmmeters.V4Meter.setLcd`"}
{"_id": "q_13971", "text": "Serial call to set relay.\n\n Args:\n seconds (int): Seconds to hold, ero is hold forever. See :class:`~ekmmeters.RelayInterval`.\n relay (int): Selected relay, see :class:`~ekmmeters.Relay`.\n status (int): Status to set, see :class:`~ekmmeters.RelayState`\n password (str): Optional password\n\n Returns:\n bool: True on completion and ACK."}
{"_id": "q_13972", "text": "Send termination string to implicit current meter."}
{"_id": "q_13973", "text": "Serial call to set pulse input ratio on a line.\n\n Args:\n line_in (int): Member of :class:`~ekmmeters.Pulse`\n new_cnst (int): New pulse input ratio\n password (str): Optional password\n\n Returns:"}
{"_id": "q_13974", "text": "Serial call to zero resettable kWh registers.\n\n Args:\n password (str): Optional password.\n\n Returns:\n bool: True on completion and ACK."}
{"_id": "q_13975", "text": "Recursively iterate over all DictField sub-fields.\n\n :param fields: Field instance (e.g. input)\n :type fields: dict\n :param schema: Schema instance (e.g. input_schema)\n :type schema: dict"}
{"_id": "q_13976", "text": "Recursively iterate over all schema sub-fields.\n\n :param fields: Field instance (e.g. input)\n :type fields: dict\n :param schema: Schema instance (e.g. input_schema)\n :type schema: dict\n :path schema: Field path\n :path schema: string"}
{"_id": "q_13977", "text": "Random paragraphs."}
{"_id": "q_13978", "text": "Random text.\n\n If `length` is present the text will be exactly this chars long. Else the\n text will be something between `at_least` and `at_most` chars long."}
{"_id": "q_13979", "text": "Color some text in the given ANSI color."}
{"_id": "q_13980", "text": "Return a summary of the results."}
{"_id": "q_13981", "text": "Parse some arguments using the parser."}
{"_id": "q_13982", "text": "Setup the environment for an example run."}
{"_id": "q_13983", "text": "Run in transform mode."}
{"_id": "q_13984", "text": "Transform a describe node into a ``TestCase``.\n\n ``node`` is the node object.\n ``describes`` is the name of the object being described.\n ``context_variable`` is the name bound in the context manager (usually\n \"it\")."}
{"_id": "q_13985", "text": "Transform the body of an ``ExampleGroup``.\n\n ``body`` is the body.\n ``group_var`` is the name bound to the example group in the context\n manager (usually \"it\")."}
{"_id": "q_13986", "text": "Transform an example node into a test method.\n\n Returns the unchanged node if it wasn't an ``Example``.\n\n ``node`` is the node object.\n ``name`` is the name of the example being described.\n ``context_variable`` is the name bound in the context manager (usually\n \"test\").\n ``group_variable`` is the name bound in the surrounding example group's\n context manager (usually \"it\")."}
{"_id": "q_13987", "text": "Transform the body of an ``Example`` into the body of a method.\n\n Replaces instances of ``context_variable`` to refer to ``self``.\n\n ``body`` is the body.\n ``context_variable`` is the name bound in the surrounding context\n manager to the example (usually \"test\")."}
{"_id": "q_13988", "text": "Register the path hook."}
{"_id": "q_13989", "text": "Transform the source code, then return the code object."}
{"_id": "q_13990", "text": "Load a spec from either a file path or a fully qualified name."}
{"_id": "q_13991", "text": "Load a spec from a given path, discovering specs if a directory is given."}
{"_id": "q_13992", "text": "Discover all of the specs recursively inside ``path``.\n\n Successively yields the (full) relative paths to each spec."}
{"_id": "q_13993", "text": "Construct a function that checks a directory for messages\n\n The function checks for new messages and\n calls the appropriate method on the receiver. Sent messages are\n deleted.\n\n :param location: string, the directory to monitor\n :param receiver: IEventReceiver\n :returns: a function with no parameters"}
{"_id": "q_13994", "text": "Create an A-HREF tag that points to another page usable in paginate."}
{"_id": "q_13995", "text": "Add a process.\n\n :param places: a Places instance\n :param name: string, the logical name of the process\n :param cmd: string, executable\n :param args: list of strings, command-line arguments\n :param env: dictionary mapping strings to strings\n (will be environment in subprocess)\n :param uid: integer, uid to run the new process as\n :param gid: integer, gid to run the new process as\n :param extras: a dictionary with additional parameters\n :param env_inherit: a list of environment variables to inherit\n :returns: None"}
{"_id": "q_13996", "text": "Restart a process\n\n :params places: a Places instance\n :params name: string, the logical name of the process\n :returns: None"}
{"_id": "q_13997", "text": "Return a service which monitors processes based on directory contents\n\n Construct and return a service that, when started, will run processes\n based on the contents of the 'config' directory, restarting them\n if file contents change and stopping them if the file is removed.\n\n It also listens for restart and restart-all messages on the 'messages'\n directory.\n\n :param config: string, location of configuration directory\n :param messages: string, location of messages directory\n :param freq: number, frequency to check for new messages and configuration\n updates\n :param pidDir: {twisted.python.filepath.FilePath} or None,\n location to keep pid files\n :param reactor: something implementing the interfaces\n {twisted.internet.interfaces.IReactorTime} and\n {twisted.internet.interfaces.IReactorProcess} and\n :returns: service, {twisted.application.interfaces.IService}"}
{"_id": "q_13998", "text": "Return a service based on parsed command-line options\n\n :param opt: dict-like object. Relevant keys are config, messages,\n pid, frequency, threshold, killtime, minrestartdelay\n and maxrestartdelay\n :returns: service, {twisted.application.interfaces.IService}"}
{"_id": "q_13999", "text": "Removes all expired nodes from the nodelist. If a set of node_ids is\n passed in, those ids are checked to ensure they haven't been refreshed\n prior to a lock being acquired.\n\n Should only be run with a lock.\n\n :param list node_ids: optional, a list of node_ids to remove. They\n will be verified to ensure they haven't been refreshed."}
{"_id": "q_14000", "text": "Removes a particular node from the nodelist.\n\n :param string node_id: optional, the process id of the node to remove"}
{"_id": "q_14001", "text": "Returns the time a particular node has been last refreshed.\n\n :param string node_id: optional, the connection id of the node to retrieve\n\n :rtype: int\n :returns: Returns a unix timestamp if it exists, otherwise None"}
{"_id": "q_14002", "text": "Returns all nodes in the hash with the time they were last refreshed\n as a dictionary.\n\n :rtype: dict(string, int)\n :returns: A dictionary of strings and corresponding timestamps"}
{"_id": "q_14003", "text": "Update the session for this node. Specifically; lock on the reflist,\n then update the time this node acquired the reference.\n\n This method should only be called while the reference is locked."}
{"_id": "q_14004", "text": "Set a devices state."}
{"_id": "q_14005", "text": "Pull a water heater's modes from the API."}
{"_id": "q_14006", "text": "Pull a water heater's usage report from the API."}
{"_id": "q_14007", "text": "Pull the accounts locations."}
{"_id": "q_14008", "text": "Pull the accounts vacations."}
{"_id": "q_14009", "text": "Delete a vacation by ID."}
{"_id": "q_14010", "text": "Authenticate with the API and return an authentication token."}
{"_id": "q_14011", "text": "Return a list of water heater devices.\n\n Parses the response from the locations endpoint in to a pyeconet.WaterHeater."}
{"_id": "q_14012", "text": "Returns a list of tokens interleaved with the delimiter."}
{"_id": "q_14013", "text": "check which processes need to be restarted\n\n :params path: a twisted.python.filepath.FilePath with configurations\n :params start: when the checker started running\n :params now: current time\n :returns: list of strings"}
{"_id": "q_14014", "text": "Merge the failure message from another status into this one.\n\n Whichever status represents parsing that has gone the farthest is\n retained. If both statuses have gone the same distance, then the\n expected values from both are retained.\n\n Args:\n status: The status to merge into this one.\n\n Returns:\n This ``Status`` which may have ``farthest`` and ``expected``\n updated accordingly."}
{"_id": "q_14015", "text": "Query to get the value."}
{"_id": "q_14016", "text": "Convert a function taking a single iterable argument into a function taking multiple arguments.\n\n Args:\n f: Any function taking a single iterable argument\n\n Returns:\n A function that accepts multiple arguments. Each argument of this function is passed as an element of an\n iterable to ``f``.\n\n Example:\n $ def f(a):\n $ return a[0] + a[1] + a[2]\n $\n $ f([1, 2, 3]) # 6\n $ g = unsplat(f)\n $ g(1, 2, 3) # 6"}
{"_id": "q_14017", "text": "Wrap all sqlalchemy model in settings."}
{"_id": "q_14018", "text": "Optionally match a parser.\n\n An ``OptionalParser`` attempts to match ``parser``. If it succeeds, it\n returns a list of length one with the value returned by the parser as the\n only element. If it fails, it returns an empty list.\n\n Args:\n parser: Parser or literal"}
{"_id": "q_14019", "text": "Match a parser one or more times repeatedly.\n\n This matches ``parser`` multiple times in a row. If it matches as least\n once, it returns a list of values from each time ``parser`` matched. If it\n does not match ``parser`` at all, it fails.\n\n Args:\n parser: Parser or literal"}
{"_id": "q_14020", "text": "Match a parser one or more times separated by another parser.\n\n This matches repeated sequences of ``parser`` separated by ``separator``.\n If there is at least one match, a list containing the values of the\n ``parser`` matches is returned. The values from ``separator`` are discarded.\n If it does not match ``parser`` at all, it fails.\n\n Args:\n parser: Parser or literal\n separator: Parser or literal"}
{"_id": "q_14021", "text": "Match a parser zero or more times separated by another parser.\n\n This matches repeated sequences of ``parser`` separated by ``separator``. A\n list is returned containing the value from each match of ``parser``. The\n values from ``separator`` are discarded. If there are no matches, an empty\n list is returned.\n\n Args:\n parser: Parser or literal\n separator: Parser or literal"}
{"_id": "q_14022", "text": "Check all processes"}
{"_id": "q_14023", "text": "Discard data and cancel all calls.\n\n Instance cannot be reused after closing."}
{"_id": "q_14024", "text": "Check the state of HTTP"}
{"_id": "q_14025", "text": "Wrap a service in a MultiService with a heart"}
{"_id": "q_14026", "text": "Freeze and shrink the graph based on a checkpoint and the output node names."}
{"_id": "q_14027", "text": "Save a small version of the graph based on a checkpoint and the output node names."}
{"_id": "q_14028", "text": "Return a TensorFlow saver from a checkpoint containing the metagraph."}
{"_id": "q_14029", "text": "Parse the tag, instantiate the class.\n\n :type parser: django.template.base.Parser\n :type token: django.template.base.Token"}
{"_id": "q_14030", "text": "Validate the syntax of the template tag."}
{"_id": "q_14031", "text": "Return the context data for the included template."}
{"_id": "q_14032", "text": "Parse the \"as var\" syntax."}
{"_id": "q_14033", "text": "Return the context data for the inclusion tag.\n\n Returns ``{'value': self.get_value(parent_context, *tag_args, **tag_kwargs)}`` by default."}
{"_id": "q_14034", "text": "Create a TensorFlow Session from a Caffe model."}
{"_id": "q_14035", "text": "Freeze and shrink the graph based on a Caffe model, the input tensors and the output node names."}
{"_id": "q_14036", "text": "Send X10 command to ??? unit.\n\n @param house_code (A-P) - example='A'\n @param unit_number (1-16)- example=1 (or None to impact entire house code)\n @param state - Mochad command/state, See\n https://sourceforge.net/p/mochad/code/ci/master/tree/README\n examples=OFF, 'OFF', 'ON', ALL_OFF, 'all_units_off', 'xdim 128', etc.\n\n Examples:\n x10_command('A', '1', ON)\n x10_command('A', '1', OFF)\n x10_command('A', '1', 'ON')\n x10_command('A', '1', 'OFF')\n x10_command('A', None, ON)\n x10_command('A', None, OFF)\n x10_command('A', None, 'all_lights_off')\n x10_command('A', None, 'all_units_off')\n x10_command('A', None, ALL_OFF)\n x10_command('A', None, 'all_lights_on')\n x10_command('A', 1, 'xdim 128')"}
{"_id": "q_14037", "text": "Generate an appropriate parser.\n\n :returns: an argument parser\n :rtype: `ArgumentParser`"}
{"_id": "q_14038", "text": "Get the pylint command for these arguments.\n\n :param `Namespace` namespace: the namespace"}
{"_id": "q_14039", "text": "Make a sequence into rows of num_columns columns.\n\n\t>>> tuple(make_rows(2, [1, 2, 3, 4, 5]))\n\t((1, 4), (2, 5), (3, None))\n\t>>> tuple(make_rows(3, [1, 2, 3, 4, 5]))\n\t((1, 3, 5), (2, 4, None))"}
{"_id": "q_14040", "text": "Take a sequence and break it up into chunks of the specified size.\n\tThe last chunk may be smaller than size.\n\n\tThis works very similar to grouper_nofill, except\n\tit works with strings as well.\n\n\t>>> tuple(grouper_nofill_str(3, 'foobarbaz'))\n\t('foo', 'bar', 'baz')\n\n\tYou can still use it on non-strings too if you like.\n\n\t>>> tuple(grouper_nofill_str(42, []))\n\t()\n\n\t>>> tuple(grouper_nofill_str(3, list(range(10))))\n\t([0, 1, 2], [3, 4, 5], [6, 7, 8], [9])"}
{"_id": "q_14041", "text": "Yield every other item from the iterable\n\n\t>>> ' '.join(every_other('abcdefg'))\n\t'a c e g'"}
{"_id": "q_14042", "text": "Given an iterable with items that may come in as sequential duplicates,\n\tremove those duplicates.\n\n\tUnlike unique_justseen, this function does not remove triplicates.\n\n\t>>> ' '.join(remove_duplicates('abcaabbccaaabbbcccbcbc'))\n\t'a b c a b c a a b b c c b c b c'\n\t>>> ' '.join(remove_duplicates('aaaabbbbb'))\n\t'a a b b b'"}
{"_id": "q_14043", "text": "Get the next value from an iterable, but also return an iterable\n\tthat will subsequently return that value and the rest of the\n\toriginal iterable.\n\n\t>>> l = iter([1,2,3])\n\t>>> val, l = peek(l)\n\t>>> val\n\t1\n\t>>> list(l)\n\t[1, 2, 3]"}
{"_id": "q_14044", "text": "Yield duplicate items from any number of sorted iterables of items\n\n\t>>> items_a = [1, 2, 3]\n\t>>> items_b = [0, 3, 4, 5, 6]\n\t>>> list(duplicates(items_a, items_b))\n\t[(3, 3)]\n\n\tIt won't behave as you expect if the iterables aren't ordered\n\n\t>>> items_b.append(1)\n\t>>> list(duplicates(items_a, items_b))\n\t[(3, 3)]\n\t>>> list(duplicates(items_a, sorted(items_b)))\n\t[(1, 1), (3, 3)]\n\n\tThis function is most interesting when it's operating on a key\n\tof more complex objects.\n\n\t>>> items_a = [dict(email='joe@example.com', id=1)]\n\t>>> items_b = [dict(email='joe@example.com', id=2), dict(email='other')]\n\t>>> dupe, = duplicates(items_a, items_b, key=operator.itemgetter('email'))\n\t>>> dupe[0]['email'] == dupe[1]['email'] == 'joe@example.com'\n\tTrue\n\t>>> dupe[0]['id']\n\t1\n\t>>> dupe[1]['id']\n\t2"}
{"_id": "q_14045", "text": "Given a partition_dict result, if the partition missed, swap\n\tthe before and after."}
{"_id": "q_14046", "text": "Run through the sequence until n queues are created and return\n\t\tthem. If fewer are created, return those plus empty iterables to\n\t\tcompensate."}
{"_id": "q_14047", "text": "Parse the remainder of the token, to find a \"as varname\" statement.\n\n :param parser: The \"parser\" object that ``@register.tag`` provides.\n :type parser: :class:`~django.template.Parser`\n :param token: The \"token\" object that ``@register.tag`` provides.\n :type token: :class:`~django.template.Token` or splitted bits"}
{"_id": "q_14048", "text": "Generate the correct function for a variant signature.\n\n :returns: function that returns an appropriate value\n :rtype: ((str * object) or list)-> object"}
{"_id": "q_14049", "text": "Generate the correct function for an array signature.\n\n :param toks: the list of parsed tokens\n :returns: function that returns an Array or Dictionary value\n :rtype: ((or list dict) -> ((or Array Dictionary) * int)) * str"}
{"_id": "q_14050", "text": "Generate the correct function for a struct signature.\n\n :param toks: the list of parsed tokens\n :returns: function that returns an Array or Dictionary value\n :rtype: ((list or tuple) -> (Struct * int)) * str"}
{"_id": "q_14051", "text": "Handle a base case.\n\n :param type klass: the class constructor\n :param str symbol: the type code"}
{"_id": "q_14052", "text": "Decorator to register class tags\n\n :param library: The template tag library, typically instantiated as ``register = Library()``.\n :type library: :class:`~django.template.Library`\n :param name: The name of the template tag\n :type name: str\n\n Example:\n\n .. code-block:: python\n\n @template_tag(register, 'my_tag')\n class MyTag(BaseNode):\n pass"}
{"_id": "q_14053", "text": "Get the signature of a dbus object.\n\n :param dbus_object: the object\n :type dbus_object: a dbus object\n :param bool unpack: if True, unpack from enclosing variant type\n :returns: the corresponding signature\n :rtype: str"}
{"_id": "q_14054", "text": "Enforces lower case options and option values where appropriate"}
{"_id": "q_14055", "text": "Converts string values to floats when appropriate"}
{"_id": "q_14056", "text": "Create fork and store it in current instance"}
{"_id": "q_14057", "text": "Convert list of dictionaries to dictionary of lists"}
{"_id": "q_14058", "text": "Subdivide list into N lists"}
{"_id": "q_14059", "text": "turns keyword pairs into path or filename \n \n if `into=='path'`, then keywords are separated by underscores, else keywords are used to create a directory hierarchy"}
{"_id": "q_14060", "text": "Decorator that logs function calls in their self.log"}
{"_id": "q_14061", "text": "Decorator that adds a runtime profile object to the output"}
{"_id": "q_14062", "text": "Decorator that prints memory and runtime information at each call of the function"}
{"_id": "q_14063", "text": "Declare abstract function. \n \n Requires function to be empty except for docstring describing semantics.\n To apply function, first argument must come with implementation of semantics."}
{"_id": "q_14064", "text": "Decorator that prints running time information at each call of the function"}
{"_id": "q_14065", "text": "A descendant is a child many steps down."}
{"_id": "q_14066", "text": "Verify if the file is already downloaded and complete. If they don't\n exist or are not complete, use homura download function to fetch\n files. Return a list with the path of the downloaded file and the size\n of the remote file."}
{"_id": "q_14067", "text": "Validate bands parameter."}
{"_id": "q_14068", "text": "Check scene name and whether remote file exists. Raises\n WrongSceneNameError if the scene name is wrong."}
{"_id": "q_14069", "text": "Download remote .tar.bz file."}
{"_id": "q_14070", "text": "Check whether sceneInfo is valid to download from AWS Storage."}
{"_id": "q_14071", "text": "Download each specified band and metadata."}
{"_id": "q_14072", "text": "Open an archive on a filesystem.\n\n This function tries to mimic the behaviour of `fs.open_fs` as closely\n as possible: it accepts either a FS URL or a filesystem instance, and\n will close all resources it had to open.\n\n Arguments:\n fs_url (FS or text_type): a FS URL, or a filesystem\n instance, where the archive file is located.\n archive (text_type): the path to the archive file on the\n given filesystem.\n\n Raises:\n `fs.opener._errors.Unsupported`: when the archive type is not supported\n (either the file extension is unknown or the opener requires unmet\n dependencies).\n\n Example:\n >>> from fs.archive import open_archive\n >>> with open_archive('mem://', 'test.tar.gz') as archive_fs:\n ... type(archive_fs)\n <class 'fs.archive.tarfs.TarFS'>\n\n Hint:\n This function finds the entry points defined in group\n ``fs.archive.open_archive``, using the names of the entry point\n as the registered extension."}
{"_id": "q_14073", "text": "Slugify a name in the ISO-9660 way.\n\n Example:\n >>> slugify('\u00e9patant')\n \"_patant\""}
{"_id": "q_14074", "text": "Increment an ISO name to avoid name collision.\n\n Example:\n >>> iso_name_increment('foo.txt')\n 'foo1.txt'\n >>> iso_name_increment('bar10')\n 'bar11'\n >>> iso_name_increment('bar99', max_length=5)\n 'ba100'"}
{"_id": "q_14075", "text": "Slugify a path, maintaining a map with the previously slugified paths.\n\n The path table is used to prevent slugified names from collisioning,\n using the `iso_name_increment` function to deduplicate slugs.\n\n Example:\n >>> path_table = {'/': '/'}\n >>> iso_path_slugify('/\u00e9bc.txt', path_table)\n '/_BC.TXT'\n >>> iso_path_slugify('/\u00e0bc.txt', path_table)\n '/_BC2.TXT'"}
{"_id": "q_14076", "text": "Get sqlite_master table information as a list of dictionaries.\n\n :return: sqlite_master table information.\n :rtype: list\n\n :Sample Code:\n .. code:: python\n\n from sqliteschema import SQLiteSchemaExtractor\n\n print(json.dumps(SQLiteSchemaExtractor(\"sample.sqlite\").fetch_sqlite_master(), indent=4))\n\n :Output:\n .. code-block:: json\n\n [\n {\n \"tbl_name\": \"sample_table\",\n \"sql\": \"CREATE TABLE 'sample_table' ('a' INTEGER, 'b' REAL, 'c' TEXT, 'd' REAL, 'e' TEXT)\",\n \"type\": \"table\",\n \"name\": \"sample_table\",\n \"rootpage\": 2\n },\n {\n \"tbl_name\": \"sample_table\",\n \"sql\": \"CREATE INDEX sample_table_a_index ON sample_table('a')\",\n \"type\": \"index\",\n \"name\": \"sample_table_a_index\",\n \"rootpage\": 3\n }\n ]"}
{"_id": "q_14077", "text": "Construct a contour generator from a curvilinear grid.\n\n Note\n ----\n This is an alias for the default constructor.\n\n Parameters\n ----------\n x : array_like\n x coordinates of each point in `z`. Must be the same size as `z`.\n y : array_like\n y coordinates of each point in `z`. Must be the same size as `z`.\n z : array_like\n The 2-dimensional curvilinear grid of data to compute\n contours for. Masked arrays are supported.\n formatter : callable\n A conversion function to convert from the internal `Matplotlib`_\n contour format to an external format. See :ref:`formatters` for\n more information.\n\n Returns\n -------\n : :class:`QuadContourGenerator`\n Initialized contour generator."}
{"_id": "q_14078", "text": "Construct a contour generator from a rectilinear grid.\n\n Parameters\n ----------\n x : array_like\n x coordinates of each column of `z`. Must be the same length as\n the number of columns in `z`. (len(x) == z.shape[1])\n y : array_like\n y coordinates of each row of `z`. Must be the same length as the\n number of columns in `z`. (len(y) == z.shape[0])\n z : array_like\n The 2-dimensional rectilinear grid of data to compute contours for.\n Masked arrays are supported.\n formatter : callable\n A conversion function to convert from the internal `Matplotlib`_\n contour format to an external format. See :ref:`formatters` for\n more information.\n\n Returns\n -------\n : :class:`QuadContourGenerator`\n Initialized contour generator."}
{"_id": "q_14079", "text": "Construct a contour generator from a uniform grid.\n\n NOTE\n ----\n The default `origin` and `step` values is equivalent to calling\n :meth:`matplotlib.axes.Axes.contour` with only the `z` argument.\n\n Parameters\n ----------\n z : array_like\n The 2-dimensional uniform grid of data to compute contours for.\n Masked arrays are supported.\n origin : (number.Number, number.Number)\n The (x, y) coordinate of data point `z[0,0]`.\n step : (number.Number, number.Number)\n The (x, y) distance between data points in `z`.\n formatter : callable\n A conversion function to convert from the internal `Matplotlib`_\n contour format to an external format. See :ref:`formatters` for\n more information.\n\n Returns\n -------\n : :class:`QuadContourGenerator`\n Initialized contour generator."}
{"_id": "q_14080", "text": "Apply selector to obj and return matching nodes.\n\n If only one node is found, return it, otherwise return a list of matches.\n Returns False on syntax error. None if no results found."}
{"_id": "q_14081", "text": "Accept a list of tokens. Returns matched nodes of self.obj."}
{"_id": "q_14082", "text": "Production for a full selector."}
{"_id": "q_14083", "text": "Find nodes in rhs having common parents in lhs."}
{"_id": "q_14084", "text": "Apply each validator in validators to each node in obj.\n\n Return each node in obj which matches all validators."}
{"_id": "q_14085", "text": "Get a unique token for usage in differentiating test runs that need to\n run in parallel."}
{"_id": "q_14086", "text": "Returns the url of the poll. If the poll has not been submitted yet,\n an empty string is returned instead."}
{"_id": "q_14087", "text": "Retrieves a poll from strawpoll.\n\n :param arg: Either the ID of the poll or its strawpoll url.\n :param request_policy: Overrides :attr:`API.requests_policy` for that \\\n request.\n :type request_policy: Optional[:class:`RequestsPolicy`]\n\n :raises HTTPException: Requesting the poll failed.\n\n :returns: A poll constructed with the requested data.\n :rtype: :class:`Poll`"}
{"_id": "q_14088", "text": "Submits a poll on strawpoll.\n\n :param poll: The poll to submit.\n :type poll: :class:`Poll`\n :param request_policy: Overrides :attr:`API.requests_policy` for that \\\n request.\n :type request_policy: Optional[:class:`RequestsPolicy`]\n\n :raises ExistingPoll: This poll instance has already been submitted.\n :raises HTTPException: The submission failed.\n\n :returns: The given poll updated with the data sent back from the submission.\n :rtype: :class:`Poll`\n\n .. note::\n Only polls that have a non empty title and between 2 and 30 options\n can be submitted."}
{"_id": "q_14089", "text": "`NumPy`_ style contour formatter.\n\n Contours are returned as a list of Nx2 arrays containing the x and y\n vertices of the contour line.\n\n For filled contours the direction of vertices matters:\n\n * CCW (ACW): The vertices give the exterior of a contour polygon.\n * CW: The vertices give a hole of a contour polygon. This hole will\n always be inside the exterior of the last contour exterior.\n\n .. note:: This is the fastest format.\n\n .. _NumPy: http://www.numpy.org"}
{"_id": "q_14090", "text": "`MATLAB`_ style contour formatter.\n\n Contours are returned as a single Nx2, `MATLAB`_ style, contour array.\n There are two types of rows in this format:\n\n * Header: The first element of a header row is the level of the contour\n (the lower level for filled contours) and the second element is the\n number of vertices (to follow) belonging to this contour line.\n * Vertex: x,y coordinate pairs of the vertex.\n\n A header row is always followed by the corresponding number of vertices.\n Another header row may follow if there are more contour lines.\n\n For filled contours the direction of vertices matters:\n\n * CCW (ACW): The vertices give the exterior of a contour polygon.\n * CW: The vertices give a hole of a contour polygon. This hole will\n always be inside the exterior of the last contour exterior.\n\n For further explanation of this format see the `Mathworks documentation\n <https://www.mathworks.com/help/matlab/ref/contour-properties.html#prop_ContourMatrix>`_\n noting that the MATLAB format used in the `contours` package is the\n transpose of that used by `MATLAB`_ (since `MATLAB`_ is column-major\n and `NumPy`_ is row-major by default).\n\n .. _NumPy: http://www.numpy.org\n\n .. _MATLAB: https://www.mathworks.com/products/matlab.html"}
{"_id": "q_14091", "text": "`Shapely`_ style contour formatter.\n\n Contours are returned as a list of :class:`shapely.geometry.LineString`,\n :class:`shapely.geometry.LinearRing`, and :class:`shapely.geometry.Point`\n geometry elements.\n\n Filled contours return a list of :class:`shapely.geometry.Polygon`\n elements instead.\n\n .. note:: If possible, `Shapely speedups`_ will be enabled.\n\n .. _Shapely: http://toblerity.org/shapely/manual.html\n\n .. _Shapely speedups: http://toblerity.org/shapely/manual.html#performance\n\n\n See Also\n --------\n `descartes <https://bitbucket.org/sgillies/descartes/>`_ : Use `Shapely`_\n or GeoJSON-like geometric objects as matplotlib paths and patches."}
{"_id": "q_14092", "text": "Get contour lines at the given level.\n\n Parameters\n ----------\n level : numbers.Number\n The data level to calculate the contour lines for.\n\n Returns\n -------\n :\n The result of the :attr:`formatter` called on the contour at the\n given `level`."}
{"_id": "q_14093", "text": "Get contour polygons between the given levels.\n\n Parameters\n ----------\n min : numbers.Number or None\n The minimum data level of the contour polygon. If :obj:`None`,\n ``numpy.finfo(numpy.float64).min`` will be used.\n max : numbers.Number or None\n The maximum data level of the contour polygon. If :obj:`None`,\n ``numpy.finfo(numpy.float64).max`` will be used.\n\n Returns\n -------\n :\n The result of the :attr:`formatter` called on the filled contour\n between `min` and `max`."}
{"_id": "q_14094", "text": "Add a Node object to nodes dictionary, calculating its coordinates using offset\n\n Parameters\n ----------\n node : a Node object\n offset : float \n number between 0 and 1 that sets the distance\n from the start point at which the node will be placed"}
{"_id": "q_14095", "text": "Determine if the given test supports transaction management for\n database rollback test isolation and also whether or not the test has\n opted out of that support.\n\n Transactions make database rollback much quicker when supported, with\n the caveat that any tests that are explicitly testing transactions\n won't work properly and any tests that depend on external access to the\n test database won't be able to view data created/altered during the\n test."}
{"_id": "q_14096", "text": "Clean up any created database and schema."}
{"_id": "q_14097", "text": "Makes a `matplotlib.pyplot.Figure` without tooltips or keybindings\n\n Parameters\n ----------\n figsize : tuple\n Figsize as passed to `matplotlib.pyplot.figure`\n remove_tooltips, remove_keybindings : bool\n Set to True to remove the tooltips bar or any key bindings,\n respectively. Default is False\n\n Returns\n -------\n fig : `matplotlib.pyplot.Figure`"}
{"_id": "q_14098", "text": "updates self.field"}
{"_id": "q_14099", "text": "removes the closest particle in self.pos to ``p``"}
{"_id": "q_14100", "text": "See `diffusion_correlated` for information related to units, etc"}
{"_id": "q_14101", "text": "Makes a guess at particle positions using heuristic centroid methods.\n\n Parameters\n ----------\n st : :class:`peri.states.State`\n The state to check adding particles to.\n rad : Float\n The feature size for featuring.\n invert : {'guess', True, False}, optional\n Whether to invert the image; set to True for there are dark\n particles on a bright background, False for bright particles.\n The default is to guess from the state's current particles.\n minmass : Float or None, optional\n The minimum mass/masscut of a particle. Default is ``None`` =\n calculated internally.\n use_tp : Bool, optional\n Whether or not to use trackpy. Default is ``False``, since trackpy\n cuts out particles at the edge.\n trim_edge : Bool, optional\n Whether to trim particles at the edge pixels of the image. Can be\n useful for initial featuring but is bad for adding missing particles\n as they are frequently at the edge. Default is ``False``.\n\n Returns\n -------\n guess : [N,3] numpy.ndarray\n The featured positions of the particles, sorted in order of decreasing\n feature mass.\n npart : Int\n The number of added particles."}
{"_id": "q_14102", "text": "Workhorse of feature_guess"}
{"_id": "q_14103", "text": "Checks whether to add particles at a given position by seeing if adding\n the particle improves the fit of the state.\n\n Parameters\n ----------\n st : :class:`peri.states.State`\n The state to check adding particles to.\n guess : [N,3] list-like\n The positions of particles to check to add.\n rad : {Float, ``'calc'``}, optional.\n The radius of the newly-added particles. Default is ``'calc'``,\n which uses the states current radii's median.\n do_opt : Bool, optional\n Whether to optimize the particle position before checking if it\n should be kept. Default is True (optimizes position).\n im_change_frac : Float\n How good the change in error needs to be relative to the change in\n the difference image. Default is 0.2; i.e. if the error does not\n decrease by 20% of the change in the difference image, do not add\n the particle.\n min_derr : Float or '3sig'\n The minimal improvement in error to add a particle. Default\n is ``'3sig' = 3*st.sigma``.\n\n Returns\n -------\n accepts : Int\n The number of added particles\n new_poses : [N,3] list\n List of the positions of the added particles. If ``do_opt==True``,\n then these positions will differ from the input 'guess'."}
{"_id": "q_14104", "text": "Checks whether or not adding a particle should be present.\n\n Parameters\n ----------\n absent_err : Float\n The state error without the particle.\n present_err : Float\n The state error with the particle.\n absent_d : numpy.ndarray\n The state residuals without the particle.\n present_d : numpy.ndarray\n The state residuals with the particle.\n im_change_frac : Float, optional\n How good the change in error needs to be relative to the change in\n the residuals. Default is 0.2; i.e. return False if the error does\n not decrease by 0.2 x the change in the residuals.\n min_derr : Float, optional\n The minimal improvement in error. Default is 0.1\n\n Returns\n -------\n Bool\n True if the errors is better with the particle present."}
{"_id": "q_14105", "text": "Attempts to add missing particles to the state.\n\n Operates by:\n (1) featuring the difference image using feature_guess,\n (2) attempting to add the featured positions using check_add_particles.\n\n Parameters\n ----------\n st : :class:`peri.states.State`\n The state to check adding particles to.\n rad : Float or 'calc', optional\n The radius of the newly-added particles and of the feature size for\n featuring. Default is 'calc', which uses the median of the state's\n current radii.\n tries : Int, optional\n How many particles to attempt to add. Only tries to add the first\n ``tries`` particles, in order of mass. Default is 50.\n\n Other Parameters\n ----------------\n invert : Bool, optional\n Whether to invert the image. Default is ``True``, i.e. dark particles\n minmass : Float or None, optional\n The minimum mass/masscut of a particle. Default is ``None``=calculated\n by ``feature_guess``.\n use_tp : Bool, optional\n Whether to use trackpy in feature_guess. Default is False, since\n trackpy cuts out particles at the edge.\n\n do_opt : Bool, optional\n Whether to optimize the particle position before checking if it\n should be kept. Default is True (optimizes position).\n im_change_frac : Float, optional\n How good the change in error needs to be relative to the change\n in the difference image. Default is 0.2; i.e. if the error does\n not decrease by 20% of the change in the difference image, do\n not add the particle.\n\n min_derr : Float or '3sig', optional\n The minimal improvement in error to add a particle. Default\n is ``'3sig' = 3*st.sigma``.\n\n Returns\n -------\n accepts : Int\n The number of added particles\n new_poses : [N,3] list\n List of the positions of the added particles. If ``do_opt==True``,\n then these positions will differ from the input 'guess'."}
{"_id": "q_14106", "text": "Automatically adds and subtracts missing & extra particles in a region\n of poor fit.\n\n Parameters\n ----------\n st: :class:`peri.states.State`\n The state to add and subtract particles to.\n tile : :class:`peri.util.Tile`\n The poorly-fit region to examine.\n rad : Float or 'calc', optional\n The initial radius for added particles; added particles radii are\n not fit until the end of add_subtract. Default is ``'calc'``, which\n uses the median radii of active particles.\n max_iter : Int, optional\n The maximum number of loops for attempted adds at one tile location.\n Default is 3.\n invert : {'guess', True, False}, optional\n Whether to invert the image for feature_guess -- True for dark\n particles on a bright background, False for bright particles. The\n default is to guess from the state's current particles.\n max_allowed_remove : Int, optional\n The maximum number of particles to remove. If the misfeatured tile\n contains more than this many particles, raises an error. If it\n contains more than half as many particles, logs a warning. If more\n than this many particles are added, they are optimized in blocks of\n ``max_allowed_remove``. Default is 20.\n\n Other Parameters\n ----------------\n im_change_frac : Float on [0, 1], optional.\n If adding or removing a particle decreases the error less than\n ``im_change_frac``*the change in the image, the particle is deleted.\n Default is 0.2.\n\n min_derr : {Float, ``'3sig'``}, optional\n The minimum change in the state's error to keep a particle in the\n image. Default is ``'3sig'`` which uses ``3*st.sigma``.\n\n do_opt : Bool, optional\n Set to False to avoid optimizing particle positions after adding\n them. Default is True.\n\n minmass : Float, optional\n The minimum mass for a particle to be identified as a feature, as\n used by trackpy. Defaults to a decent guess.\n\n use_tp : Bool, optional\n Set to True to use trackpy to find missing particles inside the\n image. Not recommended since trackpy deliberately cuts out particles\n at the edge of the image. Default is False.\n\n Outputs\n -------\n n_added : Int\n The change in the number of particles, i.e. ``n_added-n_subtracted``\n ainds: List of ints\n The indices of the added particles.\n\n Notes\n --------\n The added/removed positions returned are whether or not the\n position has been added or removed ever. It's possible/probably that\n a position is added, then removed during a later iteration.\n\n Algorithm is:\n 1. Remove all particles within the tile.\n 2. Feature and add particles to the tile.\n 3. Optimize the added particles positions only.\n 4. Run 2-3 until no particles have been added.\n 5. Optimize added particle radii\n Because all the particles are removed within a tile, it is important\n to set max_allowed_remove to a reasonable value. Otherwise, if the\n tile is the size of the image it can take a long time to remove all\n the particles and re-add them."}
{"_id": "q_14107", "text": "Automatically adds and subtracts missing particles based on local\n regions of poor fit.\n\n Calls identify_misfeatured_regions to identify regions, then\n add_subtract_misfeatured_tile on the tiles in order of size until\n region_depth tiles have been checked without adding any particles.\n\n Parameters\n ----------\n st: :class:`peri.states.State`\n The state to add and subtract particles to.\n region_depth : Int\n The minimum amount of regions to try; the algorithm terminates if\n region_depth regions have been tried without adding particles.\n\n Other Parameters\n ----------------\n filter_size : Int, optional\n The size of the filter for calculating the local standard deviation;\n should approximately be the size of a poorly featured region in each\n dimension. Best if odd. Default is 5.\n sigma_cutoff : Float, optional\n The max allowed deviation of the residuals from what is expected,\n in units of the residuals' standard deviation. Lower means more\n sensitive, higher = less sensitive. Default is 8.0, i.e. one pixel\n out of every ``7*10^11`` is mis-identified randomly. In practice the\n noise is not Gaussian so there are still some regions mis-\n identified as improperly featured.\n rad : Float or 'calc', optional\n The initial radius for added particles; added particles radii are\n not fit until the end of add_subtract. Default is ``'calc'``, which\n uses the median radii of active particles.\n max_iter : Int, optional\n The maximum number of loops for attempted adds at one tile location.\n Default is 3.\n invert : Bool, optional\n Whether to invert the image for feature_guess. Default is ``True``,\n i.e. dark particles on bright background.\n max_allowed_remove : Int, optional\n The maximum number of particles to remove. If the misfeatured tile\n contains more than this many particles, raises an error. If it\n contains more than half as many particles, throws a warning. If more\n than this many particles are added, they are optimized in blocks of\n ``max_allowed_remove``. Default is 20.\n im_change_frac : Float, between 0 and 1.\n If adding or removing a particle decreases the error less than\n ``im_change_frac *`` the change in the image, the particle is deleted.\n Default is 0.2.\n min_derr : Float\n The minimum change in the state's error to keep a particle in the\n image. Default is ``'3sig'`` which uses ``3*st.sigma``.\n do_opt : Bool, optional\n Set to False to avoid optimizing particle positions after adding\n them. Default is True\n minmass : Float, optional\n The minimum mass for a particle to be identified as a feature, as\n used by trackpy. Defaults to a decent guess.\n use_tp : Bool, optional\n Set to True to use trackpy to find missing particles inside the\n image. Not recommended since trackpy deliberately cuts out\n particles at the edge of the image. Default is False.\n max_allowed_remove : Int, optional\n The maximum number of particles to remove. If the misfeatured tile\n contains more than this many particles, raises an error. If it\n contains more than half as many particles, throws a warning. If more\n than this many particles are added, they are optimized in blocks of\n ``max_allowed_remove``. Default is 20.\n\n Returns\n -------\n n_added : Int\n The change in the number of particles; i.e the number added - number\n removed.\n new_poses : List\n [N,3] element list of the added particle positions.\n\n Notes\n -----\n Algorithm Description\n\n 1. Identify mis-featured regions by how much the local residuals\n deviate from the global residuals, as measured by the standard\n deviation of both.\n 2. Loop over each of those regions, and:\n\n a. Remove every particle in the current region.\n b. Try to add particles in the current region until no more\n can be added while adequately decreasing the error.\n c. Terminate if at least region_depth regions have been\n checked without successfully adding a particle.\n\n Because this algorithm is more judicious about choosing regions to\n check, and more aggressive about removing particles in those regions,\n it runs faster and does a better job than the (global) add_subtract.\n However, this function usually does not work better as an initial add-\n subtract on an image, since (1) it doesn't check for removing small/big\n particles per se, and (2) when the poorly-featured regions of the image\n are large or when the fit is bad, it will remove essentially all of the\n particles, taking a long time. As a result, it's usually best to do a\n normal add_subtract first and using this function for tough missing or\n double-featured particles."}
{"_id": "q_14108", "text": "Guesses whether particles are bright on a dark bkg or vice-versa\n\n Works by checking whether the intensity at the particle centers is\n brighter or darker than the average intensity of the image, by\n comparing the median intensities of each.\n\n Parameters\n ----------\n st : :class:`peri.states.ImageState`\n\n Returns\n -------\n invert : bool\n Whether to invert the image for featuring."}
{"_id": "q_14109", "text": "Save the acquired 'wisdom' generated by FFTW to file so that future\n initializations of FFTW will be faster."}
{"_id": "q_14110", "text": "Take a platonic image and position and create a state which we can\n use to sample the error for peri. Also return the blurred platonic\n image so we can vary the noise on it later"}
{"_id": "q_14111", "text": "Create a perfect platonic sphere of a given radius R by supersampling by a\n factor scale on a grid of size N. Scale must be odd.\n\n We are able to perfectly position these particles up to 1/scale. Therefore,\n let's only allow those types of shifts for now, but return the actual position\n used for the placement."}
{"_id": "q_14112", "text": "Translate an image in fourier-space with plane waves"}
{"_id": "q_14113", "text": "Load default users and groups."}
{"_id": "q_14114", "text": "weighting function for Barnes"}
{"_id": "q_14115", "text": "The first-order Barnes approximation"}
{"_id": "q_14116", "text": "Correct, normalized version of Barnes"}
{"_id": "q_14117", "text": "Convert cheb coordinates to window coordinates"}
{"_id": "q_14118", "text": "Determine admin type."}
{"_id": "q_14119", "text": "Validate privacy policy value."}
{"_id": "q_14120", "text": "Validate state value."}
{"_id": "q_14121", "text": "Delete a group and all associated memberships."}
{"_id": "q_14122", "text": "Update group.\n\n :param name: Name of group.\n :param description: Description of group.\n :param privacy_policy: PrivacyPolicy\n :param subscription_policy: SubscriptionPolicy\n :returns: Updated group"}
{"_id": "q_14123", "text": "Query group by a list of group names.\n\n :param list names: List of the group names.\n :returns: Query object."}
{"_id": "q_14124", "text": "Query group by user.\n\n :param user: User object.\n :param bool with_pending: Whether to include pending users.\n :param bool eager: Eagerly fetch group members.\n :returns: Query object."}
{"_id": "q_14125", "text": "Modify query so as to include only specific group names.\n\n :param query: Query object.\n :param str q: Search string.\n :returns: Query object."}
{"_id": "q_14126", "text": "Invite users to a group by emails.\n\n :param list emails: Emails of users that shall be invited.\n :returns list: Newly created Memberships or Nones."}
{"_id": "q_14127", "text": "Verify if given user is a group member.\n\n :param user: User to be checked.\n :param bool with_pending: Whether to include pending users or not.\n :returns: True or False."}
{"_id": "q_14128", "text": "Determine if given user can see other group members.\n\n :param user: User to be checked.\n :returns: True or False."}
{"_id": "q_14129", "text": "Determine if user can invite people to a group.\n\n Be aware that this check is independent from the people (users) which\n are going to be invited. The checked user is the one who invites\n someone, NOT who is going to be invited.\n\n :param user: User to be checked.\n :returns: True or False."}
{"_id": "q_14130", "text": "Get membership for given user and group.\n\n :param group: Group object.\n :param user: User object.\n :returns: Membership or None."}
{"_id": "q_14131", "text": "Filter a query result."}
{"_id": "q_14132", "text": "Get a user's memberships."}
{"_id": "q_14133", "text": "Get all invitations for given user."}
{"_id": "q_14134", "text": "Get all pending group requests."}
{"_id": "q_14135", "text": "Get a group's members."}
{"_id": "q_14136", "text": "Modify query so as to order the results.\n\n :param query: Query object.\n :param str s: Ordering: ``asc`` or ``desc``.\n :returns: Query object."}
{"_id": "q_14137", "text": "Delete membership."}
{"_id": "q_14138", "text": "Get specific GroupAdmin object."}
{"_id": "q_14139", "text": "Get all groups for a specific admin."}
{"_id": "q_14140", "text": "Get count of admins per group."}
{"_id": "q_14141", "text": "Get all social network profiles"}
{"_id": "q_14142", "text": "Returns the detailed information on individual interactions with the social\n media update such as favorites, retweets and likes."}
{"_id": "q_14143", "text": "Edit an existing, individual status update."}
{"_id": "q_14144", "text": "Immediately shares a single pending update and recalculates times for\n updates remaining in the queue."}
{"_id": "q_14145", "text": "Permanently delete an existing status update."}
{"_id": "q_14146", "text": "Move an existing status update to the top of the queue and recalculate\n times for all updates in the queue. Returns the update with its new\n posting time."}
{"_id": "q_14147", "text": "Returns an array of updates that are currently in the buffer for an\n individual social media profile."}
{"_id": "q_14148", "text": "Edit the order at which statuses for the specified social media profile will\n be sent out of the buffer."}
{"_id": "q_14149", "text": "Create one or more new status updates."}
{"_id": "q_14150", "text": "Temporarily do not use any formatter so that text printed is raw"}
{"_id": "q_14151", "text": "Set the verbosity level of a certain log handler or of all handlers.\n\n Parameters\n ----------\n verbosity : 'v' to 'vvvvv'\n the level of verbosity, more v's is more verbose\n\n handlers : string, or list of strings\n handler names can be found in ``peri.logger.types.keys()``\n Current set is::\n\n ['console-bw', 'console-color', 'rotating-log']"}
{"_id": "q_14152", "text": "Generates a centered boolean mask of a 3D sphere"}
{"_id": "q_14153", "text": "Local max featuring to identify bright spherical particles on a\n dark background.\n\n Parameters\n ----------\n im : numpy.ndarray\n The image to identify particles in.\n radius : Float > 0, optional\n Featuring radius of the particles. Default is 2.5\n noise_size : Float, optional\n Size of Gaussian kernel for smoothing out noise. Default is 1.\n bkg_size : Float or None, optional\n Size of the Gaussian kernel for removing long-wavelength\n background. Default is None, which gives `2 * radius`\n minmass : Float, optional\n Return only particles with a ``mass > minmass``. Default is 1.\n trim_edge : Bool, optional\n Set to True to omit particles identified exactly at the edge\n of the image. False-positive features frequently occur here\n because of the reflected bandpass featuring. Default is\n False, i.e. find particles at the edge of the image.\n\n Returns\n -------\n pos, mass : numpy.ndarray\n Particle positions and masses"}
{"_id": "q_14154", "text": "Cumulative distribution function for the triangle distribution"}
{"_id": "q_14155", "text": "Get the update tile surrounding particle `n`"}
{"_id": "q_14156", "text": "Translate index info to parameter name"}
{"_id": "q_14157", "text": "Get the amount of support size required for a particular update."}
{"_id": "q_14158", "text": "Update the particles field given new parameter values"}
{"_id": "q_14159", "text": "Get position and radius of one or more particles"}
{"_id": "q_14160", "text": "Get position of one or more particles"}
{"_id": "q_14161", "text": "Get radius of one or more particles"}
{"_id": "q_14162", "text": "Add a particle or list of particles given by a list of positions and\n radii, both need to be array-like.\n\n Parameters\n ----------\n pos : array-like [N, 3]\n Positions of all new particles\n\n rad : array-like [N]\n Corresponding radii of new particles\n\n Returns\n -------\n inds : N-element numpy.ndarray.\n Indices of the added particles."}
{"_id": "q_14163", "text": "Get the tile surrounding particle `n`"}
{"_id": "q_14164", "text": "A fast j2 defined in terms of other special functions"}
{"_id": "q_14165", "text": "Returns the wavefront aberration for an aberrated, defocused lens.\n\n Calculates the portions of the wavefront distortion due to z, theta\n only, for a lens with defocus and spherical aberration induced by\n coverslip mismatch. (The rho portion can be analytically integrated\n to Bessels.)\n\n Parameters\n ----------\n cos_theta : numpy.ndarray.\n The N values of cos(theta) at which to compute f_theta.\n zint : Float\n The position of the lens relative to the interface.\n z : numpy.ndarray\n The M z-values to compute f_theta at. `z.size` is unrelated\n to `cos_theta.size`\n n2n1: Float, optional\n The ratio of the index of the immersed medium to the optics.\n Default is 0.95\n sph6_ab : Float or None, optional\n Set sph6_ab to a nonzero value to add residual 6th-order\n spherical aberration that is proportional to sph6_ab. Default\n is None (i.e. doesn't calculate).\n\n Returns\n -------\n wvfront : numpy.ndarray\n The aberrated wavefront, as a function of theta and z.\n Shape is [z.size, cos_theta.size]"}
{"_id": "q_14166", "text": "Returns a prefactor in the electric field integral.\n\n This is an internal function called by get_K. The returned prefactor\n in the integrand is independent of which integral is being called;\n it is a combination of the exp(1j*phase) and apodization.\n\n Parameters\n ----------\n z : numpy.ndarray\n The values of z (distance along optical axis) at which to\n calculate the prefactor. Size is unrelated to the size of\n `cos_theta`\n cos_theta : numpy.ndarray\n The values of cos(theta) (i.e. position on the incoming\n focal spherical wavefront) at which to calculate the\n prefactor. Size is unrelated to the size of `z`\n zint : Float, optional\n The position of the optical interface, in units of 1/k.\n Default is 100.\n n2n1 : Float, optional\n The ratio of the index mismatch between the optics (n1) and\n the sample (n2). Default is 0.95\n get_hdet : Bool, optional\n Set to True to calculate the detection prefactor vs the\n illumination prefactor (i.e. False to include apodization).\n Default is False\n\n Returns\n -------\n numpy.ndarray\n The prefactor, of size [`z.size`, `cos_theta.size`], sampled\n at the values [`z`, `cos_theta`]"}
{"_id": "q_14167", "text": "Calculates one of three electric field integrals.\n\n Internal function for calculating point spread functions. Returns\n one of three electric field integrals that describe the electric\n field near the focus of a lens; these integrals appear in Hell's psf\n calculation.\n\n Parameters\n ----------\n rho : numpy.ndarray\n Rho in cylindrical coordinates, in units of 1/k.\n z : numpy.ndarray\n Z in cylindrical coordinates, in units of 1/k. `rho` and\n `z` must be the same shape\n\n alpha : Float, optional\n The acceptance angle of the lens, on (0,pi/2). Default is 1.\n zint : Float, optional\n The distance of the len's unaberrated focal point from the\n optical interface, in units of 1/k. Default is 100.\n n2n1 : Float, optional\n The ratio n2/n1 of the index mismatch between the sample\n (index n2) and the optical train (index n1). Must be on\n [0,inf) but should be near 1. Default is 0.95\n get_hdet : Bool, optional\n Set to True to get the detection portion of the psf; False\n to get the illumination portion of the psf. Default is True\n K : {1, 2, 3}, optional\n Which of the 3 integrals to evaluate. Default is 1\n Kprefactor : numpy.ndarray or None\n This array is calculated internally and optionally returned;\n pass it back to avoid recalculation and increase speed. Default\n is None, i.e. calculate it internally.\n return_Kprefactor : Bool, optional\n Set to True to also return the Kprefactor (parameter above)\n to speed up the calculation for the next values of K. Default\n is False\n npts : Int, optional\n The number of points to use for Gauss-Legendre quadrature of\n the integral. 
Default is 20, which is a good number for x,y,z\n less than 100 or so.\n\n Returns\n -------\n kint : numpy.ndarray\n The integral K_i; rho.shape numpy.array\n [, Kprefactor] : numpy.ndarray\n The prefactor that is independent of which integral is being\n calculated but does depend on the parameters; can be passed\n back to the function for speed.\n\n Notes\n -----\n npts=20 gives double precision (no difference between 20, 30, and\n doing all the integrals with scipy.quad). The integrals are only\n over the acceptance angle of the lens, so for moderate x,y,z they\n don't vary too rapidly. For x,y,z, zint large compared to 100, a\n higher npts might be necessary."}
{"_id": "q_14168", "text": "Calculates the symmetric and asymmetric portions of a confocal PSF.\n\n Parameters\n ----------\n rho : numpy.ndarray\n Rho in cylindrical coordinates, in units of 1/k.\n z : numpy.ndarray\n Z in cylindrical coordinates, in units of 1/k. Must be the\n same shape as `rho`\n get_hdet : Bool, optional\n Set to True to get the detection portion of the psf; False\n to get the illumination portion of the psf. Default is True\n include_K3_det : Bool, optional.\n Flag to not calculate the `K3` component for the detection\n PSF, corresponding to (I think) a low-aperture focusing\n lens and no z-polarization of the focused light. Default\n is True, i.e. calculates the K3 component as if the focusing\n lens is high-aperture\n\n Other Parameters\n ----------------\n alpha : Float, optional\n The acceptance angle of the lens, on (0,pi/2). Default is 1.\n zint : Float, optional\n The distance of the lens's unaberrated focal point from the\n optical interface, in units of 1/k. Default is 100.\n n2n1 : Float, optional\n The ratio n2/n1 of the index mismatch between the sample\n (index n2) and the optical train (index n1). Must be on\n [0,inf) but should be near 1. Default is 0.95\n\n Returns\n -------\n hsym : numpy.ndarray\n `rho`.shape numpy.array of the symmetric portion of the PSF\n hasym : numpy.ndarray\n `rho`.shape numpy.array of the asymmetric portion of the PSF"}
{"_id": "q_14169", "text": "Calculates a set of Gauss quadrature points & weights for polydisperse\n light.\n\n Returns a list of points and weights of the final wavevector's\n distribution, in units of the initial wavevector.\n\n Parameters\n ----------\n kfki : Float\n The mean of the polydisperse outgoing wavevectors.\n sigkf : Float\n The standard deviation of the polydisperse outgoing wavevectors.\n dist_type : {`gaussian`, `gamma`}, optional\n The distribution, gaussian or gamma, of the wavevectors.\n Default is `gaussian`\n nkpts : Int, optional\n The number of quadrature points to use. Default is 3\n\n Returns\n -------\n kfkipts : numpy.ndarray\n The Gauss quadrature points at which to calculate kfki.\n wts : numpy.ndarray\n The associated Gauss quadrature weights."}
{"_id": "q_14170", "text": "Calculates the illumination PSF for a line-scanning confocal with the\n confocal line oriented along the x direction.\n\n Parameters\n ----------\n y : numpy.ndarray\n The y points (in-plane, perpendicular to the line direction)\n at which to evaluate the illumination PSF, in units of 1/k.\n Arbitrary shape.\n z : numpy.ndarray\n The z points (optical axis) at which to evaluate the illum-\n ination PSF, in units of 1/k. Must be the same shape as `y`\n polar_angle : Float, optional\n The angle of the illuminating light's polarization with\n respect to the line's orientation along x. Default is 0.\n pinhole_width : Float, optional\n The width of the geometric image of the line projected onto\n the sample, in units of 1/k. Default is 1. The perfect line\n image is assumed to be a Gaussian. If `nlpts` is set to 1,\n the line will always be of zero width.\n nlpts : Int, optional\n The number of points to use for Hermite-gauss quadrature over\n the line's width. Default is 1, corresponding to a zero-width\n line.\n use_laggauss : Bool, optional\n Set to True to use a more-accurate sinh'd Laguerre-Gauss\n quadrature for integration over the line's length (more accurate\n in the same amount of time). Default is False for backwards\n compatibility. FIXME what did we do here?\n\n Other Parameters\n ----------------\n alpha : Float, optional\n The acceptance angle of the lens, on (0,pi/2). Default is 1.\n zint : Float, optional\n The distance of the len's unaberrated focal point from the\n optical interface, in units of 1/k. Default is 100.\n n2n1 : Float, optional\n The ratio n2/n1 of the index mismatch between the sample\n (index n2) and the optical train (index n1). Must be on\n [0,inf) but should be near 1. Default is 0.95\n\n Returns\n -------\n hilm : numpy.ndarray\n The line illumination, of the same shape as y and z."}
{"_id": "q_14171", "text": "Calculates the point spread function of a line-scanning confocal with\n polydisperse dye emission.\n\n Make x,y,z __1D__ numpy.arrays, with x the direction along the\n scan line. (to make the calculation faster since I dont' need the line\n ilm for each x).\n\n Parameters\n ----------\n x : numpy.ndarray\n _One_dimensional_ array of the x grid points (along the line\n illumination) at which to evaluate the psf. In units of\n 1/k_incoming.\n y : numpy.ndarray\n _One_dimensional_ array of the y grid points (in plane,\n perpendicular to the line illumination) at which to evaluate\n the psf. In units of 1/k_incoming.\n z : numpy.ndarray\n _One_dimensional_ array of the z grid points (along the\n optical axis) at which to evaluate the psf. In units of\n 1/k_incoming.\n normalize : Bool, optional\n Set to True to include the effects of PSF normalization on\n the image intensity. Default is False.\n kfki : Float, optional\n The mean of the ratio of the final light's wavevector to the\n incoming. Default is 0.889\n sigkf : Float, optional\n The standard deviation of the ratio of the final light's\n wavevector to the incoming. Default is 0.1\n zint : Float, optional\n The position of the optical interface, in units of 1/k_incoming\n Default is 100.\n dist_type : {`gaussian`, `gamma`}, optional\n The distribution of the outgoing light. If 'gaussian' the\n resulting k-values are taken in absolute value. Default\n is `gaussian`\n wrap : Bool, optional\n If True, wraps the psf calculation for speed, assuming that\n the input x, y are regularly-spaced points. If x,y are not\n regularly spaced then `wrap` must be set to False. Default is True.\n\n Other Parameters\n ----------------\n polar_angle : Float, optional\n The polarization angle of the light (radians) with respect to\n the line direction (x). Default is 0.\n alpha : Float\n The opening angle of the lens. 
Default is 1.\n n2n1 : Float\n The ratio of the index in the 2nd medium to that in the first.\n Default is 0.95\n\n Returns\n -------\n numpy.ndarray\n A 3D numpy.array of the point-spread function. Indexing is\n psf[x,y,z]; shape is [x.size, y.size, z.size]\n\n Notes\n -----\n Neither distribution type is perfect. If sigkf/k0 is big (>0.5ish)\n then part of the Gaussian is negative. To avoid issues an abs() is\n taken, but then the actual mean and variance are not what is\n supplied. Conversely, if sigkf/k0 is small (<0.0815), then the\n requisite associated Laguerre quadrature becomes unstable. To\n prevent this sigkf/k0 is effectively clipped to be > 0.0815."}
{"_id": "q_14172", "text": "Wraps a point-spread function in x and y.\n\n Speeds up psf calculations by a factor of 4 for free / some broadcasting\n by exploiting the x->-x, y->-y symmetry of a psf function. Pass x and y\n as the positive (say) values of the coordinates at which to evaluate func,\n and it will return the function sampled at [x[::-1]] + x. Note it is not\n wrapped in z.\n\n Parameters\n ----------\n xpts : numpy.ndarray\n 1D N-element numpy.array of the x-points to evaluate func at.\n ypts : numpy.ndarray\n y-points to evaluate func at.\n zpts : numpy.ndarray\n z-points to evaluate func at.\n func : function\n The function to evaluate and wrap around. Syntax must be\n func(x,y,z, **kwargs)\n **kwargs : Any parameters passed to the function.\n\n Returns\n -------\n to_return : numpy.ndarray\n The wrapped and calculated psf, of shape\n [2*x.size - x0, 2*y.size - y0, z.size], where x0=1 if x[0]=0, etc.\n\n Notes\n -----\n The coordinates should be something like numpy.arange(start, stop, diff),\n with start near 0. If x[0]==0, all of x is calculated but only x[1:]\n is wrapped (i.e. it works whether or not x[0]=0).\n\n This doesn't work directly for a linescan psf because the illumination\n portion is not like a grid. However, the illumination and detection\n are already combined with wrap_and_calc in calculate_linescan_psf etc."}
{"_id": "q_14173", "text": "Convert a scalar ``a`` to a list and all iterables to list as well.\n\n Examples\n --------\n >>> listify(0)\n [0]\n\n >>> listify([1,2,3])\n [1, 2, 3]\n\n >>> listify('a')\n ['a']\n\n >>> listify(np.array([1,2,3]))\n [1, 2, 3]\n\n >>> listify('string')\n ['string']"}
{"_id": "q_14174", "text": "If a single element list, extract the element as an object, otherwise\n leave as it is.\n\n Examples\n --------\n >>> delistify('string')\n 'string'\n\n >>> delistify(['string'])\n 'string'\n\n >>> delistify(['string', 'other'])\n ['string', 'other']\n\n >>> delistify(np.array([1.0]))\n 1.0\n\n >>> delistify([1, 2, 3])\n [1, 2, 3]"}
{"_id": "q_14175", "text": "Convert an integer or iterable list to numpy array of length dim. This\n function is used to allow other methods to take both scalars and\n non-numpy arrays with flexibility.\n\n Parameters\n ----------\n a : number, iterable, array-like\n The object to convert to numpy array\n\n dim : integer\n The length of the resulting array\n\n dtype : string or np.dtype\n Type which the resulting array should be, e.g. 'float', np.int8\n\n Returns\n -------\n arr : numpy array\n Resulting numpy array of length ``dim`` and type ``dtype``\n\n Examples\n --------\n >>> aN(1, dim=2, dtype='float')\n array([ 1., 1.])\n\n >>> aN(1, dtype='int')\n array([1, 1, 1])\n\n >>> aN(np.array([1,2,3]), dtype='float')\n array([ 1., 2., 3.])"}
{"_id": "q_14176", "text": "Apply the documentation from ``superclass`` to ``subclass`` by filling\n in all overridden member function docstrings with those from the\n parent class"}
{"_id": "q_14177", "text": "Array slicer object for this tile\n\n >>> Tile((2,3)).slicer\n (slice(0, 2, None), slice(0, 3, None))\n\n >>> np.arange(10)[Tile((4,)).slicer]\n array([0, 1, 2, 3])"}
{"_id": "q_14178", "text": "Iterate the vector of all corners of the hyperrectangles\n\n >>> Tile(3, dim=2).corners\n array([[0, 0],\n [0, 3],\n [3, 0],\n [3, 3]])"}
{"_id": "q_14179", "text": "Returns the coordinate vectors associated with the tile.\n\n Parameters\n ----------\n norm : boolean\n can rescale the coordinates for you. False is no rescaling, True is\n rescaling so that all coordinates are from 0 -> 1. If a scalar,\n the same norm is applied uniformly while if an iterable, each\n scale is applied to each dimension.\n\n form : string\n In what form to return the vector array. Can be one of:\n 'broadcast' -- return 1D arrays that are broadcasted to be 3D\n\n 'flat' -- return array without broadcasting so each component\n is 1D and the appropriate length as the tile\n\n 'meshed' -- arrays are explicitly broadcasted and so all have\n a 3D shape, each the size of the tile.\n\n 'vector' -- array is meshed and combined into one array with\n the vector components along last dimension [Nz, Ny, Nx, 3]\n\n Examples\n --------\n >>> Tile(3, dim=2).coords(form='meshed')[0]\n array([[ 0., 0., 0.],\n [ 1., 1., 1.],\n [ 2., 2., 2.]])\n\n >>> Tile(3, dim=2).coords(form='meshed')[1]\n array([[ 0., 1., 2.],\n [ 0., 1., 2.],\n [ 0., 1., 2.]])\n\n >>> Tile([4,5]).coords(form='vector').shape\n (4, 5, 2)\n\n >>> [i.shape for i in Tile((4,5), dim=2).coords(form='broadcast')]\n [(4, 1), (1, 5)]"}
{"_id": "q_14180", "text": "Test whether coordinates are contained within this tile.\n\n Parameters\n ----------\n items : ndarray [3] or [N, 3]\n N coordinates to check are within the bounds of the tile\n\n pad : integer or ndarray [3]\n anisotropic padding to apply in the contain test\n\n Examples\n --------\n >>> Tile(5, dim=2).contains([[-1, 0], [2, 3], [2, 6]])\n array([False, True, False], dtype=bool)"}
{"_id": "q_14181", "text": "Intersection of tiles, returned as a tile\n\n >>> Tile.intersection(Tile([0, 1], [5, 4]), Tile([1, 0], [4, 5]))\n Tile [1, 1] -> [4, 4] ([3, 3])"}
{"_id": "q_14182", "text": "Translate a tile by an amount dr\n\n >>> Tile(5).translate(1)\n Tile [1, 1, 1] -> [6, 6, 6] ([5, 5, 5])"}
{"_id": "q_14183", "text": "Returns a filtered image after applying the Fourier-space filters"}
{"_id": "q_14184", "text": "Sets Fourier-space filters for the image. The image is filtered by\n subtracting values from the image at slices.\n\n Parameters\n ----------\n slices : List of indices or slice objects.\n The q-values in Fourier space to filter.\n values : np.ndarray\n The complete array of Fourier space peaks to subtract off. values\n should be the same size as the FFT of the image; only the portions\n of values at slices will be removed.\n\n Examples\n --------\n To remove a two Fourier peaks in the data at q=(10, 10, 10) &\n (245, 245, 245), where im is the residuals of a model:\n\n * slices = [(10,10,10), (245, 245, 245)]\n * values = np.fft.fftn(im)\n * im.set_filter(slices, values)"}
{"_id": "q_14185", "text": "When given a raw image and the scaled version of the same image, it\n extracts the ``exposure`` parameters associated with those images.\n This is useful when\n\n Parameters\n ----------\n raw : array_like\n The image loaded fresh from a file\n\n scaled : array_like\n Image scaled using :func:`peri.initializers.normalize`\n\n Returns\n -------\n exposure : tuple of numbers\n Returns the exposure parameters (emin, emax) which get mapped to\n (0, 1) in the scaled image. Can be passed to\n :func:`~peri.util.RawImage.__init__`"}
{"_id": "q_14186", "text": "Internal draw method, simply prints to screen"}
{"_id": "q_14187", "text": "Update the value of the progress and update progress bar.\n\n Parameters\n -----------\n value : integer\n The current iteration of the progress"}
{"_id": "q_14188", "text": "Make sure that the required comps are included in the list of\n components supplied by the user. Also check that the parameters are\n consistent across the many components."}
{"_id": "q_14189", "text": "Check that the list of components `comp` is compatible with both the\n varmap and modelstr for this Model"}
{"_id": "q_14190", "text": "Put a figure label in an axis"}
{"_id": "q_14191", "text": "Compare the data, model, and residuals of a state.\n\n Makes an image of any 2D slice of a state that compares the data,\n model, and residuals. The upper left portion of the image is the raw\n data, the central portion the model, and the lower right portion the\n residuals. Either plots the image using plt.imshow() or returns a\n np.ndarray of the image pixels for later use.\n\n Parameters\n ----------\n st : peri.ImageState object\n The state to plot.\n tile : peri.util.Tile object\n The slice of the image to plot. Can be any xy, xz, or yz\n projection, but it must return a valid 2D slice (the slice is\n squeezed internally).\n\n data_vmin : {Float, `calc`}, optional\n vmin for the imshow for the data and generative model (shared).\n Default is 'calc' = 0.5(data.min() + model.min())\n data_vmax : {Float, `calc`}, optional\n vmax for the imshow for the data and generative model (shared).\n Default is 'calc' = 0.5(data.max() + model.max())\n res_vmin : Float, optional\n vmin for the imshow for the residuals. Default is -0.1\n res_vmax : Float, optional\n vmax for the imshow for the residuals. Default is +0.1\n edgepts : {Nested list-like, Float, 'calc'}, optional.\n The vertices of the triangles which determine the splitting of\n the image. The vertices are at (image corner, (edge, y), and\n (x,edge), where edge is the appropriate edge of the image.\n edgepts[0] : (x,y) points for the upper edge\n edgepts[1] : (x,y) points for the lower edge\n where `x` is the coordinate along the image's 0th axis and `y`\n along the image's 1st axis. Default is 'calc,' which calculates\n edge points by splitting the image into 3 regions of equal\n area. 
If edgepts is a float scalar, calculates the edge points\n based on a constant fraction of distance from the edge.\n do_imshow : Bool\n If True, imshow's and returns the returned handle.\n If False, returns the array as a [M,N,4] array.\n data_cmap : matplotlib colormap instance\n The colormap to use for the data and model.\n res_cmap : matplotlib colormap instance\n The colormap to use for the residuals.\n\n Returns\n -------\n image : {matplotlib.pyplot.AxesImage, numpy.ndarray}\n If `do_imshow` == True, the returned handle from imshow.\n If `do_imshow` == False, an [M,N,4] np.ndarray of the image\n pixels."}
{"_id": "q_14192", "text": "Returns 3 masks that trisect an image into 3 triangular portions.\n\n Parameters\n ----------\n imshape : 2-element list-like of ints\n The shape of the image. Elements after the first 2 are ignored.\n\n edgepts : Nested list-like, float, or `calc`, optional.\n The vertices of the triangles which determine the splitting of\n the image. The vertices are at (image corner, (edge, y), and\n (x,edge), where edge is the appropriate edge of the image.\n edgepts[0] : (x,y) points for the upper edge\n edgepts[1] : (x,y) points for the lower edge\n where `x` is the coordinate along the image's 0th axis and `y`\n along the images 1st axis. Default is 'calc,' which calculates\n edge points by splitting the image into 3 regions of equal\n area. If edgepts is a float scalar, calculates the edge points\n based on a constant fraction of distance from the edge.\n\n Returns\n -------\n upper_mask : numpy.ndarray\n Boolean array; True in the image's upper region.\n center_mask : numpy.ndarray\n Boolean array; True in the image's center region.\n lower_mask : numpy.ndarray\n Boolean array; True in the image's lower region."}
{"_id": "q_14193", "text": "Each element of std0 should correspond with the matching element of std1"}
{"_id": "q_14194", "text": "Returns a list of the global parameter names.\n\n Parameters\n ----------\n s : :class:`peri.states.ImageState`\n The state to name the globals of.\n remove_params : Set or None\n A set of unique additional parameters to remove from the globals\n list.\n\n Returns\n -------\n all_params : list\n The list of the global parameter names, with each of\n remove_params removed."}
{"_id": "q_14195", "text": "Returns a non-constant damping vector, allowing certain parameters to be\n more strongly damped than others.\n\n Parameters\n ----------\n params : List\n The list of parameter names, in order.\n damping : Float\n The default value of the damping.\n increase_list: List\n A nested 2-element list of the params to increase and their\n scale factors. All parameters containing the string\n increase_list[i][0] are increased by a factor increase_list[i][1].\n Returns\n -------\n damp_vec : np.ndarray\n The damping vector to use."}
{"_id": "q_14196", "text": "Finds the particles in a tile, as numpy.ndarray of ints.\n\n Parameters\n ----------\n positions : `numpy.ndarray`\n [N,3] array of the particle positions to check in the tile\n tile : :class:`peri.util.Tile` instance\n Tile of the region inside which to check for particles.\n\n Returns\n -------\n numpy.ndarray, int\n The indices of the particles in the tile."}
{"_id": "q_14197", "text": "Ensures that all particles are included in exactly 1 group"}
{"_id": "q_14198", "text": "Finds the biggest region size for LM particle optimization with a\n given memory constraint.\n\n Input Parameters\n ----------------\n s : :class:`peri.states.ImageState`\n The state with the particles\n region_size : Int or 3-element list-like of ints, optional.\n The initial guess for the region size. Default is 40\n max_mem : Numeric, optional\n The maximum memory for the optimizer to take. Default is 1e9\n\n Other Parameters\n ----------------\n bounds: 2-element list-like of 3-element lists.\n The sub-region of the image over which to look for particles.\n bounds[0]: The lower-left corner of the image region.\n bounds[1]: The upper-right corner of the image region.\n Default (None -> ([0,0,0], s.oshape.shape)) is a box of the entire\n image size, i.e. the default places every particle in the image\n somewhere in the groups.\n Returns\n -------\n region_size : numpy.ndarray of ints of the region size."}
{"_id": "q_14199", "text": "Runs Levenberg-Marquardt optimization on a state.\n\n Convenience wrapper for LMGlobals. Same keyword args, but the defaults\n have been set to useful values for optimizing globals.\n See LMGlobals and LMEngine for documentation.\n\n See Also\n --------\n do_levmarq_particles : Levenberg-Marquardt optimization of a\n specified set of particles.\n\n do_levmarq_all_particle_groups : Levenberg-Marquardt optimization\n of all the particles in the state.\n\n LMGlobals : Optimizer object; the workhorse of do_levmarq.\n\n LMEngine : Engine superclass for all the optimizers."}
{"_id": "q_14200", "text": "Crawls slowly to the minimum-cost state.\n\n Blocks the global parameters into small enough sections such that each\n can be optimized separately while including all the pixels (i.e. no\n decimation). Optimizes the globals, then the psf separately if desired,\n then particles, then a line minimization along the step direction to\n speed up convergence.\n\n Parameters\n ----------\n s : :class:`peri.states.ImageState`\n The state to optimize\n desc : string, optional\n Description to append to the states.save() call every loop.\n Set to `None` to avoid saving. Default is `'finish'`.\n n_loop : Int, optional\n The number of times to loop over in the optimizer. Default is 4.\n max_mem : Numeric, optional\n The maximum amount of memory allowed for the optimizers' J's,\n for both particles & globals. Default is 1e9.\n separate_psf : Bool, optional\n If True, does the psf optimization separately from the rest of\n the globals, since the psf has a more tortuous fit landscape.\n Default is True.\n fractol : Float, optional\n Fractional change in error at which to terminate. Default 1e-4\n errtol : Float, optional\n Absolute change in error at which to terminate. Default 1e-2\n dowarn : Bool, optional\n Whether to log a warning if termination results from finishing\n loops rather than from convergence. Default is True.\n\n Returns\n -------\n dictionary\n Information about the optimization. Has two keys: ``'converged'``,\n a Bool of whether optimization stopped due to convergence\n (True) or due to max number of iterations (False), and\n ``'loop_values'``, a [n_loop+1, N] ``numpy.ndarray`` of the\n state's values, at the start of optimization and at the end of\n each loop, before the line minimization."}
{"_id": "q_14201", "text": "LM run, evaluating 1 step at a time.\n\n Broyden or eigendirection updates replace full-J updates until\n a full-J update occurs. Does not run with the calculated J (no\n internal run)."}
{"_id": "q_14202", "text": "Workhorse for do_run_2"}
{"_id": "q_14203", "text": "Takes more steps without calculating J again.\n\n Given a fixed damping, J, JTJ, iterates calculating steps, with\n optional Broyden or eigendirection updates. Iterates either until\n a bad step is taken or for self.run_length times.\n Called internally by do_run_2() but is also useful on its own.\n\n Parameters\n ----------\n initial_count : Int, optional\n The initial count of the run. Default is 0. Increasing from\n 0 effectively temporarily decreases run_length.\n subblock : None or np.ndarray of bools, optional\n If not None, a boolean mask which determines which sub-\n block of parameters to run over. Default is None, i.e.\n all the parameters.\n update_derr : Bool, optional\n Set to False to not update the variable that determines\n delta_err, preventing premature termination through errtol.\n\n Notes\n -----\n It might be good to do something similar to update_derr with the\n parameter values, but this is trickier because of Broyden updates\n and _fresh_J."}
{"_id": "q_14204", "text": "Returns a dict of termination statistics\n\n Parameters\n ----------\n get_cos : Bool, optional\n Whether or not to calculate the cosine of the residuals\n with the tangent plane of the model using the current J.\n The calculation may take some time. Default is True\n\n Returns\n -------\n dict\n Has keys\n delta_vals : The last change in parameter values.\n delta_err : The last change in the error.\n exp_err : The expected (last) change in the error.\n frac_err : The fractional change in the error.\n num_iter : The number of iterations completed.\n error : The current error."}
{"_id": "q_14205", "text": "Returns a Bool of whether the algorithm has found a satisfactory minimum"}
{"_id": "q_14206", "text": "Checks if the full J should be updated.\n\n Right now, just updates after update_J_frequency loops"}
{"_id": "q_14207", "text": "Updates J, JTJ, and internal counters."}
{"_id": "q_14208", "text": "Execute a Broyden update of J"}
{"_id": "q_14209", "text": "Execute an eigen update of J"}
{"_id": "q_14210", "text": "Updates self.J, returns nothing"}
{"_id": "q_14211", "text": "Takes an array param_vals, updates function, returns the new error"}
{"_id": "q_14212", "text": "Updates the opt_obj, returns new error."}
{"_id": "q_14213", "text": "Resets the particle groups and optionally the region size and damping.\n\n Parameters\n ----------\n new_region_size : : Int or 3-element list-like of ints, optional\n The region size for sub-blocking particles. Default is 40\n do_calc_size : Bool, optional\n If True, calculates the region size internally based on\n the maximum allowed memory. Default is True\n new_damping : Float or None, optional\n The new damping of the optimizer. Set to None to leave\n as the default for LMParticles. Default is None.\n new_max_mem : Numeric, optional\n The maximum allowed memory for J to occupy. Default is 1e9"}
{"_id": "q_14214", "text": "workhorse for the self.do_run_xx methods."}
{"_id": "q_14215", "text": "Calls LMParticles.do_internal_run for each group of particles."}
{"_id": "q_14216", "text": "Resets the initial radii used for updating the particles. Call\n if any of the particle radii or positions have been changed\n external to the augmented state."}
{"_id": "q_14217", "text": "Resets the aug_state and the LMEngine"}
{"_id": "q_14218", "text": "Returns an object with the number of shares a link has had using\n Buffer.\n\n www will be stripped, but other subdomains will not."}
{"_id": "q_14219", "text": "Take a sample from a field given flat indices or a shaped slice\n\n Parameters\n -----------\n inds : list of indices\n One dimensional (raveled) indices to return from the field\n\n slicer : slice object\n A shaped (3D) slicer that returns a section of image\n\n flat : boolean\n Whether to flatten the sampled item before returning"}
{"_id": "q_14220", "text": "Setup the image model formation equation and corresponding objects into\n their various objects. `mdl` is a `peri.models.Model` object"}
{"_id": "q_14221", "text": "Switch out the data for the model's recreation of the data."}
{"_id": "q_14222", "text": "Get the tiles corresponding to a particular section of image needed to\n be updated. Inputs are the parameters and values. Returned is the\n padded tile, inner tile, and slicer to go between, but accounting for\n wrap with the edge of the image as necessary."}
{"_id": "q_14223", "text": "Creates an image, as a `peri.util.Image`, which is similar\n to the image in the tutorial"}
{"_id": "q_14224", "text": "Set the overall shape of the calculation area. The total shape of that\n the calculation can possibly occupy, in pixels. The second, inner, is\n the region of interest within the image."}
{"_id": "q_14225", "text": "Notify parent of a parameter change"}
{"_id": "q_14226", "text": "Set the shape for all components"}
{"_id": "q_14227", "text": "Ensure that shared parameters are the same value everywhere"}
{"_id": "q_14228", "text": "Read all environment variables to see if they contain PERI"}
{"_id": "q_14229", "text": "List all pending memberships, listed only for group admins."}
{"_id": "q_14230", "text": "List all user pending memberships."}
{"_id": "q_14231", "text": "Create new group."}
{"_id": "q_14232", "text": "Manage your group."}
{"_id": "q_14233", "text": "List user group members."}
{"_id": "q_14234", "text": "Approve a user."}
{"_id": "q_14235", "text": "Get an initial featuring of sphere positions in an image.\n\n Parameters\n -----------\n image : :class:`peri.util.Image` object\n Image object which defines the image file as well as the region.\n\n feature_rad : float\n Radius of objects to find, in pixels. This is a featuring radius\n and not a real radius, so a better value is frequently smaller\n than the real radius (half the actual radius is good). If ``use_tp``\n is True, then twice ``feature_rad`` is passed as trackpy's\n ``diameter`` keyword.\n\n dofilter : boolean, optional\n Whether to remove the background before featuring. Doing so can\n often greatly increase the success of initial featuring and\n decrease later optimization time. Filtering functions by fitting\n the image to a low-order polynomial and featuring the residuals.\n In doing so, this will change the mean intensity of the featured\n image and hence the good value of ``minmass`` will change when\n ``dofilter`` is True. Default is False.\n\n order : 3-element tuple, optional\n If `dofilter`, the 2+1D Leg Poly approximation to the background\n illumination field. Default is (3,3,3).\n\n Other Parameters\n ----------------\n invert : boolean, optional\n Whether to invert the image for featuring. Set to True if the\n image is dark particles on a bright background. Default is True\n minmass : Float or None, optional\n The minimum mass/masscut of a particle. Default is None, which\n calculates internally.\n use_tp : Bool, optional\n Whether or not to use trackpy. Default is False, since trackpy\n cuts out particles at the edge.\n\n Returns\n --------\n positions : np.ndarray [N,3]\n Positions of the particles in order (z,y,x) in image pixel units.\n\n Notes\n -----\n Optionally filters the image by fitting the image I(x,y,z) to a\n polynomial, then subtracts this fitted intensity variation and uses\n centroid methods to find the particles."}
{"_id": "q_14236", "text": "Completely optimizes a state from an image of roughly monodisperse\n particles.\n\n The user can interactively select the image. The state is periodically\n saved during optimization, with different filename for different stages\n of the optimization.\n\n Parameters\n ----------\n statemaker : Function\n A statemaker function. Given arguments `im` (a\n :class:`~peri.util.Image`), `pos` (numpy.ndarray), `rad` (ndarray),\n and any additional `statemaker_kwargs`, must return a\n :class:`~peri.states.ImageState`. There is an example function in\n scripts/statemaker_example.py\n feature_rad : Int, odd\n The particle radius for featuring, as passed to locate_spheres.\n actual_rad : Float, optional\n The actual radius of the particles. Default is feature_rad\n im_name : string, optional\n The file name of the image to load. If not set, it is selected\n interactively through Tk.\n tile : :class:`peri.util.Tile`, optional\n The tile of the raw image to be analyzed. Default is None, the\n entire image.\n invert : Bool, optional\n Whether to invert the image for featuring, as passed to trackpy.\n Default is True.\n desc : String, optional\n A description to be inserted in saved state. The save name will\n be, e.g., '0.tif-peri-' + desc + 'initial-burn.pkl'. Default is ''\n use_full_path : Bool, optional\n Set to True to use the full path name for the image. Default\n is False.\n featuring_params : Dict, optional\n kwargs-like dict of any additional keyword arguments to pass to\n ``get_initial_featuring``, such as ``'use_tp'`` or ``'minmass'``.\n Default is ``{}``.\n statemaker_kwargs : Dict, optional\n kwargs-like dict of any additional keyword arguments to pass to\n the statemaker function. Default is ``{}``.\n\n Other Parameters\n ----------------\n max_mem : Numeric\n The maximum additional memory to use for the optimizers, as\n passed to optimize.burn. Default is 1e9.\n min_rad : Float, optional\n The minimum particle radius, as passed to addsubtract.add_subtract.\n Particles with a fitted radius smaller than this are identified\n as fake and removed. Default is 0.5 * actual_rad.\n max_rad : Float, optional\n The maximum particle radius, as passed to addsubtract.add_subtract.\n Particles with a fitted radius larger than this are identified\n as fake and removed. Default is 1.5 * actual_rad, however you\n may find better results if you make this more stringent.\n rz_order : int, optional\n If nonzero, the order of an additional augmented rscl(z)\n parameter for optimization. Default is 0; i.e. no rscl(z)\n optimization.\n zscale : Float, optional\n The zscale of the image. Default is 1.0\n\n Returns\n -------\n s : :class:`peri.states.ImageState`\n The optimized state.\n\n See Also\n --------\n feature_from_pos_rad : Using a previous state's globals and\n user-provided positions and radii as an initial guess,\n completely optimizes a state.\n\n get_particle_featuring : Using a previous state's globals and\n positions as an initial guess, completely optimizes a state.\n\n translate_featuring : Use a previous state's globals and\n centroids methods for an initial particle guess, completely\n optimizes a state.\n\n Notes\n -----\n Proceeds by centroid-featuring the image for an initial guess of\n particle positions, then optimizing the globals + positions until\n termination as called in _optimize_from_centroid.\n The ``Other Parameters`` are passed to _optimize_from_centroid."}
{"_id": "q_14237", "text": "Combines centroid featuring with the globals from a previous state.\n\n Runs trackpy.locate on an image, sets the globals from a previous state,\n calls _translate_particles\n\n Parameters\n ----------\n feature_rad : Int, odd\n The particle radius for featuring, as passed to locate_spheres.\n\n state_name : String or None, optional\n The name of the initially-optimized state. Default is None,\n which prompts the user to select the name interactively\n through a Tk window.\n im_name : String or None, optional\n The name of the new image to optimize. Default is None,\n which prompts the user to select the name interactively\n through a Tk window.\n use_full_path : Bool, optional\n Set to True to use the full path of the state instead of\n partial path names (e.g. /full/path/name/state.pkl vs\n state.pkl). Default is False\n actual_rad : Float or None, optional\n The initial guess for the particle radii. Default is the median\n of the previous state.\n invert : Bool\n Whether to invert the image for featuring, as passed to\n addsubtract.add_subtract and locate_spheres. Set to False\n if the image is bright particles on a dark background.\n Default is True (dark particles on bright background).\n featuring_params : Dict, optional\n kwargs-like dict of any additional keyword arguments to pass to\n ``get_initial_featuring``, such as ``'use_tp'`` or ``'minmass'``.\n Default is ``{}``.\n\n\n Other Parameters\n ----------------\n max_mem : Numeric\n The maximum additional memory to use for the optimizers, as\n passed to optimize.burn. Default is 1e9.\n desc : String, optional\n A description to be inserted in saved state. The save name will\n be, e.g., '0.tif-peri-' + desc + 'initial-burn.pkl'. Default is ''\n min_rad : Float, optional\n The minimum particle radius, as passed to addsubtract.add_subtract.\n Particles with a fitted radius smaller than this are identified\n as fake and removed. Default is 0.5 * actual_rad.\n max_rad : Float, optional\n The maximum particle radius, as passed to addsubtract.add_subtract.\n Particles with a fitted radius larger than this are identified\n as fake and removed. Default is 1.5 * actual_rad, however you\n may find better results if you make this more stringent.\n rz_order : int, optional\n If nonzero, the order of an additional augmented rscl(z)\n parameter for optimization. Default is 0; i.e. no rscl(z)\n optimization.\n do_polish : Bool, optional\n Set to False to only optimize the particles and add-subtract.\n Default is True, which then runs a polish afterwards.\n\n Returns\n -------\n s : :class:`peri.states.ImageState`\n The optimized state.\n\n See Also\n --------\n get_initial_featuring : Features an image from scratch, using\n centroid methods as initial particle locations.\n\n feature_from_pos_rad : Using a previous state's globals and\n user-provided positions and radii as an initial guess,\n completely optimizes a state.\n\n translate_featuring : Use a previous state's globals and\n centroids methods for an initial particle guess, completely\n optimizes a state.\n\n Notes\n -----\n The ``Other Parameters`` are passed to _translate_particles.\n Proceeds by:\n 1. Find a guess of the particle positions through centroid methods.\n 2. Optimize particle positions only.\n 3. Optimize particle positions and radii only.\n 4. Add-subtract missing and bad particles.\n 5. If polish, optimize the illumination, background, and particles.\n 6. If polish, optimize everything."}
{"_id": "q_14238", "text": "Links the state ``st`` psf zscale with the global zscale"}
{"_id": "q_14239", "text": "Workhorse for creating & optimizing states with an initial centroid\n guess.\n\n This is an example function that works for a particular microscope. For\n your own microscope, you'll need to change particulars such as the psf\n type and the orders of the background and illumination.\n\n Parameters\n ----------\n im : :class:`~peri.util.RawImage`\n A RawImage of the data.\n pos : [N,3] element numpy.ndarray.\n The initial guess for the N particle positions.\n rad : N element numpy.ndarray.\n The initial guess for the N particle radii.\n\n slab : :class:`peri.comp.objs.Slab` or None, optional\n If not None, a slab corresponding to that in the image. Default\n is None.\n mem_level : {'lo', 'med-lo', 'med', 'med-hi', 'hi'}, optional\n A valid memory level for the state to control the memory overhead\n at the expense of accuracy. Default is `'hi'`\n\n Returns\n -------\n :class:`~peri.states.ImageState`\n An ImageState with a linked z-scale, a ConfocalImageModel, and\n all the necessary components with orders at which are useful for\n my particular test case."}
{"_id": "q_14240", "text": "Used to check if there is a dict in a dict"}
{"_id": "q_14241", "text": "Create random parameters for this ILM that mimic experiments\n as closely as possible without real assumptions."}
{"_id": "q_14242", "text": "Creates a barnes interpolant & calculates its values"}
{"_id": "q_14243", "text": "Returns details of the posting schedules associated with a social media\n profile."}
{"_id": "q_14244", "text": "Calculates the moments of the probability distribution p with vector v"}
{"_id": "q_14245", "text": "Calculates the 3D psf at a particular z pixel height\n\n Parameters\n ----------\n zint : float\n z pixel height in image coordinates , converted to 1/k by the\n function using the slab position as well\n\n size : int, list, tuple\n The size over which to calculate the psf, can be 1 or 3 elements\n for the different axes in image pixel coordinates\n\n zoffset : float\n Offset in pixel units to use in the calculation of the psf\n\n cutval : float\n If not None, the psf will be cut along a curve corresponding to\n p(r) == 0 with exponential damping exp(-d^4)\n\n getextent : boolean\n If True, also return the extent of the psf in pixels for example\n to get the support size. Can only be used with cutval."}
{"_id": "q_14246", "text": "fftshift and pad the field with zeros until it has size finalshape.\n if zpad is off, then no padding is put on the z direction. returns\n the fourier transform of the field"}
{"_id": "q_14247", "text": "Pack the parameters into the form necessary for the integration\n routines above. For example, packs for calculate_linescan_psf"}
{"_id": "q_14248", "text": "Calculates a linescan psf"}
{"_id": "q_14249", "text": "Sends ICMP echo requests to destination `dst` `count` times.\n Returns a deferred which fires when responses are finished."}
{"_id": "q_14250", "text": "Make an HTTP request and return the body"}
{"_id": "q_14251", "text": "Expire any items in the cache older than `age` seconds"}
{"_id": "q_14252", "text": "Set a key `k` to value `v`"}
{"_id": "q_14253", "text": "Return True if key `k` exists"}
{"_id": "q_14254", "text": "Given a record timestamp, verify the chain integrity.\n\n :param timestamp: UNIX time / POSIX time / Epoch time\n :return: 'True' if the timestamp fits the chain. 'False' otherwise."}
{"_id": "q_14255", "text": "Make request and convert JSON response to python objects"}
{"_id": "q_14256", "text": "Returns all active bets"}
{"_id": "q_14257", "text": "Return bets with given filters and ordering.\n\n :param type: return bets only with this type.\n Use None to include all (default).\n :param order_by: '-last_stake' or 'last_stake' to sort by stake's\n created date or None for default ordering.\n :param state: one of 'active', 'closed', 'all' (default 'active').\n :param project_id: return bets associated with given project id in kava\n :param page: default 1.\n :param page_size: page size (default 100)."}
{"_id": "q_14258", "text": "Subscribe to event for given bet ids."}
{"_id": "q_14259", "text": "Convert a string of XML which represents a NIST Randomness Beacon value\n into a 'NistBeaconValue' object.\n\n :param input_xml: XML to build a 'NistBeaconValue' from\n :return: A 'NistBeaconValue' object, 'None' otherwise"}
{"_id": "q_14260", "text": "Returns a 'minified' version of the javascript content"}
{"_id": "q_14261", "text": "Passes each parsed log line to `fn`\n This is a better idea than storing a giant log file in memory"}
{"_id": "q_14262", "text": "Returns a big list of all log lines since the last run"}
{"_id": "q_14263", "text": "Validate secret link token.\n\n :param token: Token value.\n :param expected_data: A dictionary of key/values that must be present\n in the data part of the token (i.e. included via ``extra_data`` in\n ``create_token``)."}
{"_id": "q_14264", "text": "Get cryptographic engine."}
{"_id": "q_14265", "text": "32bit counter aggregator with wrapping"}
{"_id": "q_14266", "text": "Sets up source objects from the given config"}
{"_id": "q_14267", "text": "Callback that all event sources call when they have a new event\n or list of events"}
{"_id": "q_14268", "text": "Watchdog timer function. \n\n Recreates sources which have not generated events in 10*interval if\n they have watchdog set to true in their configuration"}
{"_id": "q_14269", "text": "Validate that date is in the future."}
{"_id": "q_14270", "text": "Validate message."}
{"_id": "q_14271", "text": "Verify token and save in session if it's valid."}
{"_id": "q_14272", "text": "Return a basic meaningful name based on device type"}
{"_id": "q_14273", "text": "Do not warn on external images."}
{"_id": "q_14274", "text": "Connect receivers to signals."}
{"_id": "q_14275", "text": "Receiver for request-accepted signal."}
{"_id": "q_14276", "text": "Receiver for request-accepted signal to send email notification."}
{"_id": "q_14277", "text": "Receiver for request-created signal to send email notification."}
{"_id": "q_14278", "text": "Receiver for request-rejected signal to send email notification."}
{"_id": "q_14279", "text": "Render a template and send as email."}
{"_id": "q_14280", "text": "Create a new secret link."}
{"_id": "q_14281", "text": "Validate a secret link token.\n\n Only queries the database if token is valid to determine that the token\n has not been revoked."}
{"_id": "q_14282", "text": "Revoke a secret link."}
{"_id": "q_14283", "text": "Get access request for a specific receiver."}
{"_id": "q_14284", "text": "Confirm that the sender's email is valid."}
{"_id": "q_14285", "text": "Given required properties from a NistBeaconValue,\n compute the SHA512Hash object.\n\n :param version: NistBeaconValue.version\n :param frequency: NistBeaconValue.frequency\n :param timestamp: NistBeaconValue.timestamp\n :param seed_value: NistBeaconValue.seed_value\n :param prev_output: NistBeaconValue.previous_output_value\n :param status_code: NistBeaconValue.status_code\n\n :return: SHA512 Hash for NistBeaconValue signature verification"}
{"_id": "q_14286", "text": "Verify a given NIST message hash and signature for a beacon value.\n\n :param timestamp: The timestamp of the record being verified.\n :param message_hash:\n The hash that was carried out over the message.\n This is an object belonging to the `Crypto.Hash` module.\n :param signature: The signature that needs to be validated.\n :return: True if verification is correct. False otherwise."}
{"_id": "q_14287", "text": "Template filter to check if a record is embargoed."}
{"_id": "q_14288", "text": "Create an access request."}
{"_id": "q_14289", "text": "Confirm email address."}
{"_id": "q_14290", "text": "Creates a generic endpoint connection that doesn't finish"}
{"_id": "q_14291", "text": "Get reverse direction of ordering."}
{"_id": "q_14292", "text": "Get column which is being order by."}
{"_id": "q_14293", "text": "Get query with correct ordering."}
{"_id": "q_14294", "text": "Open the file referenced in this object, and scrape the version.\n\n :return:\n The version as a string, an empty string if there is no match\n to the magic_line, or any file exception messages encountered."}
{"_id": "q_14295", "text": "Set the version for this given file.\n\n :param new_version: The new version string to set."}
{"_id": "q_14296", "text": "Configure SSH client options"}
{"_id": "q_14297", "text": "Starts the timer for this source"}
{"_id": "q_14298", "text": "Called for every timer tick. Calls self.get which can be a deferred\n and passes that result back to the queueBack method\n \n Returns a deferred"}
{"_id": "q_14299", "text": "List pending access requests and shared links."}
{"_id": "q_14300", "text": "Create a TCP connection to Riemann with automatic reconnection"}
{"_id": "q_14301", "text": "Remove all or self.queueDepth events from the queue"}
{"_id": "q_14302", "text": "Create a UDP connection to Riemann"}
{"_id": "q_14303", "text": "Sets up HTTP connector and starts queue timer"}
{"_id": "q_14304", "text": "Adapts an Event object to a Riemann protobuf event Event"}
{"_id": "q_14305", "text": "Encode a list of Tensor events with protobuf"}
{"_id": "q_14306", "text": "Send a Tensor Event to Riemann"}
{"_id": "q_14307", "text": "Opens local preview of your blog website"}
{"_id": "q_14308", "text": "Get the relative path to the API resource collection\n\n If self.collection_endpoint is not set, it will default to the lowercase name of the resource class plus an \"s\" and the terminating \"/\"\n :param cls: Resource class\n :return: Relative path to the resource collection"}
{"_id": "q_14309", "text": "Looks for errors in source code of your blog"}
{"_id": "q_14310", "text": "Retrieve preview results for ID."}
{"_id": "q_14311", "text": "value_class is initially a string with the import path to the resource class, but we need to get the actual class before doing any work\n\n We do not expect the actual class to be in value_class from the beginning, to avoid nasty chicken-and-egg import errors"}
{"_id": "q_14312", "text": "Saves changes and sends them to GitHub"}
{"_id": "q_14313", "text": "Return the given number as a string with a sign in front of it, ie. `+` if the number is positive, `-` otherwise."}
{"_id": "q_14314", "text": "Show Zebra balance.\n\n Like the hours balance, vacation left, etc."}
{"_id": "q_14315", "text": "Show all messages in the `messages` key of the given dict."}
{"_id": "q_14316", "text": "Loop through messages and execute tasks"}
{"_id": "q_14317", "text": "Return True if it's time to log"}
{"_id": "q_14318", "text": "Abort an initiated SASL authentication process. The expected result\n state is ``failure``."}
{"_id": "q_14319", "text": "Template tag that renders the footer information based on the\n authenticated user's permissions."}
{"_id": "q_14320", "text": "Builds the parameters needed to present the user with a datatrans payment form.\n\n :param amount: The amount and currency we want the user to pay\n :param client_ref: A unique reference for this payment\n :return: The parameters needed to display the datatrans form"}
{"_id": "q_14321", "text": "Builds the parameters needed to present the user with a datatrans form to register a credit card.\n Contrary to a payment form, datatrans will not show an amount.\n\n :param client_ref: A unique reference for this alias capture.\n :return: The parameters needed to display the datatrans form"}
{"_id": "q_14322", "text": "Generates the circle."}
{"_id": "q_14323", "text": "Given a string key it returns a long value,\n this long value represents a place on the hash ring.\n\n md5 is currently used because it mixes well."}
{"_id": "q_14324", "text": "Get the ``portMappings`` field for the app container."}
{"_id": "q_14325", "text": "Construct widget."}
{"_id": "q_14326", "text": "Perform post-construction operations."}
{"_id": "q_14327", "text": "Handle activation of item in listing."}
{"_id": "q_14328", "text": "Handle selection of path segment."}
{"_id": "q_14329", "text": "Raises a `requests.exceptions.HTTPError` if the response has a non-200\n status code."}
{"_id": "q_14330", "text": "Sometimes we need the protocol object so that we can manipulate the\n underlying transport in tests."}
{"_id": "q_14331", "text": "Callback to collect the Server-Sent Events content of a response. Callbacks\n passed will receive event data.\n\n :param response:\n The response from the SSE request.\n :param handler:\n The handler for the SSE protocol."}
{"_id": "q_14332", "text": "Recursively make requests to each endpoint in ``endpoints``."}
{"_id": "q_14333", "text": "Get a JSON field from the response JSON.\n\n :param: response_json:\n The parsed JSON content of the response.\n :param: field_name:\n The name of the field in the JSON to get."}
{"_id": "q_14334", "text": "Parse value from string.\n\n Convert :code:`value` to the given type:\n\n .. code-block:: python\n\n >>> parser = Config()\n >>> parser.parse('12345', type_=int)\n <<< 12345\n >>>\n >>> parser.parse('1,2,3,4', type_=list, subtype=int)\n <<< [1, 2, 3, 4]\n\n :param value: string\n :param type\\\\_: the type to return\n :param subtype: subtype for iterator types\n :return: the parsed config value"}
{"_id": "q_14335", "text": "Parse a value from an environment variable.\n\n .. code-block:: python\n\n >>> os.environ['FOO']\n <<< '12345'\n >>>\n >>> os.environ['BAR']\n <<< '1,2,3,4'\n >>>\n >>> 'BAZ' in os.environ\n <<< False\n >>>\n >>> parser = Config()\n >>> parser.get('FOO', type_=int)\n <<< 12345\n >>>\n >>> parser.get('BAR', type_=list, subtype=int)\n <<< [1, 2, 3, 4]\n >>>\n >>> parser.get('BAZ', default='abc123')\n <<< 'abc123'\n >>>\n >>> parser.get('FOO', type_=int, mapper=lambda x: x*10)\n <<< 123450\n\n :param key: the key to look up the value under\n :param default: default value to return when when no value is present\n :param type\\\\_: the type to return\n :param subtype: subtype for iterator types\n :param mapper: a function to post-process the value with\n :return: the parsed config value"}
{"_id": "q_14336", "text": "Perform a request to a specific endpoint. Raise an error if the status\n code indicates a client or server error."}
{"_id": "q_14337", "text": "Check the result of each request that we made. If a failure occurred,\n but some requests succeeded, log and count the failures. If all\n requests failed, raise an error.\n\n :return:\n The list of responses, with a None value for any requests that\n failed."}
{"_id": "q_14338", "text": "Finalize options to be used."}
{"_id": "q_14339", "text": "Run clean."}
{"_id": "q_14340", "text": "Set up a client key if one does not exist already.\n\n https://gist.github.com/glyph/27867a478bb71d8b6046fbfb176e1a33#file-local-certs-py-L32-L50\n\n :type pem_path: twisted.python.filepath.FilePath\n :param pem_path:\n The path to the certificate directory to use.\n :rtype: twisted.internet.defer.Deferred"}
{"_id": "q_14341", "text": "Set up a client key in Vault if one does not exist already.\n\n :param client:\n The Vault API client to use.\n :param mount_path:\n The Vault key/value mount path to use.\n :rtype: twisted.internet.defer.Deferred"}
{"_id": "q_14342", "text": "Get a single value for the given key out of the given set of headers.\n\n :param twisted.web.http_headers.Headers headers:\n The set of headers in which to look for the header value\n :param str key:\n The header key"}
{"_id": "q_14343", "text": "Perform a request.\n\n :param method:\n The HTTP method to use (example is `GET`).\n :param url:\n The URL to use. The default value is the URL this client was\n created with (`self.url`) (example is `http://localhost:8080`)\n :param kwargs:\n Any other parameters that will be passed to `treq.request`, for\n example headers. Or any URL parameters to override, for example\n path, query or fragment."}
{"_id": "q_14344", "text": "Fetch and return new children.\n\n Will only fetch children whilst canFetchMore is True.\n\n .. note::\n\n It is the caller's responsibility to add each fetched child to this\n parent if desired using :py:meth:`Item.addChild`."}
{"_id": "q_14345", "text": "Run the server, i.e. start listening for requests on the given host and\n port.\n\n :param reactor: The ``IReactorTCP`` to use.\n :param endpoint_description:\n The Twisted description for the endpoint to listen on.\n :return:\n A deferred that returns an object that provides ``IListeningPort``."}
{"_id": "q_14346", "text": "Create a marathon-acme instance.\n\n :param client_creator:\n The txacme client creator function.\n :param cert_store:\n The txacme certificate store instance.\n :param acme_email:\n Email address to use when registering with the ACME service.\n :param allow_multiple_certs:\n Whether to allow multiple certificates per app port.\n :param marathon_addr:\n Address for the Marathon instance to find app domains that require\n certificates.\n :param marathon_timeout:\n Amount of time in seconds to wait for response headers to be received\n from Marathon.\n :param sse_timeout:\n Amount of time in seconds to wait for some event data to be received\n from Marathon.\n :param mlb_addrs:\n List of addresses for marathon-lb instances to reload when a new\n certificate is issued.\n :param group:\n The marathon-lb group (``HAPROXY_GROUP``) to consider when finding\n app domains.\n :param reactor: The reactor to use."}
{"_id": "q_14347", "text": "Initialise the storage directory with the certificates directory and a\n default wildcard self-signed certificate for HAProxy.\n\n :return: the storage path and certs path"}
{"_id": "q_14348", "text": "Parse the field and value from a line."}
{"_id": "q_14349", "text": "Translates bytes into lines, and calls lineReceived.\n\n Copied from ``twisted.protocols.basic.LineOnlyReceiver`` but using\n str.splitlines() to split on ``\\r\\n``, ``\\n``, and ``\\r``."}
{"_id": "q_14350", "text": "Dispatch the event to the handler."}
{"_id": "q_14351", "text": "Fetch the list of apps from Marathon, find the domains that require\n certificates, and issue certificates for any domains that don't already\n have a certificate."}
{"_id": "q_14352", "text": "Issue a certificate for the given domain."}
{"_id": "q_14353", "text": "When removing an object, other objects with references to the current\n object should remove those references. This function identifies objects\n with forward references to the current object, then removes those\n references.\n\n :param obj: Object to which forward references should be removed"}
{"_id": "q_14354", "text": "Ensure that all forward references on the provided object have the\n appropriate backreferences.\n\n :param StoredObject obj: Database record\n :param list fields: Optional list of field names to check"}
{"_id": "q_14355", "text": "Initializes the bot, plugins, and everything."}
{"_id": "q_14356", "text": "Connects to slack and enters the main loop.\n\n * start - If True, rtm.start API is used. Else rtm.connect API is used\n\n For more info, refer to\n https://python-slackclient.readthedocs.io/en/latest/real_time_messaging.html#rtm-start-vs-rtm-connect"}
{"_id": "q_14357", "text": "Does cleanup of bot and plugins."}
{"_id": "q_14358", "text": "Sends a message to the specified channel\n\n * channel - The channel to send to. This can be a SlackChannel object, a channel id, or a channel name\n (without the #)\n * text - String to send\n * thread - reply to the thread. See https://api.slack.com/docs/message-threading#threads_party\n * reply_broadcast - Set to true to indicate your reply is germane to all members of a channel"}
{"_id": "q_14359", "text": "Sends a message to a user as an IM\n\n * user - The user to send to. This can be a SlackUser object, a user id, or the username (without the @)\n * text - String to send"}
{"_id": "q_14360", "text": "Run an external command in a separate process and detach it from the current process. Excepting\n `stdout`, `stderr`, and `stdin`, all file descriptors are closed after forking. If `daemonize`\n is True then the parent process exits. All stdio is redirected to `os.devnull` unless\n specified. The `preexec_fn`, `shell`, `cwd`, and `env` parameters are the same as their `Popen`\n counterparts. Return the PID of the child process if not daemonized."}
{"_id": "q_14361", "text": "Close open file descriptors."}
{"_id": "q_14362", "text": "Redirect a system stream to the provided target."}
{"_id": "q_14363", "text": "Takes a SlackEvent, parses it for a command, and runs it against the registered plugins"}
{"_id": "q_14364", "text": "Show current allow and deny blocks for the given acl."}
{"_id": "q_14365", "text": "Create a new acl."}
{"_id": "q_14366", "text": "Delete an acl."}
{"_id": "q_14367", "text": "Applies given HTML attributes to each field widget of a given form.\n\n Example:\n\n set_form_widgets_attrs(my_form, {'class': 'clickable'})"}
{"_id": "q_14368", "text": "Returns a module from a given app by its name.\n\n :param str app_name:\n :param str module_name:\n :rtype: module or None"}
{"_id": "q_14369", "text": "Run the mongod process."}
{"_id": "q_14370", "text": "Create a proxy to a class instance stored in ``proxies``.\n\n :param class BaseSchema: Base schema (e.g. ``StoredObject``)\n :param str label: Name of class variable to set\n :param class ProxiedClass: Class to get or create\n :param function get_key: Extension-specific key function; may return e.g.\n the current Flask request"}
{"_id": "q_14371", "text": "Class decorator factory; adds proxy class variables to target class.\n\n :param dict proxy_map: Mapping between class variable labels and proxied\n classes\n :param function get_key: Extension-specific key function; may return e.g.\n the current Flask request"}
{"_id": "q_14372", "text": "Return primary key; if value is StoredObject, verify\n that it is loaded."}
{"_id": "q_14373", "text": "Assign to a nested dictionary.\n\n :param dict data: Dictionary to mutate\n :param value: Value to set\n :param list *keys: List of nested keys\n\n >>> data = {}\n >>> set_nested(data, 'hi', 'k0', 'k1', 'k2')\n >>> data\n {'k0': {'k1': {'k2': 'hi'}}}"}
{"_id": "q_14374", "text": "Retrieve user by username"}
{"_id": "q_14375", "text": "Sets permissions on user object"}
{"_id": "q_14376", "text": "Schedules a function to be called after some period of time.\n\n * duration - time in seconds to wait before firing\n * func - function to be called\n * args - arguments to pass to the function"}
{"_id": "q_14377", "text": "Returns Gravatar image URL for a given string or UserModel.\n\n Example:\n\n {% load gravatar %}\n {% gravatar_get_url user_model %}\n\n :param UserModel, str obj:\n :param int size:\n :param str default:\n :return:"}
{"_id": "q_14378", "text": "Returns Gravatar image HTML tag for a given string or UserModel.\n\n Example:\n\n {% load gravatar %}\n {% gravatar_get_img user_model %}\n\n :param UserModel, str obj:\n :param int size:\n :param str default:\n :return:"}
{"_id": "q_14379", "text": "Causes the bot to gracefully shutdown."}
{"_id": "q_14380", "text": "Causes the bot to resume operation in the channel.\n\n Usage:\n !wake [channel name] - unignore the specified channel (or current if none specified)"}
{"_id": "q_14381", "text": "Checks if the path is correct and exists; must be absolute, a directory, and not a file."}
{"_id": "q_14382", "text": "Checks if the URL contains S3. Not an accurate validation of the URL."}
{"_id": "q_14383", "text": "Return a valid absolute path. filename can be relative or absolute."}
{"_id": "q_14384", "text": "Select path by criterion.\n\n :param filters: a lambda function that takes a `pathlib.Path` as input\n and returns a boolean.\n :param recursive: include files in subfolders or not.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u6839\u636efilters\u4e2d\u5b9a\u4e49\u7684\u6761\u4ef6\u9009\u62e9\u8def\u5f84\u3002"}
{"_id": "q_14385", "text": "Select dir path by criterion.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u6839\u636efilters\u4e2d\u5b9a\u4e49\u7684\u6761\u4ef6\u9009\u62e9\u6587\u4ef6\u5939\u3002"}
{"_id": "q_14386", "text": "Count how many files are in this directory, including files in subfolders."}
{"_id": "q_14387", "text": "Select file path by extension.\n\n :param ext:\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u9009\u62e9\u4e0e\u9884\u5b9a\u4e49\u7684\u82e5\u5e72\u4e2a\u6269\u5c55\u540d\u5339\u914d\u7684\u6587\u4ef6\u3002"}
{"_id": "q_14388", "text": "Select file path by text pattern in file name.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u9009\u62e9\u6587\u4ef6\u540d\u4e2d\u5305\u542b\u6307\u5b9a\u5b50\u5b57\u7b26\u4e32\u7684\u6587\u4ef6\u3002"}
{"_id": "q_14389", "text": "Select file path by text pattern in absolute path.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u9009\u62e9\u7edd\u5bf9\u8def\u5f84\u4e2d\u5305\u542b\u6307\u5b9a\u5b50\u5b57\u7b26\u4e32\u7684\u6587\u4ef6\u3002"}
{"_id": "q_14390", "text": "Select file path by size.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u9009\u62e9\u6240\u6709\u6587\u4ef6\u5927\u5c0f\u5728\u4e00\u5b9a\u8303\u56f4\u5185\u7684\u6587\u4ef6\u3002"}
{"_id": "q_14391", "text": "Select file path by modify time.\n\n :param min_time: lower bound timestamp\n :param max_time: upper bound timestamp\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u9009\u62e9\u6240\u6709 :attr:`pathlib_mate.pathlib2.Path.mtime` \u5728\u4e00\u5b9a\u8303\u56f4\u5185\u7684\u6587\u4ef6\u3002"}
{"_id": "q_14392", "text": "Select file path by create time.\n\n :param min_time: lower bound timestamp\n :param max_time: upper bound timestamp\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u9009\u62e9\u6240\u6709 :attr:`pathlib_mate.pathlib2.Path.ctime` \u5728\u4e00\u5b9a\u8303\u56f4\u5185\u7684\u6587\u4ef6\u3002"}
{"_id": "q_14393", "text": "Make a zip archive.\n\n :param dst: output file path. if not given, will be automatically assigned.\n :param filters: custom path filter. By default it allows any file.\n :param compress: compress or not.\n :param overwrite: overwrite existing archive or not.\n :param verbose: display log or not.\n :return:"}
{"_id": "q_14394", "text": "Decorate methods when locking repository is required."}
{"_id": "q_14395", "text": "Decorate methods when synchronizing repository is required."}
{"_id": "q_14396", "text": "Investigate pickling errors."}
{"_id": "q_14397", "text": "Walk the repository and yield all found files relative path joined with file name.\n\n :parameters:\n #. relativePath (str): The relative path from which to start the walk."}
{"_id": "q_14398", "text": "Walk repository and yield all found directories relative path\n\n :parameters:\n #. relativePath (str): The relative path from which to start the walk."}
{"_id": "q_14399", "text": "Walk repository and yield all found directories relative path.\n\n :parameters:\n #. relativePath (str): The relative path from which to start the walk."}
{"_id": "q_14400", "text": "Walk a certain directory in repository and yield all found directories relative path.\n\n :parameters:\n #. relativePath (str): The relative path of the directory."}
{"_id": "q_14401", "text": "Synchronizes the Repository information with the directory.\n All registered but missing files and directories in the directory\n will be automatically removed from the Repository.\n\n :parameters:\n #. verbose (boolean): Whether to warn and inform about any abnormalities."}
{"_id": "q_14402", "text": "Create a repository at given real path or load any existing one.\n This method ensures the creation of the directory in the system if it is missing.\\n\n Unlike create_repository, this method doesn't erase any existing repository\n in the path but loads it instead.\n\n **N.B. On some systems and some paths, creating a directory may require root permissions.**\n\n :Parameters:\n #. path (string): The real absolute path where to create the Repository.\n If '.' or an empty string is passed, the current working directory will be used.\n #. info (None, object): Any information that can identify the repository.\n #. verbose (boolean): Whether to be warned and informed about any abnormalities."}
{"_id": "q_14403", "text": "Remove .pyrepinfo file from path if exists and related files and directories\n when respective flags are set to True.\n\n :Parameters:\n #. path (None, string): The path of the directory where to remove an existing repository.\n If None, current repository is removed if initialized.\n #. relatedFiles (boolean): Whether to also remove all related files from the system.\n #. relatedFolders (boolean): Whether to also remove all related directories from the system.\n Directories will be removed only if they are left empty after removing the files.\n #. verbose (boolean): Whether to be warned and informed about any abnormalities."}
{"_id": "q_14404", "text": "Save repository .pyrepinfo to disk."}
{"_id": "q_14405", "text": "Create a tar file package of all the repository files and directories.\n Only files and directories that are stored in the repository info\n are stored in the package tar file.\n\n **N.B. On some systems packaging requires root permissions.**\n\n :Parameters:\n #. path (None, string): The real absolute path where to create the package.\n If None, it will be created in the same directory as the repository.\n If '.' or an empty string is passed, the current working directory will be used.\n #. name (None, string): The name to give to the package file\n If None, the package directory name will be used with the appropriate extension added.\n #. mode (None, string): The writing mode of the tarfile.\n If None, automatically the best compression mode will be chosen.\n Available modes are ('w', 'w:', 'w:gz', 'w:bz2')"}
{"_id": "q_14406", "text": "Get directory info from the Repository.\n\n :Parameters:\n #. relativePath (string): The relative to the repository path of the directory.\n\n :Returns:\n #. info (None, dictionary): The directory information dictionary.\n If None, it means an error has occurred.\n #. error (string): The error message if any error occurred."}
{"_id": "q_14407", "text": "Get parent directory info of a file or directory from the Repository.\n\n :Parameters:\n #. relativePath (string): The relative to the repository path of the file or directory of which the parent directory info is requested.\n\n :Returns:\n #. info (None, dictionary): The directory information dictionary.\n If None, it means an error has occurred.\n #. error (string): The error message if any error occurred."}
{"_id": "q_14408", "text": "Get file information dict from the repository given its relative path and name.\n\n :Parameters:\n #. relativePath (string): The relative to the repository path of the directory where the file is.\n #. name (string): The file name.\n If None is given, name will be split from relativePath.\n\n :Returns:\n #. info (None, dictionary): The file information dictionary.\n If None, it means an error has occurred.\n #. errorMessage (string): The error message if any error occurred."}
{"_id": "q_14409", "text": "Adds a directory in the repository and creates its\n attribute in the Repository with a utc timestamp.\n It ensures adding all the missing directories in the path.\n\n :Parameters:\n #. relativePath (string): The relative to the repository path of the directory to add in the repository.\n #. info (None, string, pickable object): Any random info about the folder.\n\n :Returns:\n #. info (dict): The directory info dict."}
{"_id": "q_14410", "text": "Remove directory from repository.\n\n :Parameters:\n #. relativePath (string): The relative to the repository path of the directory to remove from the repository.\n #. removeFromSystem (boolean): Whether to also remove directory and all files from the system.\\n\n Only files saved in the repository will be removed, along with directories left empty."}
{"_id": "q_14411", "text": "Rename a directory in the repository. It ensures renaming the file in the system.\n\n :Parameters:\n #. relativePath (string): The relative to the repository path of the directory where the file is located.\n #. name (string): The file name.\n #. newName (string): The file's new name.\n #. replace (boolean): Whether to force renaming when new folder name exists in the system.\n It fails when new folder name is registered in repository.\n #. verbose (boolean): Whether to be warned and informed about any abnormalities."}
{"_id": "q_14412", "text": "Copy an existing system file to the repository and create its\n attribute in the Repository with a utc timestamp.\n\n :Parameters:\n #. path (str): The full path of the file to copy into the repository.\n #. relativePath (str): The relative to the repository path of the directory where the file should be dumped.\n If relativePath does not exist, it will be created automatically.\n #. name (string): The file name.\n If None is given, name will be split from path.\n #. description (None, string, pickable object): Any random description about the file.\n #. replace (boolean): Whether to replace any existing file with the same name.\n #. verbose (boolean): Whether to be warned and informed about any abnormalities."}
{"_id": "q_14413", "text": "Get a list of keys for the accounts"}
{"_id": "q_14414", "text": "Ensure value is string."}
{"_id": "q_14415", "text": "Build a workflow definition from the cloud_harness task."}
{"_id": "q_14416", "text": "Monitors data kept in files in the predefined directory in a new thread.\n\n Note: Due to the underlying library, it may take a few milliseconds after this method is started for changes to\n start being noticed."}
{"_id": "q_14417", "text": "Stops monitoring the predefined directory."}
{"_id": "q_14418", "text": "Called when a file in the monitored directory has been moved.\n\n Breaks move down into a delete and a create (which it is sometimes detected as!).\n :param event: the file system event"}
{"_id": "q_14419", "text": "List the contents of the archive directory."}
{"_id": "q_14420", "text": "Restore a project from the archive."}
{"_id": "q_14421", "text": "Tears down all temp files and directories."}
{"_id": "q_14422", "text": "Copy this file to another place."}
{"_id": "q_14423", "text": "List the entities found directly under the given path.\n\n Args:\n path (str): The path of the entity to be listed. Must start with a '/'.\n\n Returns:\n The list of entity names directly under the given path:\n\n u'/12345/folder_1'\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14424", "text": "Download a file from storage service to local disk.\n\n Existing files on the target path will be overwritten.\n The download is not recursive, as it only works on files.\n\n Args:\n path (str): The path of the entity to be downloaded. Must start with a '/'.\n\n Returns:\n None\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14425", "text": "Upload local file content to a storage service destination folder.\n\n Args:\n local_file(str)\n dest_path(str):\n absolute Storage service path; the '/project' prefix is essential\n suffix should be the name the file will have in the destination folder\n i.e.: /project/folder/.../file_name\n mimetype(str): set the contentType attribute\n\n Returns:\n The uuid of created file entity as string\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14426", "text": "Delete an entity from the storage service using its path.\n\n Args:\n path(str): The path of the entity to be deleted\n\n Returns:\n The uuid of created file entity as string\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14427", "text": "Creates a Docker client.\n\n Will raise a `ConnectionError` if the Docker daemon is not accessible.\n :return: the Docker client"}
{"_id": "q_14428", "text": "Decorate methods when repository path is required."}
{"_id": "q_14429", "text": "Get repository descriptive stats\n\n :Returns:\n #. numberOfDirectories (integer): Number of directories in repository\n #. numberOfFiles (integer): Number of files in repository"}
{"_id": "q_14430", "text": "Reset repository instance."}
{"_id": "q_14431", "text": "Remove repository from path along with all repository-tracked files.\n\n :Parameters:\n #. path (None, string): The path of the repository to remove.\n #. removeEmptyDirs (boolean): Whether to remove remaining empty\n directories."}
{"_id": "q_14432", "text": "Get whether creating a file or a directory from the basename of the given\n path is allowed\n\n :Parameters:\n #. path (str): The absolute or relative path or simply the file\n or directory name.\n\n :Returns:\n #. allowed (bool): Whether name is allowed.\n #. message (None, str): Reason for the name to be forbidden."}
{"_id": "q_14433", "text": "Given a path, return relative path to directory\n\n :Parameters:\n #. path (str): Path as a string\n #. split (boolean): Whether to split path into its components\n\n :Returns:\n #. relativePath (str, list): Relative path as a string or as a list\n of components if split is True"}
{"_id": "q_14434", "text": "Check whether a given relative path is a repository file path\n\n :Parameters:\n #. relativePath (string): File relative path\n\n :Returns:\n #. isRepoFile (boolean): Whether file is a repository file.\n #. isFileOnDisk (boolean): Whether file is found on disk.\n #. isFileInfoOnDisk (boolean): Whether file info is found on disk.\n #. isFileClassOnDisk (boolean): Whether file class is found on disk."}
{"_id": "q_14435", "text": "Create a tar file package of all the repository files and directories.\n Only files and directories that are tracked in the repository\n are stored in the package tar file.\n\n **N.B. On some systems packaging requires root permissions.**\n\n :Parameters:\n #. path (None, string): The real absolute path where to create the\n package. If None, it will be created in the same directory as\n the repository. If '.' or an empty string is passed, the current\n working directory will be used.\n #. name (None, string): The name to give to the package file\n If None, the package directory name will be used with the\n appropriate extension added.\n #. mode (None, string): The writing mode of the tarfile.\n If None, automatically the best compression mode will be chosen.\n Available modes are ('w', 'w:', 'w:gz', 'w:bz2')"}
{"_id": "q_14436", "text": "Creates a new cross-service client."}
{"_id": "q_14437", "text": "Renames an item in this collection as a transaction.\n\n Will override if new key name already exists.\n :param key: the current name of the item\n :param new_key: the new name that the item should have"}
{"_id": "q_14438", "text": "Use the default hash method to return the hash value of a string.\n The default setting uses 'utf-8' encoding."}
{"_id": "q_14439", "text": "Return md5 hash value of a piece of a file\n\n Estimate processing time on:\n\n :param abspath: the absolute path to the file\n :param nbytes: only hash the first N bytes of the file. if 0 or None,\n hash the whole file\n\n CPU = i7-4600U 2.10GHz - 2.70GHz, RAM = 8.00 GB\n 1 second can process 0.25GB data\n\n - 0.59G - 2.43 sec\n - 1.3G - 5.68 sec\n - 1.9G - 7.72 sec\n - 2.5G - 10.32 sec\n - 3.9G - 16.0 sec"}
{"_id": "q_14440", "text": "Return sha256 hash value of a piece of a file\n\n Estimate processing time on:\n\n :param abspath: the absolute path to the file\n :param nbytes: only hash the first N bytes of the file. if 0 or None,\n hash the whole file"}
{"_id": "q_14441", "text": "Create a new folder having exactly the same structure as this directory.\n However, all files are just empty files with the same file names.\n\n :param dst: destination directory. The directory can't exist before\n you execute this.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u521b\u5efa\u4e00\u4e2a\u76ee\u5f55\u7684\u955c\u50cf\u62f7\u8d1d, \u4e0e\u62f7\u8d1d\u64cd\u4f5c\u4e0d\u540c\u7684\u662f, \u6587\u4ef6\u7684\u526f\u672c\u53ea\u662f\u5728\u6587\u4ef6\u540d\u4e0a\n \u4e0e\u539f\u4ef6\u4e00\u81f4, \u4f46\u662f\u662f\u7a7a\u6587\u4ef6, \u5b8c\u5168\u6ca1\u6709\u5185\u5bb9, \u6587\u4ef6\u5927\u5c0f\u4e3a0\u3002"}
{"_id": "q_14442", "text": "Execute every ``.py`` file as main script.\n\n :param py_exe: str, python command or python executable path.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u5c06\u76ee\u5f55\u4e0b\u7684\u6240\u6709Python\u6587\u4ef6\u4f5c\u4e3a\u4e3b\u811a\u672c\u7528\u5f53\u524d\u89e3\u91ca\u5668\u8fd0\u884c\u3002"}
{"_id": "q_14443", "text": "Trim trailing white space at the end of each line for every ``.py`` file.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u5c06\u76ee\u5f55\u4e0b\u7684\u6240\u6709\u88ab\u9009\u62e9\u7684\u6587\u4ef6\u4e2d\u884c\u672b\u7684\u7a7a\u683c\u5220\u9664\u3002"}
{"_id": "q_14444", "text": "Auto convert your python code in a directory to pep8 styled code.\n\n :param kwargs: arguments for ``autopep8.fix_code`` method.\n\n **\u4e2d\u6587\u6587\u6863**\n\n \u5c06\u76ee\u5f55\u4e0b\u7684\u6240\u6709Python\u6587\u4ef6\u7528pep8\u98ce\u683c\u683c\u5f0f\u5316\u3002\u589e\u52a0\u5176\u53ef\u8bfb\u6027\u548c\u89c4\u8303\u6027\u3002"}
{"_id": "q_14445", "text": "Get most recent modify time in timestamp."}
{"_id": "q_14446", "text": "Get most recent access time in timestamp."}
{"_id": "q_14447", "text": "Get most recent create time in timestamp."}
{"_id": "q_14448", "text": "Create a new storage service REST client.\n\n Arguments:\n environment: The service environment to be used for the client\n access_token: The access token used to authenticate with the\n service\n\n Returns:\n A storage_service.api.ApiClient instance\n\n Example:\n >>> storage_client = ApiClient.new(my_access_token)"}
{"_id": "q_14449", "text": "Get generic entity by UUID.\n\n Args:\n entity_id (str): The UUID of the requested entity.\n\n Returns:\n A dictionary describing the entity::\n\n {\n u'collab_id': 2271,\n u'created_by': u'303447',\n u'created_on': u'2017-03-10T12:50:06.077891Z',\n u'description': u'',\n u'entity_type': u'project',\n u'modified_by': u'303447',\n u'modified_on': u'2017-03-10T12:50:06.077946Z',\n u'name': u'2271',\n u'uuid': u'3abd8742-d069-44cf-a66b-2370df74a682'\n }\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14450", "text": "Get metadata of an entity.\n\n Args:\n entity_type (str): Type of the entity. Admitted values: ['project',\n 'folder', 'file'].\n entity_id (str): The UUID of the entity to be modified.\n\n Returns:\n A dictionary of the metadata::\n\n {\n u'bar': u'200',\n u'foo': u'100'\n }\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14451", "text": "Delete the selected metadata entries of an entity.\n\n Only deletes selected metadata keys; for a complete wipe, use set_metadata.\n\n Args:\n entity_type (str): Type of the entity. Admitted values: ['project',\n 'folder', 'file'].\n entity_id (str): The UUID of the entity to be modified.\n metadata_keys (list): A list of metadata keys to be deleted.\n\n Returns:\n A dictionary of the updated object metadata::\n\n {\n u'bar': u'200',\n u'foo': u'100'\n }\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14452", "text": "List all the projects the user has access to.\n\n This function does not retrieve all results; pages have\n to be manually retrieved by the caller.\n\n Args:\n hpc (bool): If 'true', the result will contain only the HPC projects\n (Unicore projects).\n access (str): If provided, the result will contain only projects\n where the user has the provided access.\n Admitted values: ['read', 'write'].\n name (str): Filter on the project name.\n collab_id (int): Filter on the collab id.\n page_size (int): Number of elements per page.\n page (int): Number of the page\n ordering (str): Indicate on which fields to sort the result.\n Prepend '-' to invert order. Multiple values can be provided.\n Ordering is supported on: ['name', 'created_on', 'modified_on'].\n Example: ordering='name,created_on'\n\n Returns:\n A dictionary of the results::\n\n {\n u'count': 256,\n u'next': u'http://link.to.next/page',\n u'previous': None,\n u'results': [{u'collab_id': 2079,\n u'created_by': u'258666',\n u'created_on': u'2017-02-23T15:09:27.626973Z',\n u'description': u'',\n u'entity_type': u'project',\n u'modified_by': u'258666',\n u'modified_on': u'2017-02-23T15:09:27.627025Z',\n u'name': u'2079',\n u'uuid': u'64a6ad2e-acd1-44a3-a4cd-6bd96e3da2b0'}]\n }\n\n\n Raises:\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14453", "text": "Get information on a given project\n\n Args:\n project_id (str): The UUID of the requested project.\n\n Returns:\n A dictionary describing the project::\n\n {\n u'collab_id': 2271,\n u'created_by': u'303447',\n u'created_on': u'2017-03-10T12:50:06.077891Z',\n u'description': u'',\n u'entity_type': u'project',\n u'modified_by': u'303447',\n u'modified_on': u'2017-03-10T12:50:06.077946Z',\n u'name': u'2271',\n u'uuid': u'3abd8742-d069-44cf-a66b-2370df74a682'\n }\n\n Raises:\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14454", "text": "Create a new project.\n\n Args:\n collab_id (int): The id of the collab the project should be created in.\n\n Returns:\n A dictionary of details of the created project::\n\n {\n u'collab_id': 12998,\n u'created_by': u'303447',\n u'created_on': u'2017-03-21T14:06:32.293902Z',\n u'description': u'',\n u'entity_type': u'project',\n u'modified_by': u'303447',\n u'modified_on': u'2017-03-21T14:06:32.293967Z',\n u'name': u'12998',\n u'uuid': u'2516442e-1e26-4de1-8ed8-94523224cc40'\n }\n\n Raises:\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14455", "text": "Create a new folder.\n\n Args:\n name (str): The name of the folder.\n parent (str): The UUID of the parent entity. The parent must be a\n project or a folder.\n\n Returns:\n A dictionary of details of the created folder::\n\n {\n u'created_by': u'303447',\n u'created_on': u'2017-03-21T14:06:32.293902Z',\n u'description': u'',\n u'entity_type': u'folder',\n u'modified_by': u'303447',\n u'modified_on': u'2017-03-21T14:06:32.293967Z',\n u'name': u'myfolder',\n u'parent': u'3abd8742-d069-44cf-a66b-2370df74a682',\n u'uuid': u'2516442e-1e26-4de1-8ed8-94523224cc40'\n }\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14456", "text": "Get information on a given folder.\n\n Args:\n folder (str): The UUID of the requested folder.\n\n Returns:\n A dictionary of the folder details if found::\n\n {\n u'created_by': u'303447',\n u'created_on': u'2017-03-21T14:06:32.293902Z',\n u'description': u'',\n u'entity_type': u'folder',\n u'modified_by': u'303447',\n u'modified_on': u'2017-03-21T14:06:32.293967Z',\n u'name': u'myfolder',\n u'parent': u'3abd8742-d069-44cf-a66b-2370df74a682',\n u'uuid': u'2516442e-1e26-4de1-8ed8-94523224cc40'\n }\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14457", "text": "Upload a file content. The file entity must already exist.\n\n If an ETag is provided the file stored on the server is verified\n against it. If it does not match, StorageException is raised.\n This means the client needs to update its knowledge of the resource\n before attempting to update again. This can be used for optimistic\n concurrency control.\n\n Args:\n file_id (str): The UUID of the file whose content is written.\n etag (str): The etag to match the contents against.\n source (str): The path of the local file whose content to be uploaded.\n content (str): A string of the content to be uploaded.\n\n Note:\n ETags should be enclosed in double quotes::\n\n my_etag = '\"71e1ed9ee52e565a56aec66bc648a32c\"'\n\n Returns:\n The ETag of the file upload::\n\n '\"71e1ed9ee52e565a56aec66bc648a32c\"'\n\n Raises:\n IOError: The source cannot be opened.\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14458", "text": "Copy file content from source file to target file.\n\n Args:\n file_id (str): The UUID of the file whose content is written.\n source_file (str): The UUID of the file whose content is copied.\n\n Returns:\n None\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14459", "text": "Download file content.\n\n Args:\n file_id (str): The UUID of the file whose content is requested\n etag (str): If the content is not changed since the provided ETag,\n the content won't be downloaded. If the content is changed, it\n will be downloaded and returned with its new ETag.\n\n Note:\n ETags should be enclosed in double quotes::\n\n my_etag = '\"71e1ed9ee52e565a56aec66bc648a32c\"'\n\n\n Returns:\n A tuple of ETag and content (etag, content) if the content was\n retrieved. If an etag was provided, and content didn't change\n returns (None, None)::\n\n ('\"71e1ed9ee52e565a56aec66bc648a32c\"', 'Hello world!')\n\n Raises:\n StorageArgumentException: Invalid arguments\n StorageForbiddenException: Server response code 403\n StorageNotFoundException: Server response code 404\n StorageException: other 400-600 error codes"}
{"_id": "q_14460", "text": "Add an Option object to the user interface."}
{"_id": "q_14461", "text": "Append a positional argument to the user interface.\n\n Optional positional arguments must be added after the required ones. \n The user interface can have at most one recurring positional argument, \n and if present, that argument must be the last one."}
{"_id": "q_14462", "text": "Read program documentation from a DocParser compatible file.\n\n docsfiles is a list of paths to potential docsfiles: parse if present.\n A string is taken as a list of one item."}
{"_id": "q_14463", "text": "Return user friendly help on positional arguments in the program."}
{"_id": "q_14464", "text": "Return user friendly help on positional arguments. \n\n indent is the number of spaces preceeding the text on each line. \n \n The indent of the documentation is dependent on the length of the \n longest label that is shorter than maxindent. A label longer than \n maxindent will be printed on its own line.\n \n width is maximum allowed page width, use self.width if 0."}
{"_id": "q_14465", "text": "Return a summary of program options, their values and origins.\n \n width is maximum allowed page width, use self.width if 0."}
{"_id": "q_14466", "text": "pymongo expects a dict"}
{"_id": "q_14467", "text": "Parse text blocks from a file."}
{"_id": "q_14468", "text": "Pop, parse and return the first self.nargs items from args.\n\n if self.nargs > 1 a list of parsed values will be returned.\n \n Raise BadNumberOfArguments or BadArgument on errors.\n \n NOTE: argv may be modified in place by this method."}
{"_id": "q_14469", "text": "Parse arguments found in settings files.\n \n Use the values in self.true for True in settings files, or those in \n self.false for False, case insensitive."}
{"_id": "q_14470", "text": "Return the separator that preceding format i, or '' for i == 0."}
{"_id": "q_14471", "text": "Sets the service name and version the request should target\n\n Args:\n service (str): The name of the service as displayed in the services.json file\n version (str): The version of the service as displayed in the services.json file\n\n Returns:\n The request builder instance in order to chain calls"}
{"_id": "q_14472", "text": "Adds parameters to the request params\n\n Args:\n params (dict): The parameters to add to the request params\n\n Returns:\n The request builder instance in order to chain calls"}
{"_id": "q_14473", "text": "Defines if the an exception should be thrown after the request is sent\n\n Args:\n exception_class (class): The class of the exception to instantiate\n should_throw (function): The predicate that should indicate if the exception\n should be thrown. This function will be called with the response as a parameter\n\n Returns:\n The request builder instance in order to chain calls"}
{"_id": "q_14474", "text": "Return a URL to redirect the user to for OAuth authentication."}
{"_id": "q_14475", "text": "Exchange the authorization code for an access token."}
{"_id": "q_14476", "text": "Wraps Lock.acquire"}
{"_id": "q_14477", "text": "Wraps Lock.release"}
{"_id": "q_14478", "text": "Handle a dict that might contain a wrapped state for a custom type."}
{"_id": "q_14479", "text": "Wrap the marshalled state in a dictionary.\n\n The returned dictionary has two keys, corresponding to the ``type_key`` and ``state_key``\n options. The former holds the type name and the latter holds the marshalled state.\n\n :param typename: registered name of the custom type\n :param state: the marshalled state of the object\n :return: an object serializable by the serializer"}
{"_id": "q_14480", "text": "Sort here works by sorting by timestamp by default"}
{"_id": "q_14481", "text": "Adds the data from a ConnectorDB export. If it is a stream export, then the folder\n is the location of the export. If it is a device export, then the folder is the export folder\n with the stream name as a subdirectory\n\n If it is a user export, you will use the path of the export folder, with the user/device/stream \n appended to the end::\n\n myuser.export(\"./exportdir\")\n DatapointArray().loadExport(\"./exportdir/username/devicename/streamname\")"}
{"_id": "q_14482", "text": "Shifts all timestamps in the datapoint array by the given number of seconds.\n It is the same as the 'tshift' pipescript transform.\n\n Warning: The shift is performed in-place! This means that it modifies the underlying array::\n\n d = DatapointArray([{\"t\":56,\"d\":1}])\n d.tshift(20)\n print(d) # [{\"t\":76,\"d\":1}]"}
{"_id": "q_14483", "text": "Gets the sum of the data portions of all datapoints within"}
{"_id": "q_14484", "text": "Start the event loop to collect data from the serial device."}
{"_id": "q_14485", "text": "Create a new user."}
{"_id": "q_14486", "text": "Parse Visual Novel search pages.\n\n :param soup: The BS4 class object\n :return: A list of dictionaries containing a name and id."}
{"_id": "q_14487", "text": "Parse a page of producer or staff results\n\n :param soup: The BS4 class object\n :return: A list of dictionaries containing a name and nationality."}
{"_id": "q_14488", "text": "Parse a page of tag or trait results. Same format.\n\n :param soup: BS4 Class Object\n :return: A list of tags, Nothing else really useful there"}
{"_id": "q_14489", "text": "Parse a page of user results\n\n :param soup: Bs4 Class object\n :return: A list of dictionaries containing a name and join date"}
{"_id": "q_14490", "text": "Applies a function to a set of files and an output directory.\n\n :param str output_dir: Output directory\n :param list[str] file_paths: Absolute file paths to move"}
{"_id": "q_14491", "text": "Enable HTTP access to a dataset.\n\n This only works on datasets in some systems. For example, datasets stored\n in AWS S3 object storage and Microsoft Azure Storage can be published as\n datasets accessible over HTTP. A published dataset is world readable."}
{"_id": "q_14492", "text": "Makes a Spark Submit style job submission line.\n\n :param masterIP: The Spark leader IP address.\n :param default_parameters: Application specific Spark configuration parameters.\n :param memory: The memory to allocate to each Spark driver and executor.\n :param arguments: Arguments to pass to the submitted job.\n :param override_parameters: Parameters passed by the user, that override our defaults.\n \n :type masterIP: MasterAddress\n :type default_parameters: list of string\n :type arguments: list of string\n :type memory: int or None\n :type override_parameters: list of string or None"}
{"_id": "q_14493", "text": "Augment a list of \"docker run\" arguments with those needed to map the notional Spark master address to the\n real one, if they are different."}
{"_id": "q_14494", "text": "Refresh reloads data from the server. It raises an error if it fails to get the object's metadata"}
{"_id": "q_14495", "text": "Creates the device. Attempts to create private devices by default,\n but if public is set to true, creates public devices.\n\n You can also set other default properties by passing in the relevant information.\n For example, setting a device with the given nickname and description::\n\n dev.create(nickname=\"mydevice\", description=\"This is an example\")\n\n Furthermore, ConnectorDB supports creation of a device's streams immediately,\n which can considerably speed up device setup::\n\n dev.create(streams={\n \"stream1\": {\"schema\": '{\\\"type\\\":\\\"number\\\"}'}\n })\n\n Note that the schema must be encoded as a string when creating in this format."}
{"_id": "q_14496", "text": "Returns the list of streams that belong to the device"}
{"_id": "q_14497", "text": "Exports the device to the given directory. The directory can't exist. \n You can later import this device by running import_device on a user."}
{"_id": "q_14498", "text": "invalidates the device's current api key, and generates a new one. Resets current auth to use the new apikey,\n since the change would have future queries fail if they use the old api key."}
{"_id": "q_14499", "text": "Returns the list of users in the database"}
{"_id": "q_14500", "text": "Use BWA to create reference index files\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str ref_id: FileStoreID for the reference genome\n :return: FileStoreIDs for BWA index files\n :rtype: tuple(str, str, str, str, str)"}
{"_id": "q_14501", "text": "Returns the ConnectorDB object that the logger uses. Raises an error if Logger isn't able to connect"}
{"_id": "q_14502", "text": "Adds the given stream to the logger. Requires an active connection to the ConnectorDB database.\n\n If a schema is not specified, loads the stream from the database. If a schema is specified, and the stream\n does not exist, creates the stream. You can also add stream properties such as description or nickname to be added\n during creation."}
{"_id": "q_14503", "text": "Insert the datapoint into the logger for the given stream name. The logger caches the datapoint\n and eventually synchronizes it with ConnectorDB"}
{"_id": "q_14504", "text": "Attempt to sync with the ConnectorDB server"}
{"_id": "q_14505", "text": "Start the logger background synchronization service. This allows you to not need to\n worry about syncing with ConnectorDB - you just insert into the Logger, and the Logger\n will by synced every syncperiod."}
{"_id": "q_14506", "text": "Stops the background synchronization thread"}
{"_id": "q_14507", "text": "Job version of `download_url`"}
{"_id": "q_14508", "text": "Output the parent-child relations to the given file"}
{"_id": "q_14509", "text": "Returns a string that represents the container ID of the current Docker container. If this\n function is invoked outside of a container a NotInsideContainerError is raised.\n\n >>> import subprocess\n >>> import sys\n >>> a = subprocess.check_output(['docker', 'run', '-v',\n ... sys.modules[__name__].__file__ + ':/foo.py',\n ... 'python:2.7.12','python', '-c',\n ... 'from foo import current_docker_container_id;\\\\\n ... print current_docker_container_id()'])\n int call will fail if a is not a valid hex string\n >>> int(a, 16) > 0\n True"}
{"_id": "q_14510", "text": "Performs alignment of fastqs to bam via STAR\n\n --limitBAMsortRAM step added to deal with memory explosion when sorting certain samples.\n The value was chosen to complement the recommended amount of memory to have when running STAR (60G)\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str r1_id: FileStoreID of fastq (pair 1)\n :param str r2_id: FileStoreID of fastq (pair 2 if applicable, else pass None)\n :param str star_index_url: STAR index tarball\n :param bool wiggle: If True, will output a wiggle file and return it\n :return: FileStoreID from RSEM\n :rtype: str"}
{"_id": "q_14511", "text": "Creates a stream given an optional JSON schema encoded as a python dict. You can also add other properties\n of the stream, such as the icon, datatype or description. Create accepts both a string schema and\n a dict-encoded schema."}
{"_id": "q_14512", "text": "Exports the stream to the given directory. The directory can't exist. \n You can later import this device by running import_stream on a device."}
{"_id": "q_14513", "text": "returns the device which owns the given stream"}
{"_id": "q_14514", "text": "Iterates over the labels of terms in the ontology\n\n :param str ontology: The name of the ontology\n :param str ols_base: An optional, custom OLS base url\n :rtype: iter[str]"}
{"_id": "q_14515", "text": "Prepares and runs the pipeline. Note this method must be invoked both from inside a\n Docker container and while the docker daemon is reachable.\n\n :param str name: The name of the command to start the workflow.\n :param str desc: The description of the workflow."}
{"_id": "q_14516", "text": "Returns the config file contents as a string. The config file is generated and then deleted."}
{"_id": "q_14517", "text": "Returns the path of the mount point of the current container. If this method is invoked\n outside of a Docker container a NotInsideContainerError is raised. Likewise if the docker\n daemon is unreachable from inside the container a UserError is raised. This method is\n idempotent."}
{"_id": "q_14518", "text": "Add an argument to the given arg_parser with the given name.\n\n :param argparse.ArgumentParser arg_parser:\n :param str name: The name of the option."}
{"_id": "q_14519", "text": "Creates and returns an ArgumentParser object prepopulated with 'no clean', 'cores' and\n 'restart' arguments."}
{"_id": "q_14520", "text": "Creates and returns a list that represents a command for running the pipeline."}
{"_id": "q_14521", "text": "setauth sets the authentication header for use in the session.\n It is for use when apikey is updated or something of the sort, such that\n there is a seamless experience."}
{"_id": "q_14522", "text": "Attempts to ping the server using current credentials, and responds with the path of the currently\n authenticated device"}
{"_id": "q_14523", "text": "Send a POST CRUD API request to the given path using the given data which will be converted\n to json"}
{"_id": "q_14524", "text": "Send a delete request to the given path of the CRUD API. This deletes the object. Or at least tries to."}
{"_id": "q_14525", "text": "Subscribe to the given stream with the callback"}
{"_id": "q_14526", "text": "Creates the given user - using the passed in email and password.\n\n You can also set other default properties by passing in the relevant information::\n\n usr.create(\"my@email\",\"mypass\",description=\"I like trains.\")\n\n Furthermore, ConnectorDB permits immediate initialization of an entire user tree,\n so that you can create all relevant devices and streams in one go::\n\n usr.create(\"my@email\",\"mypass\",devices={\n \"device1\": {\n \"nickname\": \"My train\",\n \"streams\": {\n \"stream1\": {\n \"schema\": \"{\\\"type\\\":\\\"string\\\"}\",\n \"datatype\": \"train.choochoo\"\n }\n },\n }\n })\n\n The user and meta devices are created by default. If you want to add streams to the user device,\n use the \"streams\" option in place of devices in create."}
{"_id": "q_14527", "text": "Returns the list of devices that belong to the user"}
{"_id": "q_14528", "text": "Use SAMtools to create reference index file\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str ref_id: FileStoreID for the reference genome\n :return: FileStoreID for reference index\n :rtype: str"}
{"_id": "q_14529", "text": "Marks reads as PCR duplicates using Sambamba\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str bam: FileStoreID for BAM file\n :return: FileStoreID for sorted BAM file\n :rtype: str"}
{"_id": "q_14530", "text": "Marks reads as PCR duplicates using SAMBLASTER\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str sam: FileStoreID for SAM file\n :return: FileStoreID for deduped SAM file\n :rtype: str"}
{"_id": "q_14531", "text": "Sorts BAM file using Picard SortSam\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str bam: FileStoreID for BAM file\n :param boolean sort_by_name: If true, sorts by read name instead of coordinate.\n :return: FileStoreID for sorted BAM file\n :rtype: str"}
{"_id": "q_14532", "text": "Creates recalibration table for Base Quality Score Recalibration\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str bam: FileStoreID for BAM file\n :param str bai: FileStoreID for BAM index file\n :param str ref: FileStoreID for reference genome fasta file\n :param str ref_dict: FileStoreID for reference genome sequence dictionary file\n :param str fai: FileStoreID for reference genome fasta index file\n :param str dbsnp: FileStoreID for dbSNP VCF file\n :param str mills: FileStoreID for Mills VCF file\n :param bool unsafe: If True, runs GATK in UNSAFE mode: \"-U ALLOW_SEQ_DICT_INCOMPATIBILITY\"\n :return: FileStoreID for the recalibration table file\n :rtype: str"}
{"_id": "q_14533", "text": "RNA quantification via Kallisto\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str r1_id: FileStoreID of fastq (pair 1)\n :param str r2_id: FileStoreID of fastq (pair 2 if applicable, otherwise pass None for single-end)\n :param str kallisto_index_url: FileStoreID for Kallisto index file\n :return: FileStoreID from Kallisto output\n :rtype: str"}
{"_id": "q_14534", "text": "RNA quantification with RSEM\n\n :param JobFunctionWrappingJob job: Passed automatically by Toil\n :param str bam_id: FileStoreID of transcriptome bam for quantification\n :param str rsem_ref_url: URL of RSEM reference (tarball)\n :param bool paired: If True, uses parameters for paired end data\n :return: FileStoreIDs for RSEM's gene and isoform output\n :rtype: str"}
{"_id": "q_14535", "text": "Update the descriptive metadata interactively.\n\n Uses values entered by the user. Note that the function keeps recursing\n whenever a value is another ``CommentedMap`` or a ``list``. The\n function works as passing dictionaries and lists into a function edits\n the values in place."}
{"_id": "q_14536", "text": "Create a proto dataset."}
{"_id": "q_14537", "text": "Default editor updating of readme content."}
{"_id": "q_14538", "text": "Show the descriptive metadata in the readme."}
{"_id": "q_14539", "text": "Add a file to the proto dataset."}
{"_id": "q_14540", "text": "Convert a proto dataset into a dataset.\n\n This step is carried out after all files have been added to the dataset.\n Freezing a dataset finalizes it with a stamp marking it as frozen."}
{"_id": "q_14541", "text": "Copy a dataset to a different location."}
{"_id": "q_14542", "text": "Prepare test set for C++ SAR prediction code.\n Find all items the test users have seen in the past.\n\n Arguments:\n test (pySpark.DataFrame): input dataframe which contains test users."}
{"_id": "q_14543", "text": "Send the given command thru the websocket"}
{"_id": "q_14544", "text": "Attempt to connect to the websocket - and returns either True or False depending on if\n the connection was successful or not"}
{"_id": "q_14545", "text": "This is called when a connection is lost - it attempts to reconnect to the server"}
{"_id": "q_14546", "text": "Send subscribe command for all existing subscriptions. This allows to resume a connection\n that was closed"}
{"_id": "q_14547", "text": "Called when the websocket is opened"}
{"_id": "q_14548", "text": "Called when the websocket is closed"}
{"_id": "q_14549", "text": "This function is called whenever there is a message received from the server"}
{"_id": "q_14550", "text": "Each time the server sends a ping message, we record the timestamp. If we haven't received a ping\n within the given interval, then we assume that the connection was lost, close the websocket and\n attempt to reconnect"}
{"_id": "q_14551", "text": "Filters VCF file using GATK VariantFiltration. Fixes extra pair of quotation marks in VCF header that\n may interfere with other VCF tools.\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str vcf_id: FileStoreID for input VCF file\n :param str filter_name: Name of filter for VCF header\n :param str filter_expression: JEXL filter expression\n :param str ref_fasta: FileStoreID for reference genome fasta\n :param str ref_fai: FileStoreID for reference genome index file\n :param str ref_dict: FileStoreID for reference genome sequence dictionary file\n :return: FileStoreID for filtered VCF file\n :rtype: str"}
{"_id": "q_14552", "text": "Applies variant quality score recalibration to VCF file using GATK ApplyRecalibration\n\n :param JobFunctionWrappingJob job: passed automatically by Toil\n :param str mode: Determines variant recalibration mode (SNP or INDEL)\n :param str vcf: FileStoreID for input VCF file\n :param str recal_table: FileStoreID for recalibration table file\n :param str tranches: FileStoreID for tranches file\n :param str ref_fasta: FileStoreID for reference genome fasta\n :param str ref_fai: FileStoreID for reference genome index file\n :param str ref_dict: FileStoreID for reference genome sequence dictionary file\n :param float ts_filter_level: Sensitivity expressed as a percentage, default is 99.0\n :param bool unsafe_mode: If True, runs gatk UNSAFE mode: \"-U ALLOW_SEQ_DICT_INCOMPATIBILITY\"\n :return: FileStoreID for recalibrated VCF file\n :rtype: str"}
{"_id": "q_14553", "text": "Merges VCF files using GATK CombineVariants\n\n :param JobFunctionWrappingJob job: Toil Job instance\n :param dict vcfs: Dictionary of VCF FileStoreIDs {sample identifier: FileStoreID}\n :param str ref_fasta: FileStoreID for reference genome fasta\n :param str ref_fai: FileStoreID for reference genome index file\n :param str ref_dict: FileStoreID for reference genome sequence dictionary file\n :param str merge_option: Value for --genotypemergeoption flag (Default: 'UNIQUIFY')\n 'UNIQUIFY': Multiple variants at a single site are merged into a\n single variant record.\n 'UNSORTED': Used to merge VCFs from the same sample\n :return: FileStoreID for merged VCF file\n :rtype: str"}
{"_id": "q_14554", "text": "Gets the configuration for this project from the default JSON file, or writes one if it doesn't exist\n\n :rtype: dict"}
{"_id": "q_14555", "text": "Gets the data for a given term\n\n :param str ontology: The name of the ontology\n :param str iri: The IRI of a term\n :rtype: dict"}
{"_id": "q_14556", "text": "Searches the OLS with the given term\n\n :param str name:\n :param list[str] query_fields: Fields to query\n :return: dict"}
{"_id": "q_14557", "text": "Suggest terms from an optional list of ontologies\n\n :param str name:\n :param list[str] ontology:\n :rtype: dict\n\n .. seealso:: https://www.ebi.ac.uk/ols/docs/api#_suggest_term"}
{"_id": "q_14558", "text": "Iterates over the descendants of a given term\n\n :param str ontology: The name of the ontology\n :param str iri: The IRI of a term\n :param int size: The size of each page. Defaults to 500, which is the maximum allowed by the EBI.\n :param int sleep: The amount of time to sleep between pages. Defaults to 0 seconds.\n :rtype: iter[dict]"}
{"_id": "q_14559", "text": "Iterates over parent-child relations\n\n :param str ontology: The name of the ontology\n :param int size: The size of each page. Defaults to 500, which is the maximum allowed by the EBI.\n :param int sleep: The amount of time to sleep between pages. Defaults to 0 seconds.\n :rtype: iter[tuple[str,str]]"}
{"_id": "q_14560", "text": "Adds the given stream to the query construction. The function supports both stream\r\n names and Stream objects."}
{"_id": "q_14561", "text": "This needs some tidying up. To avoid circular imports we import\n everything here but it makes this method a bit more gross."}
{"_id": "q_14562", "text": "Start spark and hdfs master containers\n\n :param job: The underlying job."}
{"_id": "q_14563", "text": "Stop spark and hdfs worker containers\n\n :param job: The underlying job."}
{"_id": "q_14564", "text": "Compress anything to bytes or string.\n\n :params obj: \n :params level: \n :params return_type: if bytes, then return bytes; if str, then return\n base64.b64encode bytes in utf-8 string."}
{"_id": "q_14565", "text": "attempt to deduce if a pre 100 year was lost\n due to padded zeros being taken off"}
{"_id": "q_14566", "text": "Change unicode output into bytestrings in Python 2\n\n tzname() API changed in Python 3. It used to return bytes, but was changed\n to unicode strings"}
{"_id": "q_14567", "text": "Strip comments from line string."}
{"_id": "q_14568", "text": "Tokenizer. Generates tokens stream from text"}
{"_id": "q_14569", "text": "Convert a registry key's values to a dictionary."}
{"_id": "q_14570", "text": "Get the zonefile metadata\n\n See `zonefile_metadata`_\n\n :returns:\n A dictionary with the database metadata\n\n .. deprecated:: 2.6\n See deprecation warning in :func:`zoneinfo.gettz`. To get metadata,\n query the attribute ``zoneinfo.ZoneInfoFile.metadata``."}
{"_id": "q_14571", "text": "Get the configuration for the given JID based on XMPP_HTTP_UPLOAD_ACCESS.\n\n If the JID does not match any rule, ``False`` is returned."}
{"_id": "q_14572", "text": "Given a datetime and a time zone, determine whether or not a given datetime\n would fall in a gap.\n\n :param dt:\n A :class:`datetime.datetime` (whose time zone will be ignored if ``tz``\n is provided.)\n\n :param tz:\n A :class:`datetime.tzinfo` with support for the ``fold`` attribute. If\n ``None`` or not provided, the datetime's own time zone will be used.\n\n :return:\n Returns a boolean value whether or not the \"wall time\" exists in ``tz``."}
{"_id": "q_14573", "text": "Look up a zone ID for a zone string.\n\n Args: conn: boto.route53.Route53Connection\n zone: string eg. foursquare.com\n Returns: zone ID eg. ZE2DYFZDWGSL4.\n Raises: ZoneNotFoundError if zone not found."}
{"_id": "q_14574", "text": "Fetch all pieces of a Route 53 config from Amazon.\n\n Args: zone: string, hosted zone id.\n conn: boto.route53.Route53Connection\n Returns: list of ElementTrees, one for each piece of config."}
{"_id": "q_14575", "text": "Merge a set of fetched Route 53 config Etrees into a canonical form.\n\n Args: cfg_chunks: [ lxml.etree.ETree ]\n Returns: lxml.etree.Element"}
{"_id": "q_14576", "text": "Create a new HMAC hash.\n\n :param secret: The secret used when hashing data.\n :type secret: bytes\n :param data: The data to hash.\n :type data: bytes\n :param alg: The algorithm to use when hashing `data`.\n :type alg: str\n :return: New HMAC hash.\n :rtype: bytes"}
{"_id": "q_14577", "text": "Decodes the given token's header and payload and validates the signature.\n\n :param secret: The secret used to decode the token. Must match the\n secret used when creating the token.\n :type secret: Union[str, bytes]\n :param token: The token to decode.\n :type token: Union[str, bytes]\n :param alg: The algorithm used to decode the token. Must match the\n algorithm used when creating the token.\n :type alg: str\n :return: The decoded header and payload.\n :rtype: Tuple[dict, dict]"}
{"_id": "q_14578", "text": "Compares the given signatures.\n\n :param expected: The expected signature.\n :type expected: Union[str, bytes]\n :param actual: The actual signature.\n :type actual: Union[str, bytes]\n :return: Do the signatures match?\n :rtype: bool"}
{"_id": "q_14579", "text": "Is the token valid? This method only checks the timestamps within the\n token and compares them against the current time if none is provided.\n\n :param time: The timestamp to validate against\n :type time: Union[int, None]\n :return: The validity of the token.\n :rtype: bool"}
{"_id": "q_14580", "text": "Check for registered claims in the payload and move them to the\n registered_claims property, overwriting any extant claims."}
{"_id": "q_14581", "text": "Create a token based on the data held in the class.\n\n :return: A new token\n :rtype: str"}
{"_id": "q_14582", "text": "Decodes the given token into an instance of `Jwt`.\n\n :param secret: The secret used to decode the token. Must match the\n secret used when creating the token.\n :type secret: Union[str, bytes]\n :param token: The token to decode.\n :type token: Union[str, bytes]\n :param alg: The algorithm used to decode the token. Must match the\n algorithm used when creating the token.\n :type alg: str\n :return: The decoded token.\n :rtype: `Jwt`"}
{"_id": "q_14583", "text": "Orders population members from lowest fitness to highest fitness\n\n Args:\n Members (list): list of PyGenetics Member objects\n\n Returns:\n lsit: ordered lsit of Members, from highest fitness to lowest fitness"}
{"_id": "q_14584", "text": "Download a file."}
{"_id": "q_14585", "text": "Population fitness == average member fitness score"}
{"_id": "q_14586", "text": "Returns average cost function return value for all members"}
{"_id": "q_14587", "text": "Returns median cost function return value for all members"}
{"_id": "q_14588", "text": "Returns Member objects of population"}
{"_id": "q_14589", "text": "Generates the next population from a previously evaluated generation\n\n Args:\n mut_rate (float): mutation rate for new members (0.0 - 1.0)\n max_mut_amt (float): how much the member is allowed to mutate\n (0.0 - 1.0, proportion change of mutated parameter)\n log_base (int): the higher this number, the more likely the first\n Members (chosen with supplied selection function) are chosen\n as parents for the next generation"}
{"_id": "q_14590", "text": "Test a file is a valid json file.\n\n - *.json: uncompressed, utf-8 encode json file\n - *.js: uncompressed, utf-8 encode json file\n - *.gz: compressed, utf-8 encode json file"}
{"_id": "q_14591", "text": "``set`` dumper."}
{"_id": "q_14592", "text": "``collections.deque`` dumper."}
{"_id": "q_14593", "text": "``numpy.ndarray`` dumper."}
{"_id": "q_14594", "text": "Returns the last recurrence before the given datetime instance. The\n inc keyword defines what happens if dt is an occurrence. With\n inc=True, if dt itself is an occurrence, it will be returned."}
{"_id": "q_14595", "text": "Return a config dictionary with normalized keys regardless of\n whether the keys were specified in environment variables or in config\n files"}
{"_id": "q_14596", "text": "Returns a generator with all environmental vars with prefix PIP_"}
{"_id": "q_14597", "text": "Return True if the callable throws the specified exception\n\n\t>>> throws_exception(lambda: int('3'))\n\tFalse\n\t>>> throws_exception(lambda: int('a'))\n\tTrue\n\t>>> throws_exception(lambda: int('a'), KeyError)\n\tFalse"}
{"_id": "q_14598", "text": "Convert the result back into the input type."}
{"_id": "q_14599", "text": "Convert all tags in an XHTML tree to HTML by removing their\n XHTML namespace."}
{"_id": "q_14600", "text": "Return an HTML string representation of the document.\n\n Note: if include_meta_content_type is true this will create a\n ``<meta http-equiv=\"Content-Type\" ...>`` tag in the head;\n regardless of the value of include_meta_content_type any existing\n ``<meta http-equiv=\"Content-Type\" ...>`` tag will be removed\n\n The ``encoding`` argument controls the output encoding (defaults to\n ASCII, with &#...; character references for any characters outside\n of ASCII). Note that you can pass the name ``'unicode'`` as\n ``encoding`` argument to serialise to a Unicode string.\n\n The ``method`` argument defines the output method. It defaults to\n 'html', but can also be 'xml' for xhtml output, or 'text' to\n serialise to plain text without markup.\n\n To leave out the tail text of the top-level element that is being\n serialised, pass ``with_tail=False``.\n\n The ``doctype`` option allows passing in a plain string that will\n be serialised before the XML tree. Note that passing in non\n well-formed content here will make the XML output non well-formed.\n Also, an existing doctype in the document tree will not be removed\n when serialising an ElementTree instance.\n\n Example::\n\n >>> from lxml import html\n >>> root = html.fragment_fromstring('<p>Hello<br>world!</p>')\n\n >>> html.tostring(root)\n b'<p>Hello<br>world!</p>'\n >>> html.tostring(root, method='html')\n b'<p>Hello<br>world!</p>'\n\n >>> html.tostring(root, method='xml')\n b'<p>Hello<br/>world!</p>'\n\n >>> html.tostring(root, method='text')\n b'Helloworld!'\n\n >>> html.tostring(root, method='text', encoding='unicode')\n u'Helloworld!'\n\n >>> root = html.fragment_fromstring('<div><p>Hello<br>world!</p>TAIL</div>')\n >>> html.tostring(root[0], method='text', encoding='unicode')\n u'Helloworld!TAIL'\n\n >>> html.tostring(root[0], method='text', encoding='unicode', with_tail=False)\n u'Helloworld!'\n\n >>> doc = html.document_fromstring('<p>Hello<br>world!</p>')\n >>> html.tostring(doc, method='html', encoding='unicode')\n u'<html><body><p>Hello<br>world!</p></body></html>'\n\n >>> print(html.tostring(doc, method='html', encoding='unicode',\n ... doctype='<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\"'\n ... ' \"http://www.w3.org/TR/html4/strict.dtd\">'))\n <!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\" \"http://www.w3.org/TR/html4/strict.dtd\">\n <html><body><p>Hello<br>world!</p></body></html>"}
{"_id": "q_14601", "text": "Run the CSS expression on this element and its children,\n returning a list of the results.\n\n Equivalent to lxml.cssselect.CSSSelect(expr, translator='html')(self)\n -- note that pre-compiling the expression can provide a substantial\n speedup."}
{"_id": "q_14602", "text": "Returns True if only a single class is being run or some tests within a single class"}
{"_id": "q_14603", "text": "Returns True if only a module is being run"}
{"_id": "q_14604", "text": "Validate request id."}
{"_id": "q_14605", "text": "Ensure that the given path is decoded,\n returning None when no expected encoding works"}
{"_id": "q_14606", "text": "Attempts to detect a BOM at the start of the stream. If\n an encoding can be determined from the BOM return the name of the\n encoding otherwise return None"}
{"_id": "q_14607", "text": "Selects the new remote addr from the given list of ips in\n X-Forwarded-For. By default it picks the one that the `num_proxies`\n proxy server provides. Before 0.9 it would always pick the first.\n\n .. versionadded:: 0.8"}
{"_id": "q_14608", "text": "Converts amount value from several types into Decimal."}
{"_id": "q_14609", "text": "Parse a file into an ElementTree using the BeautifulSoup parser.\n\n You can pass a different BeautifulSoup parser through the\n `beautifulsoup` keyword, and a different Element factory function\n through the `makeelement` keyword. By default, the standard\n ``BeautifulSoup`` class and the default factory of `lxml.html` are\n used."}
{"_id": "q_14610", "text": "Convert a BeautifulSoup tree to a list of Element trees.\n\n Returns a list instead of a single root Element to support\n HTML-like soup with more than one root element.\n\n You can pass a different Element factory through the `makeelement`\n keyword."}
{"_id": "q_14611", "text": "String representation of the exception."}
{"_id": "q_14612", "text": "Render the traceback for the interactive console."}
{"_id": "q_14613", "text": "Like the plaintext attribute but returns a generator"}
{"_id": "q_14614", "text": "Helper function that returns lines with extra information."}
{"_id": "q_14615", "text": "Returns the locations found via self.index_urls\n\n Checks the url_name on the main (first in the list) index and\n use this url_name to produce all locations"}
{"_id": "q_14616", "text": "Find all available versions for project_name\n\n This checks index_urls, find_links and dependency_links\n All versions found are returned\n\n See _link_package_versions for details on which files are accepted"}
{"_id": "q_14617", "text": "Returns elements of links in order, non-egg links first, egg links\n second, while eliminating duplicates"}
{"_id": "q_14618", "text": "Yields all links in the page"}
{"_id": "q_14619", "text": "Returns True if this link can be verified after download, False if it\n cannot, and None if we cannot determine."}
{"_id": "q_14620", "text": "Return filenames for package's data files in 'src_dir'"}
{"_id": "q_14621", "text": "Filter filenames for package's data files in 'src_dir'"}
{"_id": "q_14622", "text": "Parse a requirements file and yield InstallRequirement instances.\n\n :param filename: Path or url of requirements file.\n :param finder: Instance of pip.index.PackageFinder.\n :param comes_from: Origin description of requirements.\n :param options: Global options.\n :param session: Instance of pip.download.PipSession.\n :param wheel_cache: Instance of pip.wheel.WheelCache"}
{"_id": "q_14623", "text": "Joins a line ending in '\\' with the previous line."}
{"_id": "q_14624", "text": "Ensure statement only contains allowed nodes."}
{"_id": "q_14625", "text": "Flatten one level of attribute access."}
{"_id": "q_14626", "text": "Binds the app context to the current context."}
{"_id": "q_14627", "text": "Pops the app context."}
{"_id": "q_14628", "text": "Binds the request context to the current context."}
{"_id": "q_14629", "text": "Registers a function as URL value preprocessor for this\n blueprint. It's called before the view functions are called and\n can modify the url values provided."}
{"_id": "q_14630", "text": "Registers an error handler that becomes active for this blueprint\n only. Please be aware that routing does not happen local to a\n blueprint so an error handler for 404 usually is not handled by\n a blueprint unless it is caused inside a view function. Another\n special case is the 500 internal server error which is always looked\n up from the application.\n\n Otherwise works as the :meth:`~flask.Flask.errorhandler` decorator\n of the :class:`~flask.Flask` object."}
{"_id": "q_14631", "text": "Request contexts disappear when the response is started on the server.\n This is done for efficiency reasons and to make it less likely to encounter\n memory leaks with badly written WSGI middlewares. The downside is that if\n you are using streamed responses, the generator cannot access request bound\n information any more.\n\n This function however can help you keep the context around for longer::\n\n from flask import stream_with_context, request, Response\n\n @app.route('/stream')\n def streamed_response():\n @stream_with_context\n def generate():\n yield 'Hello '\n yield request.args['name']\n yield '!'\n return Response(generate())\n\n Alternatively it can also be used around a specific generator::\n\n from flask import stream_with_context, request, Response\n\n @app.route('/stream')\n def streamed_response():\n def generate():\n yield 'Hello '\n yield request.args['name']\n yield '!'\n return Response(stream_with_context(generate()))\n\n .. versionadded:: 0.9"}
{"_id": "q_14632", "text": "Sometimes it is necessary to set additional headers in a view. Because\n views do not have to return response objects but can return a value that\n is converted into a response object by Flask itself, it becomes tricky to\n add headers to it. This function can be called instead of using a return\n and you will get a response object which you can use to attach headers.\n\n If view looked like this and you want to add a new header::\n\n def index():\n return render_template('index.html', foo=42)\n\n You can now do something like this::\n\n def index():\n response = make_response(render_template('index.html', foo=42))\n response.headers['X-Parachutes'] = 'parachutes are cool'\n return response\n\n This function accepts the very same arguments you can return from a\n view function. This for example creates a response with a 404 error\n code::\n\n response = make_response(render_template('not_found.html'), 404)\n\n The other use case of this function is to force the return value of a\n view function into a response which is helpful with view\n decorators::\n\n response = make_response(view_function())\n response.headers['X-Parachutes'] = 'parachutes are cool'\n\n Internally this function does the following things:\n\n - if no arguments are passed, it creates a new response argument\n - if one argument is passed, :meth:`flask.Flask.make_response`\n is invoked with it.\n - if more than one argument is passed, the arguments are passed\n to the :meth:`flask.Flask.make_response` function as tuple.\n\n .. versionadded:: 0.6"}
{"_id": "q_14633", "text": "Helpful helper method that returns the cookie domain that should\n be used for the session cookie if session cookies are used."}
{"_id": "q_14634", "text": "Return a directory to store cached wheels in for link.\n\n Because there are M wheels for any one sdist, we provide a directory\n to cache them in, and then consult that directory when looking up\n cache hits.\n\n We only insert things into the cache if they have plausible version\n numbers, so that we don't contaminate the cache with things that were not\n unique. E.g. ./package might have dozens of installs done for it and build\n a version of 0.0...and if we built and cached a wheel, we'd end up using\n the same wheel even if the source has been edited.\n\n :param cache_dir: The cache_dir being used by pip.\n :param link: The link of the sdist for which this will cache wheels."}
{"_id": "q_14635", "text": "Return True if the extracted wheel in wheeldir should go into purelib."}
{"_id": "q_14636", "text": "Raises errors or warns if called with an incompatible Wheel-Version.\n\n Pip should refuse to install a Wheel-Version that's a major series\n ahead of what it's compatible with (e.g. 2.0 > 1.1); and warn when\n installing a version only minor version ahead (e.g. 1.2 > 1.1).\n\n version: a 2-tuple representing a Wheel-Version (Major, Minor)\n name: name of wheel or package to raise exception about\n\n :raises UnsupportedWheel: when an incompatible Wheel-Version is given"}
{"_id": "q_14637", "text": "Build one wheel.\n\n :return: The filename of the built wheel, or None if the build failed."}
{"_id": "q_14638", "text": "Yield names and strings used by `code` and its nested code objects"}
{"_id": "q_14639", "text": "Write the pip delete marker file into this directory."}
{"_id": "q_14640", "text": "Return True if we're running inside a virtualenv, False otherwise."}
{"_id": "q_14641", "text": "Returns the effective username of the current process."}
{"_id": "q_14642", "text": "Return a distutils install scheme"}
{"_id": "q_14643", "text": "Parse the cache control headers returning a dictionary with values\n for the different directives."}
{"_id": "q_14644", "text": "Return a cached response if it exists in the cache, otherwise\n return False."}
{"_id": "q_14645", "text": "Algorithm for caching requests.\n\n This assumes a requests Response object."}
{"_id": "q_14646", "text": "Update zipimporter cache data for a given normalized path.\n\n Any sub-path entries are processed as well, i.e. those corresponding to zip\n archives embedded in other zip archives.\n\n Given updater is a callable taking a cache entry key and the original entry\n (after already removing the entry from the cache), and expected to update\n the entry and possibly return a new one to be inserted in its place.\n Returning None indicates that the entry should not be replaced with a new\n one. If no updater is given, the cache entries are simply removed without\n any additional processing, the same as if the updater simply returned None."}
{"_id": "q_14647", "text": "There are a couple of template scripts in the package. This\n function loads one of them and prepares it for use."}
{"_id": "q_14648", "text": "Write changed .pth file back to disk"}
{"_id": "q_14649", "text": "Add filters to a filterer from a list of names."}
{"_id": "q_14650", "text": "Configure a handler from a dictionary."}
{"_id": "q_14651", "text": "Python 3 implementation of execfile."}
{"_id": "q_14652", "text": "Monkey-patch tempfile.tempdir with replacement, ensuring it exists"}
{"_id": "q_14653", "text": "Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'.\n That's required because although they use SSH they sometimes don't\n work with a ssh:// scheme (e.g. Github). But we need a scheme for\n parsing. Hence we remove it again afterwards and return it as a stub."}
{"_id": "q_14654", "text": "Get an item or attribute of an object but prefer the item."}
{"_id": "q_14655", "text": "Internal hook that can be overridden to hook a different generate\n method in.\n\n .. versionadded:: 2.5"}
{"_id": "q_14656", "text": "Finds all the templates the loader can find, compiles them\n and stores them in `target`. If `zip` is `None`, instead of in a\n zipfile, the templates will be stored in a directory.\n By default a deflate zip algorithm is used, to switch to\n the stored algorithm, `zip` can be set to ``'stored'``.\n\n `extensions` and `filter_func` are passed to :meth:`list_templates`.\n Each template returned will be compiled to the target folder or\n zipfile.\n\n By default template compilation errors are ignored. In case a\n log function is provided, errors are logged. If you want template\n syntax errors to abort the compilation you can set `ignore_errors`\n to `False` and you will get an exception on syntax errors.\n\n If `py_compile` is set to `True` .pyc files will be written to the\n target instead of standard .py files. This flag does not do anything\n on pypy and Python 3 where pyc files are not picked up by itself and\n don't give much benefit.\n\n .. versionadded:: 2.4"}
{"_id": "q_14657", "text": "Yield distributions accessible on a sys.path directory"}
{"_id": "q_14658", "text": "Declare that package 'packageName' is a namespace package"}
{"_id": "q_14659", "text": "Ensure that the parent directory of `path` exists"}
{"_id": "q_14660", "text": "Yield entry point objects from `group` matching `name`\n\n If `name` is None, yields all entry points in `group` from all\n distributions in the working set, otherwise only ones matching\n both `group` and `name` are yielded (in distribution order)."}
{"_id": "q_14661", "text": "Is distribution `dist` acceptable for this environment?\n\n The distribution must match the platform and python version\n requirements specified when this environment was created, or False\n is returned."}
{"_id": "q_14662", "text": "Evaluate a PEP 426 environment marker on CPython 2.4+.\n Return a boolean indicating the marker result in this environment.\n Raise SyntaxError if marker is invalid.\n\n This implementation uses the 'parser' module, which is not implemented\n on\n Jython and has been superseded by the 'ast' module in Python 2.6 and\n later."}
{"_id": "q_14663", "text": "Calls the standard formatter, but will indent all of the log messages\n by our current indentation level."}
{"_id": "q_14664", "text": "Return formatted currency value.\n\n >>> format_currency(1099.98, 'USD', locale='en_US')\n u'$1,099.98'\n >>> format_currency(1099.98, 'USD', locale='es_CO')\n u'US$\\\\xa01.099,98'\n >>> format_currency(1099.98, 'EUR', locale='de_DE')\n u'1.099,98\\\\xa0\\\\u20ac'\n\n The format can also be specified explicitly. The currency is\n placed with the '¤' sign. As the sign gets repeated the format\n expands (¤ being the symbol, ¤¤ is the currency abbreviation and\n ¤¤¤ is the full name of the currency):\n\n >>> format_currency(1099.98, 'EUR', u'\xa4\xa4 #,##0.00', locale='en_US')\n u'EUR 1,099.98'\n >>> format_currency(1099.98, 'EUR', u'#,##0.00 \xa4\xa4\xa4',\n ... locale='en_US')\n u'1,099.98 euros'\n\n Currencies usually have a specific number of decimal digits. This function\n favours that information over the given format:\n\n >>> format_currency(1099.98, 'JPY', locale='en_US')\n u'\\\\xa51,100'\n >>> format_currency(1099.98, 'COP', u'#,##0.00', locale='es_ES')\n u'1.100'\n\n However, the number of decimal digits can be overridden from the currency\n information, by setting the last parameter to ``False``:\n\n >>> format_currency(1099.98, 'JPY', locale='en_US', currency_digits=False)\n u'\\\\xa51,099.98'\n >>> format_currency(1099.98, 'COP', u'#,##0.00', locale='es_ES',\n ... currency_digits=False)\n u'1.099,98'\n\n If a format is not specified the type of currency format to use\n from the locale can be specified:\n\n >>> format_currency(1099.98, 'EUR', locale='en_US', format_type='standard')\n u'\\\\u20ac1,099.98'\n\n When the given currency format type is not available, an exception is\n raised:\n\n >>> format_currency('1099.98', 'EUR', locale='root', format_type='unknown')\n Traceback (most recent call last):\n ...\n UnknownCurrencyFormatError: \"'unknown' is not a known currency format type\"\n\n By default the locale is allowed to truncate and round a high-precision\n number by forcing its format pattern onto the decimal part. You can bypass\n this behavior with the `decimal_quantization` parameter:\n\n >>> format_currency(1099.9876, 'USD', locale='en_US')\n u'$1,099.99'\n >>> format_currency(1099.9876, 'USD', locale='en_US',\n ... decimal_quantization=False)\n u'$1,099.9876'\n\n :param number: the number to format\n :param currency: the currency code\n :param format: the format string to use\n :param locale: the `Locale` object or locale identifier\n :param currency_digits: use the currency's natural number of decimal digits\n :param format_type: the currency format type to use\n :param decimal_quantization: Truncate and round high-precision numbers to\n the format pattern. Defaults to `True`."}
{"_id": "q_14665", "text": "Parse number format patterns"}
{"_id": "q_14666", "text": "Return minimal quantum of a number, as defined by precision."}
{"_id": "q_14667", "text": "Return maximum precision of a decimal instance's fractional part.\n Precision is extracted from the fractional part only."}
{"_id": "q_14668", "text": "Returns normalized scientific notation components of a value."}
{"_id": "q_14669", "text": "Python 2.6 compatibility"}
{"_id": "q_14670", "text": "Yield ``Requirement`` objects for each specification in `strs`\n\n `strs` must be a string, or a (possibly-nested) iterable thereof."}
{"_id": "q_14671", "text": "Fetch an egg needed for building"}
{"_id": "q_14672", "text": "Roll n-sided dice and return each result and the total"}
{"_id": "q_14673", "text": "Ensures that string prices are converted into Price objects."}
{"_id": "q_14674", "text": "Price field for attrs.\n\n See `help(attr.ib)` for full signature.\n\n Usage:\n\n >>> from pricing import fields\n ... @attr.s\n ... class Test:\n ... price: Price = fields.price(default='USD 5.00')\n ...\n ... Test()\n Test(price=USD 5.00)"}
{"_id": "q_14675", "text": "Validate JSON-RPC request.\n\n :param request: RPC request object\n :type request: dict"}
{"_id": "q_14676", "text": "Get request method for service application."}
{"_id": "q_14677", "text": "Apply application method."}
{"_id": "q_14678", "text": "The name of the current blueprint"}
{"_id": "q_14679", "text": "Since Flask 0.8 we're monkeypatching the files object in case a\n request is detected that does not use multipart form data but the files\n object is accessed."}
{"_id": "q_14680", "text": "Factory to make an abstract dist object.\n\n Preconditions: Either an editable req with a source_dir, or satisfied_by or\n a wheel link, or a non-editable req with a source_dir.\n\n :return: A concrete DistAbstraction."}
{"_id": "q_14681", "text": "Add install_req as a requirement to install.\n\n :param parent_req_name: The name of the requirement that needed this\n added. The name is used because when multiple unnamed requirements\n resolve to the same name, we could otherwise end up with dependency\n links that point outside the Requirements set. parent_req must\n already be added. Note that None implies that this is a user\n supplied requirement, vs an inferred one.\n :return: Additional requirements to scan. That is either [] if\n the requirement is not applicable, or [install_req] if the\n requirement is applicable and has just been added."}
{"_id": "q_14682", "text": "Call handler for all pending reqs.\n\n :param handler: Handle a single requirement. Should take a requirement\n to install. Can optionally return an iterable of additional\n InstallRequirements to cover."}
{"_id": "q_14683", "text": "Return sorted list of all package namespaces"}
{"_id": "q_14684", "text": "Convert QuerySet objects to their list counter-parts"}
{"_id": "q_14685", "text": "Gets the requested template for the given language.\n\n Args:\n language: string, the language of the template to look for.\n\n template_type: string, 'iterable' or 'singular'. \n An iterable template is needed when the value is an iterable\n and needs more unpacking, e.g. list, tuple. A singular template \n is needed when unpacking is complete and the value is singular, \n e.g. string, int, float.\n\n indentation: int, the indentation level.\n \n key: multiple types, the array key.\n\n val: multiple types, the array values\n\n Returns:\n string, template formatting for arrays by language."}
{"_id": "q_14686", "text": "Merge the annotations from tokens_old into tokens_new, when the\n tokens in the new document already existed in the old document."}
{"_id": "q_14687", "text": "Copy annotations from the tokens listed in src to the tokens in dest"}
{"_id": "q_14688", "text": "Combine adjacent tokens when there is no HTML between the tokens, \n and they share an annotation"}
{"_id": "q_14689", "text": "Serialize the list of tokens into a list of text chunks, calling\n markup_func around text to add annotations."}
{"_id": "q_14690", "text": "Given a list of tokens, return a generator of the chunks of\n text for the data in the tokens."}
{"_id": "q_14691", "text": "like locate_unbalanced_start, except handling end tags and\n possibly moving the point earlier in the document."}
{"_id": "q_14692", "text": "This function takes a list of chunks and produces a list of tokens."}
{"_id": "q_14693", "text": "Takes an lxml element el, and generates all the text chunks for\n that tag. Each start tag is a chunk, each word is a chunk, and each\n end tag is a chunk.\n\n If skip_tag is true, then the outermost container tag is\n not returned (just its contents)."}
{"_id": "q_14694", "text": "Splits some text into words. Includes trailing whitespace\n on each word when appropriate."}
{"_id": "q_14695", "text": "The text representation of the start tag for a tag."}
{"_id": "q_14696", "text": "Serialize a single lxml element as HTML. The serialized form\n includes the element's tail.\n\n If skip_outer is true, then don't serialize the outermost tag"}
{"_id": "q_14697", "text": "Extract the constant value of 'symbol' from 'code'\n\n If the name 'symbol' is bound to a constant value by the Python code\n object 'code', return that value. If 'symbol' is bound to an expression,\n return 'default'. Otherwise, return 'None'.\n\n Return value is based on the first assignment to 'symbol'. 'symbol' must\n be a global, or at least a non-\"fast\" local in the code block. That is,\n only 'STORE_NAME' and 'STORE_GLOBAL' opcodes are checked, and 'symbol'\n must be present in 'code.co_names'."}
{"_id": "q_14698", "text": "IE conditional comments basically embed HTML that the parser\n doesn't normally see. We can't allow anything like that, so\n we'll kill any comments that could be conditional."}
{"_id": "q_14699", "text": "Parse a whole document into a string."}
{"_id": "q_14700", "text": "Export the svn repository at the url to the destination location"}
{"_id": "q_14701", "text": "Return the maximum revision for all files under a given location"}
{"_id": "q_14702", "text": "Wraps a method so that it performs a check in debug mode if the\n first request was already handled."}
{"_id": "q_14703", "text": "Returns the value of the `PROPAGATE_EXCEPTIONS` configuration\n value in case it's set, otherwise a sensible default is returned.\n\n .. versionadded:: 0.7"}
{"_id": "q_14704", "text": "Update the template context with some commonly used variables.\n This injects request, session, config and g into the template\n context as well as everything template context processors want\n to inject. Note that as of Flask 0.6, the original values\n in the context will not be overridden if a context processor\n decides to return a value with the same key.\n\n :param context: the context as a dictionary that is updated in place\n to add extra variables."}
{"_id": "q_14705", "text": "Handles an HTTP exception. By default this will invoke the\n registered error handlers and fall back to returning the\n exception as response.\n\n .. versionadded:: 0.3"}
{"_id": "q_14706", "text": "Checks if an HTTP exception should be trapped or not. By default\n this will return `False` for all exceptions except for a bad request\n key error if ``TRAP_BAD_REQUEST_ERRORS`` is set to `True`. It\n also returns `True` if ``TRAP_HTTP_EXCEPTIONS`` is set to `True`.\n\n This is called for all HTTP exceptions raised by a view function.\n If it returns `True` for any exception the error handler for this\n exception is not called and it shows up as regular exception in the\n traceback. This is helpful for debugging implicitly raised HTTP\n exceptions.\n\n .. versionadded:: 0.8"}
{"_id": "q_14707", "text": "Exceptions that are recording during routing are reraised with\n this method. During debug we are not reraising redirect requests\n for non ``GET``, ``HEAD``, or ``OPTIONS`` requests and we're raising\n a different error instead to help debug situations.\n\n :internal:"}
{"_id": "q_14708", "text": "Dispatches the request and on top of that performs request\n pre and postprocessing as well as HTTP exception catching and\n error handling.\n\n .. versionadded:: 0.7"}
{"_id": "q_14709", "text": "Creates a URL adapter for the given request. The URL adapter\n is created at a point where the request context is not yet set up\n so the request is passed explicitly.\n\n .. versionadded:: 0.6\n\n .. versionchanged:: 0.9\n This can now also be called without a request object when the\n URL adapter is created for the application context."}
{"_id": "q_14710", "text": "Only API function for the config module.\n\n :return: {dict} loaded validated configuration."}
{"_id": "q_14711", "text": "Yield unique values in iterable, preserving order."}
{"_id": "q_14712", "text": "return modules that match module_name"}
{"_id": "q_14713", "text": "the partial self.class_name will be used to find actual TestCase classes"}
{"_id": "q_14714", "text": "return the actual test methods that matched self.method_name"}
{"_id": "q_14715", "text": "check if name combined with test prefixes or postfixes is found anywhere\n in the list of basenames\n\n :param name: string, the name you're searching for\n :param basenames: list, a list of basenames to check\n :param is_prefix: bool, True if this is a prefix search, which means it will\n also check if name matches any of the basenames without the prefixes or\n postfixes, if it is False then the prefixes or postfixes must be present\n (ie, the module we're looking for is the actual test module, not the parent\n modules it's contained in)\n :returns: string, the basename if it is found"}
{"_id": "q_14716", "text": "Returns true if the passed in path is a test module path\n\n :param path: string, the path to check, will need to start or end with the\n module test prefixes or postfixes to be considered valid\n :returns: boolean, True if a test module path, False otherwise"}
{"_id": "q_14717", "text": "Walk all the directories of basedir except hidden directories\n\n :param basedir: string, the directory to walk\n :returns: generator, same as os.walk"}
{"_id": "q_14718", "text": "given a basedir, yield all test modules paths recursively found in\n basedir that are test modules\n\n return -- generator"}
{"_id": "q_14719", "text": "Inject default arguments for dump functions."}
{"_id": "q_14720", "text": "Inject default arguments for load functions."}
{"_id": "q_14721", "text": "Sets multiple keys and values from a mapping.\n\n :param mapping: a mapping with the keys/values to set.\n :param timeout: the cache timeout for the key (if not specified,\n it uses the default timeout).\n :returns: Whether all given keys have been set.\n :rtype: boolean"}
{"_id": "q_14722", "text": "Increments the value of a key by `delta`. If the key does\n not yet exist it is initialized with `delta`.\n\n For supporting caches this is an atomic operation.\n\n :param key: the key to increment.\n :param delta: the delta to add.\n :returns: The new value or ``None`` for backend errors."}
{"_id": "q_14723", "text": "Ensure that a source_dir is set.\n\n This will create a temporary build dir if the name of the requirement\n isn't known yet.\n\n :param parent_dir: The ideal pip parent_dir for the source_dir.\n Generally src_dir for editables and build_dir for sdists.\n :return: self.source_dir"}
{"_id": "q_14724", "text": "Remove the source files from this requirement, if they are marked\n for deletion"}
{"_id": "q_14725", "text": "This reads the buffered incoming data from the client into one\n bytestring. By default this is cached but that behavior can be\n changed by setting `cache` to `False`.\n\n Usually it's a bad idea to call this method without checking the\n content length first as a client could send dozens of megabytes or more\n to cause memory problems on the server.\n\n Note that if the form data was already parsed this method will not\n return anything as form data parsing does not cache the data like\n this method does. To implicitly invoke form data parsing function\n set `parse_form_data` to `True`. When this is done the return value\n of this method will be an empty string if the form parser handles\n the data. This generally is not necessary as if the whole data is\n cached (which is the default) the form parser will use the cached\n data to parse the form data. Please be generally aware of checking\n the content length first in any case before calling this method\n to avoid exhausting server memory.\n\n If `as_text` is set to `True` the return value will be a decoded\n unicode string.\n\n .. versionadded:: 0.9"}
{"_id": "q_14726", "text": "This is automatically called right before the response is started\n and returns headers modified for the given environment. It returns a\n copy of the headers from the response with some modifications applied\n if necessary.\n\n For example the location header (if present) is joined with the root\n URL of the environment. Also the content length is automatically set\n to zero here for certain status codes.\n\n .. versionchanged:: 0.6\n Previously that function was called `fix_headers` and modified\n the response object in place. Also since 0.6, IRIs in location\n and content-location headers are handled properly.\n\n Also starting with 0.6, Werkzeug will attempt to set the content\n length if it is able to figure it out on its own. This is the\n case if all the strings in the response iterable are already\n encoded and the iterable is buffered.\n\n :param environ: the WSGI environment of the request.\n :return: returns a new :class:`~werkzeug.datastructures.Headers`\n object."}
{"_id": "q_14727", "text": "r\"\"\"\n Return full path to the user-specific cache dir for this application.\n\n \"appname\" is the name of application.\n\n Typical user cache directories are:\n Mac OS X: ~/Library/Caches/<AppName>\n Unix: ~/.cache/<AppName> (XDG default)\n Windows: C:\\Users\\<username>\\AppData\\Local\\<AppName>\\Cache\n\n On Windows the only suggestion in the MSDN docs is that local settings go\n in the `CSIDL_LOCAL_APPDATA` directory. This is identical to the\n non-roaming app data dir (the default returned by `user_data_dir`). Apps\n typically put cache data somewhere *under* the given dir here. Some\n examples:\n ...\\Mozilla\\Firefox\\Profiles\\<ProfileName>\\Cache\n ...\\Acme\\SuperApp\\Cache\\1.0\n\n OPINION: This function appends \"Cache\" to the `CSIDL_LOCAL_APPDATA` value."}
{"_id": "q_14728", "text": "Return full path to the user-specific data dir for this application.\n\n \"appname\" is the name of application.\n If None, just the system directory is returned.\n \"roaming\" (boolean, default False) can be set True to use the Windows\n roaming appdata directory. That means that for users on a Windows\n network setup for roaming profiles, this user data will be\n sync'd on login. See\n <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>\n for a discussion of issues.\n\n Typical user data directories are:\n Mac OS X: ~/Library/Application Support/<AppName>\n Unix: ~/.local/share/<AppName> # or in\n $XDG_DATA_HOME, if defined\n Win XP (not roaming): C:\\Documents and Settings\\<username>\\ ...\n ...Application Data\\<AppName>\n Win XP (roaming): C:\\Documents and Settings\\<username>\\Local ...\n ...Settings\\Application Data\\<AppName>\n Win 7 (not roaming): C:\\\\Users\\<username>\\AppData\\Local\\<AppName>\n Win 7 (roaming): C:\\\\Users\\<username>\\AppData\\Roaming\\<AppName>\n\n For Unix, we follow the XDG spec and support $XDG_DATA_HOME.\n That means, by default \"~/.local/share/<AppName>\"."}
{"_id": "q_14729", "text": "Return full path to the user-specific log dir for this application.\n\n \"appname\" is the name of application.\n If None, just the system directory is returned.\n\n Typical user cache directories are:\n Mac OS X: ~/Library/Logs/<AppName>\n Unix: ~/.cache/<AppName>/log # or under $XDG_CACHE_HOME if\n defined\n Win XP: C:\\Documents and Settings\\<username>\\Local Settings\\ ...\n ...Application Data\\<AppName>\\Logs\n Vista: C:\\\\Users\\<username>\\AppData\\Local\\<AppName>\\Logs\n\n On Windows the only suggestion in the MSDN docs is that local settings\n go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in\n examples of what some windows apps use for a logs dir.)\n\n OPINION: This function appends \"Logs\" to the `CSIDL_LOCAL_APPDATA`\n value for Windows and appends \"log\" to the user cache dir for Unix."}
{"_id": "q_14730", "text": "Return full path to the user-specific config dir for this application.\n\n \"appname\" is the name of application.\n If None, just the system directory is returned.\n \"roaming\" (boolean, default True) can be set False to not use the\n Windows roaming appdata directory. That means that for users on a\n Windows network setup for roaming profiles, this user data will be\n sync'd on login. See\n <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx>\n for a discussion of issues.\n\n Typical user data directories are:\n Mac OS X: same as user_data_dir\n Unix: ~/.config/<AppName>\n Win *: same as user_data_dir\n\n For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME.\n That means, by deafult \"~/.config/<AppName>\"."}
{"_id": "q_14731", "text": "This iterates over all relevant Python files. It goes through all\n loaded files from modules, all files in folders of already loaded modules\n as well as all files reachable through a package."}
{"_id": "q_14732", "text": "Create a reusable class from a generator function\n\n Parameters\n ----------\n func: GeneratorCallable[T_yield, T_send, T_return]\n the function to wrap\n\n Note\n ----\n * the callable must have an inspectable signature\n * If bound to a class, the new reusable generator is callable as a method.\n To opt out of this, add a :func:`staticmethod` decorator above\n this decorator."}
{"_id": "q_14733", "text": "Apply a function to all ``send`` values of a generator\n\n Parameters\n ----------\n func: ~typing.Callable[[T_send], T_mapped]\n the function to apply\n gen: Generable[T_yield, T_mapped, T_return]\n the generator iterable.\n\n Returns\n -------\n ~typing.Generator[T_yield, T_send, T_return]\n the mapped generator"}
{"_id": "q_14734", "text": "Wrapper around six.text_type to convert None to empty string"}
{"_id": "q_14735", "text": "Parse a string or file-like object into a tree"}
{"_id": "q_14736", "text": "Parse a HTML document into a well-formed tree\n\n stream - a filelike object or string containing the HTML to be parsed\n\n The optional encoding parameter must be a string that indicates\n the encoding. If specified, that encoding will be used,\n regardless of any BOM or later declaration (such as in a meta\n element)"}
{"_id": "q_14737", "text": "this converts the readin lines from\n sys to useable format, returns list\n of token and dict of tokens"}
{"_id": "q_14738", "text": "Loads bytecode from a file or file like object."}
{"_id": "q_14739", "text": "Return a copy of paramsDict, updated with kwargsDict entries, wrapped as\n stylesheet arguments.\n kwargsDict entries with a value of None are ignored."}
{"_id": "q_14740", "text": "Run a VCS subcommand\n This is simply a wrapper around call_subprocess that adds the VCS\n command name, and checks that the VCS is available"}
{"_id": "q_14741", "text": "Return implementation version."}
{"_id": "q_14742", "text": "Find rel=\"homepage\" and rel=\"download\" links in `page`, yielding URLs"}
{"_id": "q_14743", "text": "Read a local path, with special support for directories"}
{"_id": "q_14744", "text": "Remove duplicate entries from sys.path along with making them\n absolute"}
{"_id": "q_14745", "text": "Add a new path to known_paths by combining sitedir and 'name' or execute\n sitedir if it starts with 'import"}
{"_id": "q_14746", "text": "Check if user site directory is safe for inclusion\n\n The function tests for the command line flag (including environment var),\n process uid/gid equal to effective uid/gid.\n\n None: Disabled for security reasons\n False: Disabled by user (command line option)\n True: Safe and enabled"}
{"_id": "q_14747", "text": "Add a per user site-package to sys.path\n\n Each user has its own python directory with site-packages in the\n home directory.\n\n USER_BASE is the root directory for all Python versions\n\n USER_SITE is the user specific site-packages directory\n\n USER_SITE/.. can be used for data."}
{"_id": "q_14748", "text": "Define new built-ins 'quit' and 'exit'.\n These are simply strings that display a hint on how to exit."}
{"_id": "q_14749", "text": "Set the string encoding used by the Unicode implementation. The\n default is 'ascii', but if you're willing to experiment, you can\n change this."}
{"_id": "q_14750", "text": "Force easy_installed eggs in the global environment to get placed\n in sys.path after all packages inside the virtualenv. This\n maintains the \"least surprise\" result that packages in the\n virtualenv always mask global packages, never the other way\n around."}
{"_id": "q_14751", "text": "Adjust the special classpath sys.path entries for Jython. These\n entries should follow the base virtualenv lib directories."}
{"_id": "q_14752", "text": "Replace sources with .pyx extensions to sources with the target\n language extension. This mechanism allows language authors to supply\n pre-converted sources but to prefer the .pyx sources."}
{"_id": "q_14753", "text": "Copies a file from its location on the web to a designated \n place on the local machine.\n\n Args:\n file_path: Complete url of the file to copy, string (e.g. http://fool.com/input.css).\n\n target_path: Path and name of file on the local machine, string. (e.g. /directory/output.css)\n\n Returns:\n None."}
{"_id": "q_14754", "text": "Indentes css that has not been indented and saves it to a new file.\n A new file is created if the output destination does not already exist.\n\n Args:\n f: string, path to file.\n\n output: string, path/name of the output file (e.g. /directory/output.css).\n print type(response.read())\n\n Returns:\n None."}
{"_id": "q_14755", "text": "Adds line breaks after every occurance of a given character in a file.\n\n Args:\n f: string, path to input file.\n\n output: string, path to output file.\n\n Returns:\n None."}
{"_id": "q_14756", "text": "Reformats poorly written css. This function does not validate or fix errors in the code.\n It only gives code the proper indentation. \n\n Args:\n input_file: string, path to the input file.\n\n output_file: string, path to where the reformatted css should be saved. If the target file\n doesn't exist, a new file is created.\n\n Returns:\n None."}
{"_id": "q_14757", "text": "Take a list of strings and clear whitespace \n on each one. If a value in the list is not a \n string pass it through untouched.\n\n Args:\n iterable: mixed list\n\n Returns: \n mixed list"}
{"_id": "q_14758", "text": "Calculates the median of a list of integers or floating point numbers.\n\n Args:\n data: A list of integers or floating point numbers\n\n Returns:\n Sorts the list numerically and returns the middle number if the list has an odd number\n of items. If the list contains an even number of items the mean of the two middle numbers\n is returned."}
{"_id": "q_14759", "text": "Calculates the average or mean of a list of numbers\n\n Args:\n numbers: a list of integers or floating point numbers.\n\n numtype: string, 'decimal' or 'float'; the type of number to return.\n\n Returns:\n The average (mean) of the numbers as a floating point number\n or a Decimal object.\n\n Requires:\n The math module"}
{"_id": "q_14760", "text": "Calculates the population or sample variance of a list of numbers.\n A large number means the results are all over the place, while a\n small number means the results are comparatively close to the average.\n\n Args:\n numbers: a list of integers or floating point numbers to compare.\n\n type: string, 'population' or 'sample', the kind of variance to be computed.\n\n Returns:\n The computed population or sample variance.\n Defaults to population variance.\n\n Requires:\n The math module, average()"}
{"_id": "q_14761", "text": "Finds the percentage of one number over another.\n\n Args:\n a: The number that is a percent, int or float.\n\n b: The base number that a is a percent of, int or float.\n\n i: Optional boolean integer. True if the user wants the result returned as\n a whole number. Assumes False.\n\n r: Optional boolean round. True if the user wants the result rounded.\n Rounds to the second decimal point on floating point numbers. Assumes False.\n\n Returns:\n The argument a as a percentage of b. Throws a warning if integer is set to True\n and round is set to False."}
{"_id": "q_14762", "text": "Return a static resource from the shared folder."}
{"_id": "q_14763", "text": "Return a string representing the user agent."}
{"_id": "q_14764", "text": "Download link url into temp_dir using provided session"}
{"_id": "q_14765", "text": "Check download_dir for previously downloaded file with correct hash\n If a correct file is found return its path else None"}
{"_id": "q_14766", "text": "Handle currencyFormat subdirectives."}
{"_id": "q_14767", "text": "Handle exchange subdirectives."}
{"_id": "q_14768", "text": "Default template context processor. Injects `request`,\n `session` and `g`."}
{"_id": "q_14769", "text": "Renders a template from the template folder with the given\n context.\n\n :param template_name_or_list: the name of the template to be\n rendered, or an iterable with template names\n the first one existing will be rendered\n :param context: the variables that should be available in the\n context of the template."}
{"_id": "q_14770", "text": "Renders a template from the given template source string\n with the given context.\n\n :param source: the sourcecode of the template to be\n rendered\n :param context: the variables that should be available in the\n context of the template."}
{"_id": "q_14771", "text": "Use parse_version from pkg_resources or distutils as available."}
{"_id": "q_14772", "text": "All assignments to names go through this function."}
{"_id": "q_14773", "text": "Handles includes."}
{"_id": "q_14774", "text": "Visit named imports."}
{"_id": "q_14775", "text": "Create a whl file from all the files under 'base_dir'.\n\n Places .dist-info at the end of the archive."}
{"_id": "q_14776", "text": "Decorate a function with a reentrant lock to prevent multiple\r\n\tthreads from calling said thread simultaneously."}
{"_id": "q_14777", "text": "Create service, start server.\n\n :param app: application to instantiate a service\n :param host: interface to bound provider\n :param port: port to bound provider\n :param report_message: message format to report port\n :param provider_cls: server class provide a service"}
{"_id": "q_14778", "text": "List of wheels matching a requirement.\n\n :param req: The requirement to satisfy\n :param wheels: List of wheels to search."}
{"_id": "q_14779", "text": "Lookup an Amazon Product.\n\n :return:\n An instance of :class:`~.AmazonProduct` if one item was returned,\n or a list of :class:`~.AmazonProduct` instances if multiple\n items where returned."}
{"_id": "q_14780", "text": "This browse node's immediate ancestor in the browse node tree.\n\n :return:\n The ancestor as an :class:`~.AmazonBrowseNode`, or None."}
{"_id": "q_14781", "text": "This browse node's children in the browse node tree.\n\n :return:\n A list of this browse node's children in the browse node tree."}
{"_id": "q_14782", "text": "Safe Get Element.\n\n Get a child element of root (multiple levels deep) failing silently\n if any descendant does not exist.\n\n :param root:\n Lxml element.\n :param path:\n String path (i.e. 'Items.Item.Offers.Offer').\n :return:\n Element or None."}
{"_id": "q_14783", "text": "Get Offer Price and Currency.\n\n Return price according to the following process:\n\n * If product has a sale return Sales Price, otherwise,\n * Return Price, otherwise,\n * Return lowest offer price, otherwise,\n * Return None.\n\n :return:\n A tuple containing:\n\n 1. Float representation of price.\n 2. ISO Currency code (string)."}
{"_id": "q_14784", "text": "List Price.\n\n :return:\n A tuple containing:\n\n 1. Float representation of price.\n 2. ISO Currency code (string)."}
{"_id": "q_14785", "text": "Build a response by making a request or using the cache.\n\n This will end up calling send and returning a potentially\n cached response"}
{"_id": "q_14786", "text": "Returns a callable that looks up the given attribute from a\n passed object with the rules of the environment. Dots are allowed\n to access attributes of attributes. Integer parts in paths are\n looked up as integers."}
{"_id": "q_14787", "text": "Return a titlecased version of the value. I.e. words will start with\n uppercase letters, all remaining characters are lowercase."}
{"_id": "q_14788", "text": "Sort an iterable. Per default it sorts ascending, if you pass it\n true as first argument it will reverse the sorting.\n\n If the iterable is made of strings the third parameter can be used to\n control the case sensitiveness of the comparison which is disabled by\n default.\n\n .. sourcecode:: jinja\n\n {% for item in iterable|sort %}\n ...\n {% endfor %}\n\n It is also possible to sort by an attribute (for example to sort\n by the date of an object) by specifying the `attribute` parameter:\n\n .. sourcecode:: jinja\n\n {% for item in iterable|sort(attribute='date') %}\n ...\n {% endfor %}\n\n .. versionchanged:: 2.6\n The `attribute` parameter was added."}
{"_id": "q_14789", "text": "Group a sequence of objects by a common attribute.\n\n If you for example have a list of dicts or objects that represent persons\n with `gender`, `first_name` and `last_name` attributes and you want to\n group all users by genders you can do something like the following\n snippet:\n\n .. sourcecode:: html+jinja\n\n <ul>\n {% for group in persons|groupby('gender') %}\n <li>{{ group.grouper }}<ul>\n {% for person in group.list %}\n <li>{{ person.first_name }} {{ person.last_name }}</li>\n {% endfor %}</ul></li>\n {% endfor %}\n </ul>\n\n Additionally it's possible to use tuple unpacking for the grouper and\n list:\n\n .. sourcecode:: html+jinja\n\n <ul>\n {% for grouper, list in persons|groupby('gender') %}\n ...\n {% endfor %}\n </ul>\n\n As you can see the item we're grouping by is stored in the `grouper`\n attribute and the `list` contains all the objects that have this grouper\n in common.\n\n .. versionchanged:: 2.6\n It's now possible to use dotted notation to group by the child\n attribute of another attribute."}
{"_id": "q_14790", "text": "Creates a logger for the given application. This logger works\n similar to a regular Python logger but changes the effective logging\n level based on the application's debug flag. Furthermore this\n function also removes all attached handlers in case there was a\n logger with the log name before."}
{"_id": "q_14791", "text": "Returns True if the two strings are equal, False otherwise.\n\n The time taken is independent of the number of characters that match. Do\n not use this function for anything else than comparision with known\n length targets.\n\n This is should be implemented in C in order to get it completely right."}
{"_id": "q_14792", "text": "This method is called to derive the key. If you're unhappy with\n the default key derivation choices you can override them here.\n Keep in mind that the key derivation in itsdangerous is not intended\n to be used as a security method to make a complex key out of a short\n password. Instead you should use large random secret keys."}
{"_id": "q_14793", "text": "Unsigns the given string."}
{"_id": "q_14794", "text": "Returns a signed string serialized with the internal serializer.\n The return value can be either a byte or unicode string depending\n on the format of the internal serializer."}
{"_id": "q_14795", "text": "JSON-RPC server error.\n\n :param request_id: JSON-RPC request id\n :type request_id: int or str or None\n :param error: server error\n :type error: Exception"}
{"_id": "q_14796", "text": "Return a list all Python packages found within directory 'where'\n\n 'where' should be supplied as a \"cross-platform\" (i.e. URL-style)\n path; it will be converted to the appropriate local path syntax.\n 'exclude' is a sequence of package names to exclude; '*' can be used\n as a wildcard in the names, such that 'foo.*' will exclude all\n subpackages of 'foo' (but not 'foo' itself).\n\n 'include' is a sequence of package names to include. If it's\n specified, only the named packages will be included. If it's not\n specified, all found packages will be included. 'include' can contain\n shell style wildcard patterns just like 'exclude'.\n\n The list of included packages is built up first and then any\n explicitly excluded packages are removed from it."}
{"_id": "q_14797", "text": "Exclude any apparent package that apparently doesn't include its\n parent.\n\n For example, exclude 'foo.bar' if 'foo' is not present."}
{"_id": "q_14798", "text": "Verify our vary headers match and construct a real urllib3\n HTTPResponse object."}
{"_id": "q_14799", "text": "Remove RECORD.jws from a wheel by truncating the zip file.\n \n RECORD.jws must be at the end of the archive. The zip file must be an \n ordinary archive, with the compressed files and the directory in the same \n order, and without any non-zip content after the truncation point."}
{"_id": "q_14800", "text": "Unpack a wheel.\n\n Wheel content will be unpacked to {dest}/{name}-{ver}, where {name}\n is the package name and {ver} its version.\n\n :param wheelfile: The path to the wheel.\n :param dest: Destination directory (default to current directory)."}
{"_id": "q_14801", "text": "Regenerate the entry_points console_scripts for the named distribution."}
{"_id": "q_14802", "text": "Sets for the _draw_ and _ldraw_ attributes for each of the graph\n sub-elements by processing the xdot format of the graph."}
{"_id": "q_14803", "text": "Parses the Xdot attributes of all graph components and adds\n the components to a new canvas."}
{"_id": "q_14804", "text": "Returns a node given an ID or None if no such node exists."}
{"_id": "q_14805", "text": "Handles the list of edges for any graph changing."}
{"_id": "q_14806", "text": "Get datetime string from datetime object\n\n :param datetime datetime_obj: datetime object\n :return: datetime string\n :rtype: str"}
{"_id": "q_14807", "text": "Handles the left mouse button being double-clicked when the tool\n is in the 'normal' state.\n\n If the event occurred on this tool's component (or any contained\n component of that component), the method opens a Traits UI view on the\n object referenced by the 'element' trait of the component that was\n double-clicked, setting the tool as the active tool for the duration\n of the view."}
{"_id": "q_14808", "text": "Handles the diagram canvas being set"}
{"_id": "q_14809", "text": "Removes all components from the canvas"}
{"_id": "q_14810", "text": "Handles the domain model changing"}
{"_id": "q_14811", "text": "Maps a domain model to the diagram"}
{"_id": "q_14812", "text": "Removes listeners from a domain model"}
{"_id": "q_14813", "text": "Parses xdot data and returns the associated components."}
{"_id": "q_14814", "text": "Sets the font."}
{"_id": "q_14815", "text": "Returns the components of a polygon."}
{"_id": "q_14816", "text": "Returns the components of a polyline."}
{"_id": "q_14817", "text": "Returns text components."}
{"_id": "q_14818", "text": "Returns the components of an image."}
{"_id": "q_14819", "text": "Load the file."}
{"_id": "q_14820", "text": "Test if the point is within this ellipse"}
{"_id": "q_14821", "text": "Draws the component bounds for testing purposes"}
{"_id": "q_14822", "text": "Perform the action."}
{"_id": "q_14823", "text": "values pipe extract value from previous pipe.\n\n If previous pipe send a dictionary to values pipe, keys should contains\n the key of dictionary which you want to get. If previous pipe send list or\n tuple,\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :returns: generator"}
{"_id": "q_14824", "text": "pack pipe takes n elements from previous generator and yield one\n list to next.\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param rest: Set True to allow to output the rest part of last elements.\n :type prev: boolean\n :param padding: Specify the padding element for the rest part of last elements.\n :type prev: boolean\n :returns: generator\n\n :Example:\n >>> result([1,2,3,4,5,6,7] | pack(3))\n [[1, 2, 3], [4, 5, 6]]\n\n >>> result([1,2,3,4,5,6,7] | pack(3, rest=True))\n [[1, 2, 3], [4, 5, 6], [7,]]\n\n >>> result([1,2,3,4,5,6,7] | pack(3, padding=None))\n [[1, 2, 3], [4, 5, 6], [7, None, None]]"}
{"_id": "q_14825", "text": "The pipe greps the data passed from previous generator according to\n given regular expression.\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param pattern: The pattern which used to filter out data.\n :type pattern: str|unicode|re pattern object\n :param inv: If true, invert the match condition.\n :type inv: boolean\n :param kw:\n :type kw: dict\n :returns: generator"}
{"_id": "q_14826", "text": "The pipe greps the data passed from previous generator according to\n given regular expression. The data passed to next pipe is MatchObject\n , dict or tuple which determined by 'to' in keyword argument.\n\n By default, match pipe yields MatchObject. Use 'to' in keyword argument\n to change the type of match result.\n\n If 'to' is dict, yield MatchObject.groupdict().\n If 'to' is tuple, yield MatchObject.groups().\n If 'to' is list, yield list(MatchObject.groups()).\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param pattern: The pattern which used to filter data.\n :type pattern: str|unicode\n :param to: What data type the result should be stored. dict|tuple|list\n :type to: type\n :returns: generator"}
{"_id": "q_14827", "text": "The resplit pipe split previous pipe input by regular expression.\n\n Use 'maxsplit' keyword argument to limit the number of split.\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param pattern: The pattern which used to split string.\n :type pattern: str|unicode"}
{"_id": "q_14828", "text": "wildcard pipe greps data passed from previous generator\n according to given regular expression.\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param pattern: The wildcard string which used to filter data.\n :type pattern: str|unicode|re pattern object\n :param inv: If true, invert the match condition.\n :type inv: boolean\n :returns: generator"}
{"_id": "q_14829", "text": "This pipe read data from previous iterator and write it to stdout.\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param endl: The end-of-line symbol for each output.\n :type endl: str\n :param thru: If true, data will passed to next generator. If false, data\n will be dropped.\n :type thru: bool\n :returns: generator"}
{"_id": "q_14830", "text": "This pipe get filenames or file object from previous pipe and read the\n content of file. Then, send the content of file line by line to next pipe.\n\n The start and end parameters are used to limit the range of reading from file.\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param filename: The files to be read. If None, use previous pipe input as filenames.\n :type filename: None|str|unicode|list|tuple\n :param mode: The mode to open file. default is 'r'\n :type mode: str\n :param trim: The function to trim the line before send to next pipe.\n :type trim: function object.\n :param start: if star is specified, only line number larger or equal to start will be sent.\n :type start: integer\n :param end: The last line number to read.\n :type end: integer\n :returns: generator"}
{"_id": "q_14831", "text": "sh pipe execute shell command specified by args. If previous pipe exists,\n read data from it and write it to stdin of shell process. The stdout of\n shell process will be passed to next pipe object line by line.\n\n A optional keyword argument 'trim' can pass a function into sh pipe. It is\n used to trim the output from shell process. The default trim function is\n str.rstrip. Therefore, any space characters in tail of\n shell process output line will be removed.\n\n For example:\n\n py_files = result(sh('ls') | strip | wildcard('*.py'))\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param args: The command line arguments. It will be joined by space character.\n :type args: list of string.\n :param kw: arguments for subprocess.Popen.\n :type kw: dictionary of options.\n :returns: generator"}
{"_id": "q_14832", "text": "This pipe wrap os.walk and yield absolute path one by one.\n\n :param prev: The previous iterator of pipe.\n :type prev: Pipe\n :param args: The end-of-line symbol for each output.\n :type args: list of string.\n :param kw: The end-of-line symbol for each output.\n :type kw: dictionary of options. Add 'endl' in kw to specify end-of-line symbol.\n :returns: generator"}
{"_id": "q_14833", "text": "alias of str.join"}
{"_id": "q_14834", "text": "alias of string.Template.substitute"}
{"_id": "q_14835", "text": "alias of string.Template.safe_substitute"}
{"_id": "q_14836", "text": "Convert data from previous pipe with specified encoding."}
{"_id": "q_14837", "text": "Construct the SQLAlchemy engine and session factory."}
{"_id": "q_14838", "text": "Convert Paginator instance to dict\n\n :return: Paging data\n :rtype: dict"}
{"_id": "q_14839", "text": "Check that a process is not running more than once, using PIDFILE"}
{"_id": "q_14840", "text": "Run a program and check program return code Note that some commands don't work\n well with Popen. So if this function is specifically called with 'shell=True',\n then it will run the old 'os.system'. In which case, there is no program output"}
{"_id": "q_14841", "text": "Yield each integer from a complex range string like \"1-9,12,15-20,23\"\n\n >>> list(parse_address_list('1-9,12,15-20,23'))\n [1, 2, 3, 4, 5, 6, 7, 8, 9, 12, 15, 16, 17, 18, 19, 20, 23]\n\n >>> list(parse_address_list('1-9,12,15-20,2-3-4'))\n Traceback (most recent call last):\n ...\n ValueError: format error in 2-3-4"}
{"_id": "q_14842", "text": "Handles the new Graph action."}
{"_id": "q_14843", "text": "Handles the open action."}
{"_id": "q_14844", "text": "Handles saving the current model to the last file."}
{"_id": "q_14845", "text": "Handles saving the current model to file."}
{"_id": "q_14846", "text": "Handles display of the edges editor."}
{"_id": "q_14847", "text": "Handles displaying a view about Godot."}
{"_id": "q_14848", "text": "Handles adding a Subgraph to the main graph."}
{"_id": "q_14849", "text": "Displays a dialog for graph selection if more than one exists.\n Returns None if the dialog is canceled."}
{"_id": "q_14850", "text": "Handles display of the options menu."}
{"_id": "q_14851", "text": "Load the object to a given file like object with the given\n protocol."}
{"_id": "q_14852", "text": "Return an instance of the class that is saved in the file with the\n given filename in the specified format."}
{"_id": "q_14853", "text": "Syntactically concise alias trait but creates a pair of lambda\n functions for every alias you declare.\n\n class MyClass(HasTraits):\n line_width = Float(3.0)\n thickness = Alias(\"line_width\")"}
{"_id": "q_14854", "text": "!DEMO!\n Cached list of keys that can be used to generate sentence."}
{"_id": "q_14855", "text": "Remove chain from current shelve file\n\n Args:\n name: chain name"}
{"_id": "q_14856", "text": "Build markov chain from source on top of existin chain\n\n Args:\n source: iterable which will be used to build chain\n chain: MarkovChain in currently loaded shelve file that\n will be extended by source"}
{"_id": "q_14857", "text": "!DEMO!\n Demo function that shows how to generate a simple sentence starting with\n uppercase letter without lenght limit.\n\n Args:\n chain: MarkovChain that will be used to generate sentence"}
{"_id": "q_14858", "text": "Adds a node to the graph."}
{"_id": "q_14859", "text": "Removes a node from the graph."}
{"_id": "q_14860", "text": "Returns the node with the given ID or None."}
{"_id": "q_14861", "text": "Handles the Graphviz layout program selection changing."}
{"_id": "q_14862", "text": "Returns a graph given a file or a filename."}
{"_id": "q_14863", "text": "Build a Godot graph instance from parsed data."}
{"_id": "q_14864", "text": "Builds a Godot graph."}
{"_id": "q_14865", "text": "Given a duration in seconds, determines the best units and multiplier to\n use to display the time. Return value is a 2-tuple of units and multiplier."}
{"_id": "q_14866", "text": "Formats a number of seconds using the best units."}
{"_id": "q_14867", "text": "Split a sequence into pieces of length n\n\n If the length of the sequence isn't a multiple of n, the rest is discarded.\n Note that nsplit will split strings into individual characters.\n\n Examples:\n >>> nsplit(\"aabbcc\")\n [(\"a\", \"a\"), (\"b\", \"b\"), (\"c\", \"c\")]\n >>> nsplit(\"aabbcc\",n=3)\n [(\"a\", \"a\", \"b\"), (\"b\", \"c\", \"c\")]\n\n # Note that cc is discarded\n >>> nsplit(\"aabbcc\",n=4)\n [(\"a\", \"a\", \"b\", \"b\")]"}
{"_id": "q_14868", "text": "Do url-encode resource ids"}
{"_id": "q_14869", "text": "Inserts a child into the object's children."}
{"_id": "q_14870", "text": "Deletes a child at a specified index from the object's children."}
{"_id": "q_14871", "text": "Sets up or removes a listener for children being replaced on a\n specified object."}
{"_id": "q_14872", "text": "Gets the label to display for a specified object."}
{"_id": "q_14873", "text": "Sets the label for a specified object."}
{"_id": "q_14874", "text": "Finishes initialising the editor by creating the underlying toolkit\n widget."}
{"_id": "q_14875", "text": "Adds the event listeners for a specified object."}
{"_id": "q_14876", "text": "Handles a list of nodes being set."}
{"_id": "q_14877", "text": "Adds a node to the graph for each item in 'features' using\n the GraphNodes from the editor factory."}
{"_id": "q_14878", "text": "Handles a list of edges being set."}
{"_id": "q_14879", "text": "Handles addition and removal of edges."}
{"_id": "q_14880", "text": "Adds an edge to the graph for each item in 'features' using\n the GraphEdges from the editor factory."}
{"_id": "q_14881", "text": "Handles parsing Xdot drawing directives."}
{"_id": "q_14882", "text": "Handles the containers of drawing components being set."}
{"_id": "q_14883", "text": "Give new nodes a unique ID."}
{"_id": "q_14884", "text": "Give new edges a unique ID."}
{"_id": "q_14885", "text": "Return an generator as iterator object.\n\n :param prev: Previous Pipe object which used for data input.\n :returns: A generator for iteration."}
{"_id": "q_14886", "text": "Wrap a reduce function to Pipe object. Reduce function is a function\n with at least two arguments. It works like built-in reduce function.\n It takes first argument for accumulated result, second argument for\n the new data to process. A keyword-based argument named 'init' is\n optional. If init is provided, it is used for the initial value of\n accumulated result. Or, the initial value is None.\n\n The first argument is the data to be converted. The return data from\n filter function should be a boolean value. If true, data can pass.\n Otherwise, data is omitted.\n\n :param func: The filter function to be wrapped.\n :type func: function object\n :param args: The default arguments to be used for filter function.\n :param kw: The default keyword arguments to be used for filter function.\n :returns: Pipe object"}
{"_id": "q_14887", "text": "Attach this connection's default database to the context using our alias."}
{"_id": "q_14888", "text": "Parses the drawing directive, updating the node components."}
{"_id": "q_14889", "text": "Parses the label drawing directive, updating the label\n components."}
{"_id": "q_14890", "text": "Handles the container of drawing components changing."}
{"_id": "q_14891", "text": "Handles the poition of the component changing."}
{"_id": "q_14892", "text": "Handles the Graphviz position attribute changing."}
{"_id": "q_14893", "text": "Handles the right mouse button being clicked when the tool is in\n the 'normal' state.\n\n If the event occurred on this tool's component (or any contained\n component of that component), the method opens a context menu with\n menu items from any tool of the parent component that implements\n MenuItemTool interface i.e. has a get_item() method."}
{"_id": "q_14894", "text": "Draws a closed polygon"}
{"_id": "q_14895", "text": "Test if a point is within this polygonal region"}
{"_id": "q_14896", "text": "Draws the Bezier component"}
{"_id": "q_14897", "text": "Return a dictionary of network name to active status bools.\n\n Sample virsh net-list output::\n\n Name State Autostart\n -----------------------------------------\n default active yes\n juju-test inactive no\n foobar inactive no\n\n Parsing the above would return::\n {\"default\": True, \"juju-test\": False, \"foobar\": False}\n\n See: http://goo.gl/kXwfC"}
{"_id": "q_14898", "text": "Broadcast an event to the database connections registered."}
{"_id": "q_14899", "text": "Method that gets run when the Worker thread is started.\n\n When there's an item in in_queue, it takes it out, passes it to func as an argument, and puts the result in out_queue."}
{"_id": "q_14900", "text": "runs the passed in arguments and returns an iterator on the output of\n running command"}
{"_id": "q_14901", "text": "Render the rel=prev and rel=next links to a Markup object for injection into a template"}
{"_id": "q_14902", "text": "Render the rel=canonical, rel=prev and rel=next links to a Markup object for injection into a template"}
{"_id": "q_14903", "text": "Selects the best content type.\n\n :param requested: a sequence of :class:`.ContentType` instances\n :param available: a sequence of :class:`.ContentType` instances\n that the server is capable of producing\n\n :returns: the selected content type (from ``available``) and the\n pattern that it matched (from ``requested``)\n :rtype: :class:`tuple` of :class:`.ContentType` instances\n :raises: :class:`.NoMatch` when a suitable match was not found\n\n This function implements the *Proactive Content Negotiation*\n algorithm as described in sections 3.4.1 and 5.3 of :rfc:`7231`.\n The input is the `Accept`_ header as parsed by\n :func:`.parse_http_accept_header` and a list of\n parsed :class:`.ContentType` instances. The ``available`` sequence\n should be a sequence of content types that the server is capable of\n producing. The selected value should ultimately be used as the\n `Content-Type`_ header in the generated response.\n\n .. _Accept: http://tools.ietf.org/html/rfc7231#section-5.3.2\n .. _Content-Type: http://tools.ietf.org/html/rfc7231#section-3.1.1.5"}
{"_id": "q_14904", "text": "Create a new URL from `input_url` with modifications applied.\n\n :param str input_url: the URL to modify\n\n :keyword str fragment: if specified, this keyword sets the\n fragment portion of the URL. A value of :data:`None`\n will remove the fragment portion of the URL.\n :keyword str host: if specified, this keyword sets the host\n portion of the network location. A value of :data:`None`\n will remove the network location portion of the URL.\n :keyword str password: if specified, this keyword sets the\n password portion of the URL. A value of :data:`None` will\n remove the password from the URL.\n :keyword str path: if specified, this keyword sets the path\n portion of the URL. A value of :data:`None` will remove\n the path from the URL.\n :keyword int port: if specified, this keyword sets the port\n portion of the network location. A value of :data:`None`\n will remove the port from the URL.\n :keyword query: if specified, this keyword sets the query portion\n of the URL. See the comments for a description of this\n parameter.\n :keyword str scheme: if specified, this keyword sets the scheme\n portion of the URL. A value of :data:`None` will remove\n the scheme. Note that this will make the URL relative and\n may have unintended consequences.\n :keyword str user: if specified, this keyword sets the user\n portion of the URL. A value of :data:`None` will remove\n the user and password portions.\n\n :keyword bool enable_long_host: if this keyword is specified\n and it is :data:`True`, then the host name length restriction\n from :rfc:`3986#section-3.2.2` is relaxed.\n :keyword bool encode_with_idna: if this keyword is specified\n and it is :data:`True`, then the ``host`` parameter will be\n encoded using IDN. If this value is provided as :data:`False`,\n then the percent-encoding scheme is used instead. If this\n parameter is omitted or included with a different value, then\n the ``host`` parameter is processed using :data:`IDNA_SCHEMES`.\n\n :return: the modified URL\n :raises ValueError: when a keyword parameter is given an invalid\n value\n\n If the `host` parameter is specified and not :data:`None`, then\n it will be processed as an Internationalized Domain Name (IDN)\n if the scheme appears in :data:`IDNA_SCHEMES`. Otherwise, it\n will be encoded as UTF-8 and percent encoded.\n\n The handling of the `query` parameter requires some additional\n explanation. You can specify a query value in three different\n ways - as a *mapping*, as a *sequence* of pairs, or as a *string*.\n This flexibility makes it possible to meet the wide range of\n finicky use cases.\n\n *If the query parameter is a mapping*, then the key + value pairs\n are *sorted by the key* before they are encoded. Use this method\n whenever possible.\n\n *If the query parameter is a sequence of pairs*, then each pair\n is encoded *in the given order*. Use this method if you require\n that parameter order is controlled.\n\n *If the query parameter is a string*, then it is *used as-is*.\n This form SHOULD BE AVOIDED since it can easily result in broken\n URLs since *no URL escaping is performed*. This is the obvious\n pass through case that is almost always present."}
{"_id": "q_14905", "text": "Removes the user & password and returns them along with a new url.\n\n :param str url: the URL to sanitize\n :return: a :class:`tuple` containing the authorization portion and\n the sanitized URL. The authorization is a simple user & password\n :class:`tuple`.\n\n >>> auth, sanitized = remove_url_auth('http://foo:bar@example.com')\n >>> auth\n ('foo', 'bar')\n >>> sanitized\n 'http://example.com'\n\n The return value from this function is simple named tuple with the\n following fields:\n\n - *auth* the username and password as a tuple\n - *username* the username portion of the URL or :data:`None`\n - *password* the password portion of the URL or :data:`None`\n - *url* the sanitized URL\n\n >>> result = remove_url_auth('http://me:secret@example.com')\n >>> result.username\n 'me'\n >>> result.password\n 'secret'\n >>> result.url\n 'http://example.com'"}
{"_id": "q_14906", "text": "Generate the user+password portion of a URL.\n\n :param str user: the user name or :data:`None`\n :param str password: the password or :data:`None`"}
{"_id": "q_14907", "text": "Normalize a host for a URL.\n\n :param str host: the host name to normalize\n\n :keyword bool enable_long_host: if this keyword is specified\n and it is :data:`True`, then the host name length restriction\n from :rfc:`3986#section-3.2.2` is relaxed.\n :keyword bool encode_with_idna: if this keyword is specified\n and it is :data:`True`, then the ``host`` parameter will be\n encoded using IDN. If this value is provided as :data:`False`,\n then the percent-encoding scheme is used instead. If this\n parameter is omitted or included with a different value, then\n the ``host`` parameter is processed using :data:`IDNA_SCHEMES`.\n :keyword str scheme: if this keyword is specified, then it is\n used to determine whether to apply IDN rules or not. This\n parameter is ignored if `encode_with_idna` is not :data:`None`.\n\n :return: the normalized and encoded string ready for inclusion\n into a URL"}
{"_id": "q_14908", "text": "Attempts to list all of the modules and submodules found within a given\n directory tree. This function searches the top-level of the directory\n tree for potential python modules and returns a list of candidate names.\n\n **Note:** This function returns a list of strings representing\n discovered module names, not the actual, loaded modules.\n\n :param directory: the directory to search for modules."}
{"_id": "q_14909", "text": "Attempts to list all of the classes within a specified module. This\n function works for modules located in the default path as well as\n extended paths via the sys.meta_path hooks.\n\n If a class filter is set, it will be called with each class as its\n parameter. This filter's return value must be interpretable as a\n boolean. Results that evaluate as True will include the type in the\n list of returned classes. Results that evaluate as False will exclude\n the type in the list of returned classes.\n\n :param mname: of the module to descend into\n :param cls_filter: a function to call to determine what classes should be\n included."}
{"_id": "q_14910", "text": "Strip out namespace data from an ElementTree.\n\n This function is recursive and will traverse all\n subnodes to the root element\n\n @param root: the root element\n\n @return: the same root element, minus namespace"}
{"_id": "q_14911", "text": "Load values from a dictionary structure. Nesting can be used to\n represent namespaces.\n\n >>> c = ConfigDict()\n >>> c.load_dict({'some': {'namespace': {'key': 'value'} } })\n {'some.namespace.key': 'value'}"}
{"_id": "q_14912", "text": "Extract and return oembed content for given urls.\n\n Required GET params:\n urls - list of urls to consume\n\n Optional GET params:\n width - maxwidth attribute for oembed content\n height - maxheight attribute for oembed content\n template_dir - template_dir to use when rendering oembed\n\n Returns:\n list of dictionaries with oembed metadata and renderings, json encoded"}
{"_id": "q_14913", "text": "A site profile detailing valid endpoints for a given domain. Allows for\n better auto-discovery of embeddable content.\n\n OEmbed-able content lives at a URL that maps to a provider."}
{"_id": "q_14914", "text": "scan path directory and any subdirectories for valid captain scripts"}
{"_id": "q_14915", "text": "Make the request params given location data"}
{"_id": "q_14916", "text": "Get the tax rate from the ZipTax response"}
{"_id": "q_14917", "text": "Ensure that a needed directory exists, creating it if it doesn't"}
{"_id": "q_14918", "text": "Registers a provider with the site."}
{"_id": "q_14919", "text": "Unregisters a provider from the site."}
{"_id": "q_14920", "text": "Populate the internal registry's dictionary with the regexes for each\n provider instance"}
{"_id": "q_14921", "text": "Find the right provider for a URL"}
{"_id": "q_14922", "text": "A hook for django-based oembed providers to delete any stored oembeds"}
{"_id": "q_14923", "text": "The heart of the matter"}
{"_id": "q_14924", "text": "Load up StoredProviders from url if it is an oembed scheme"}
{"_id": "q_14925", "text": "Iterate over the returned json and try to sort out any new providers"}
{"_id": "q_14926", "text": "Return the git hash as a string.\n\n Apparently someone got this from numpy's setup.py. It has since been\n modified a few times."}
{"_id": "q_14927", "text": "A kind of cheesy method that allows for callables or attributes to\n be used interchangably"}
{"_id": "q_14928", "text": "Build a dictionary of metadata for the requested object."}
{"_id": "q_14929", "text": "Parses the date from a url and uses it in the query. For objects which\n are unique for date."}
{"_id": "q_14930", "text": "Override the base."}
{"_id": "q_14931", "text": "Add the 909 OAI info to 035."}
{"_id": "q_14932", "text": "Check if we shall add cnum in 035."}
{"_id": "q_14933", "text": "Remove INSPIRE specific notes."}
{"_id": "q_14934", "text": "Remove dashes from ISBN."}
{"_id": "q_14935", "text": "Remove duplicate BibMatch DOIs."}
{"_id": "q_14936", "text": "041 Language."}
{"_id": "q_14937", "text": "Checks if files are not being uploaded to server.\n @timeout - time after which the script will register an error."}
{"_id": "q_14938", "text": "This is designed to work with prettified output from Beautiful Soup which indents with a single space.\n\n :param line: The line to split\n :param min_line_length: The minimum desired line length\n :param max_line_length: The maximum desired line length\n\n :return: A list of lines"}
{"_id": "q_14939", "text": "Converts capital letters to lower keeps first letter capital."}
{"_id": "q_14940", "text": "Scans a block of text and extracts oembed data on any urls,\n returning it in a list of dictionaries"}
{"_id": "q_14941", "text": "Try to maintain parity with what is extracted by extract since strip\n will most likely be used in conjunction with extract"}
{"_id": "q_14942", "text": "Automatically build the provider index."}
{"_id": "q_14943", "text": "Call this on an lxml.etree document to remove all namespaces"}
{"_id": "q_14944", "text": "Checks that the versions are consistent\n\n Parameters\n ----------\n desired_version: str\n optional; the version that all of these should match\n include_package: bool\n whether to check the special 'package' version for consistency\n (default False)\n strictness: str"}
{"_id": "q_14945", "text": "Merges a dictionary into the Rule object."}
{"_id": "q_14946", "text": "Creates a new instance of a rule by merging two dictionaries.\n\n This allows for independant configuration files to be merged\n into the defaults."}
{"_id": "q_14947", "text": "Add extra details to the message. Separate so that it can be overridden"}
{"_id": "q_14948", "text": "Emit a record.\n\n Format the record and send it to the specified addressees."}
{"_id": "q_14949", "text": "Reads a dom xml element in oaidc format and\n returns the bibrecord object"}
{"_id": "q_14950", "text": "display a progress that can update in place\n\n example -- \n total_length = 1000\n with echo.progress(total_length) as p:\n for x in range(total_length):\n # do something crazy\n p.update(x)\n\n length -- int -- the total size of what you will be updating progress on"}
{"_id": "q_14951", "text": "print format_msg to stderr"}
{"_id": "q_14952", "text": "prints a banner\n\n sep -- string -- the character that will be on the line on the top and bottom\n and before any of the lines, defaults to *\n count -- integer -- the line width, defaults to 80"}
{"_id": "q_14953", "text": "echo a prompt to the user and wait for an answer\n\n question -- string -- the prompt for the user\n choices -- list -- if given, only exit when prompt matches one of the choices\n return -- string -- the answer that was given by the user"}
{"_id": "q_14954", "text": "Log an attempt against key, incrementing the number of attempts for that key and potentially adding a lock to\n the lock table"}
{"_id": "q_14955", "text": "Adds an URL to the download queue.\n\n :param str url: URL to the music service track"}
{"_id": "q_14956", "text": "Add or update a key, value pair to the database"}
{"_id": "q_14957", "text": "Get the value of a given key"}
{"_id": "q_14958", "text": "Delete a given key or recursively delete the tree below it"}
{"_id": "q_14959", "text": "Set the thermostat mode\n\n :param mode: The desired mode integer value.\n Auto = 1\n Temporary hold = 2\n Permanent hold = 3"}
{"_id": "q_14960", "text": "Set the target temperature to the desired fahrenheit, with more granular control of the\n hold mode\n\n :param fahrenheit: The desired temperature in F\n :param mode: The desired mode to operate in"}
{"_id": "q_14961", "text": "Set the target temperature to the desired celsius, with more granular control of the hold\n mode\n\n :param celsius: The desired temperature in C\n :param mode: The desired mode to operate in"}
{"_id": "q_14962", "text": "Add a number of months to a timestamp"}
{"_id": "q_14963", "text": "Add a number of months to a date"}
{"_id": "q_14964", "text": "Is this the christmas period?"}
{"_id": "q_14965", "text": "Make a request to the NuHeat API\n\n :param url: The URL to request\n :param method: The type of request to make (GET, POST)\n :param data: Data to be sent along with POST requests\n :param params: Querystring parameters\n :param retry: Attempt to re-authenticate and retry request if necessary"}
{"_id": "q_14966", "text": "Sets the current music service to service_name.\n\n :param str service_name: Name of the music service\n :param str api_key: Optional API key if necessary"}
{"_id": "q_14967", "text": "Sets the current storage service to service_name and runs the connect method on the service.\n\n :param str service_name: Name of the storage service\n :param str custom_path: Custom path where to download tracks for local storage (optional, and must already exist, use absolute paths only)"}
{"_id": "q_14968", "text": "Read dataset from csv."}
{"_id": "q_14969", "text": "Reads dataset from json."}
{"_id": "q_14970", "text": "Reads dataset to csv.\n\n :param X: dataset as list of dict.\n :param y: labels."}
{"_id": "q_14971", "text": "Return representation of html start tag and attributes."}
{"_id": "q_14972", "text": "Provide signifance for features in dataset with anova using multiple hypostesis testing\n\n :param X: List of dict with key as feature names and values as features\n :param y: Labels\n :param threshold: Low-variens threshold to eliminate low varience features\n :param correcting_multiple_hypotesis: corrects p-val with multiple hypotesis testing\n :param method: method of multiple hypotesis testing\n :param alpha: alpha of multiple hypotesis testing\n :param sort_by: sorts output dataframe by pval or F\n :return: DataFrame with F and pval for each feature with their average values"}
{"_id": "q_14973", "text": "return True if callback is a vanilla plain jane function"}
{"_id": "q_14974", "text": "Restore the data dict - update the flask session and this object"}
{"_id": "q_14975", "text": "create string suitable for HTTP User-Agent header"}
{"_id": "q_14976", "text": "Given a document, return XML prettified."}
{"_id": "q_14977", "text": "Transform & and < to XML valid &amp; and &lt.\n\n Pass a list of tags as string to enable replacement of\n '<' globally but keep any XML tags in the list."}
{"_id": "q_14978", "text": "Properly format arXiv IDs."}
{"_id": "q_14979", "text": "Add correct nations field according to mapping in NATIONS_DEFAULT_MAP."}
{"_id": "q_14980", "text": "Try to capitalize properly a title string."}
{"_id": "q_14981", "text": "Convert some HTML tags to latex equivalents."}
{"_id": "q_14982", "text": "Download URL to a file."}
{"_id": "q_14983", "text": "Create a logger object."}
{"_id": "q_14984", "text": "Convert a date-value to the ISO date standard for humans."}
{"_id": "q_14985", "text": "Return True if license is compatible with Open Access"}
{"_id": "q_14986", "text": "Information about the current volume, issue, etc. is available\n in a file called issue.xml that is available in a higher directory."}
{"_id": "q_14987", "text": "Return the best effort start_date."}
{"_id": "q_14988", "text": "Extract oembed resources from a block of text. Returns a list\n of dictionaries.\n\n Max width & height can be specified:\n {% for embed in block_of_text|extract_oembeds:\"400x300\" %}\n\n Resource type can be specified:\n {% for photo_embed in block_of_text|extract_oembeds:\"photo\" %}\n\n Or both:\n {% for embed in block_of_text|extract_oembeds:\"400x300xphoto\" %}"}
{"_id": "q_14989", "text": "A node which parses everything between its two nodes, and replaces any links\n with OEmbed-provided objects, if possible.\n\n Supports two optional argument, which is the maximum width and height,\n specified like so:\n\n {% oembed 640x480 %}http://www.viddler.com/explore/SYSTM/videos/49/{% endoembed %}\n\n and or the name of a sub tempalte directory to render templates from:\n\n {% oembed 320x240 in \"comments\" %}http://www.viddler.com/explore/SYSTM/videos/49/{% endoembed %}\n\n or:\n\n {% oembed in \"comments\" %}http://www.viddler.com/explore/SYSTM/videos/49/{% endoembed %}\n\n either of those will render templates in oembed/comments/oembedtype.html\n\n Additionally, you can specify a context variable to drop the rendered text in:\n\n {% oembed 600x400 in \"comments\" as var_name %}...{% endoembed %}\n {% oembed as var_name %}...{% endoembed %}"}
{"_id": "q_14990", "text": "Generates a &lt;link&gt; tag with oembed autodiscovery bits for an object.\n\n {% oembed_autodiscover video %}"}
{"_id": "q_14991", "text": "Generates a &lt;link&gt; tag with oembed autodiscovery bits.\n\n {% oembed_url_scheme %}"}
{"_id": "q_14992", "text": "load the module so we can actually run the script's function"}
{"_id": "q_14993", "text": "get the contents of the script"}
{"_id": "q_14994", "text": "return that path to be able to call this script from the passed in\n basename\n\n example -- \n basepath = /foo/bar\n self.path = /foo/bar/che/baz.py\n self.call_path(basepath) # che/baz.py\n\n basepath -- string -- the directory you would be calling this script in\n return -- string -- the minimum path that you could use to execute this script\n in basepath"}
{"_id": "q_14995", "text": "load the script and set the parser and argument info\n\n I feel that this is way too brittle to be used long term, I think it just\n might be best to import the stupid module, the thing I don't like about that\n is then we import basically everything, which seems bad?"}
{"_id": "q_14996", "text": "return True if this script can be run from the command line"}
{"_id": "q_14997", "text": "Handles registering the fields with the FieldRegistry and creating a \n post-save signal for the model."}
{"_id": "q_14998", "text": "I need a way to ensure that this signal gets created for all child\n models, and since model inheritance doesn't have a 'contrubite_to_class'\n style hook, I am creating a fake virtual field which will be added to\n all subclasses and handles creating the signal"}
{"_id": "q_14999", "text": "Recusively merge the 2 dicts.\n\n Destructive on argument 'a'."}
{"_id": "q_15000", "text": "Fetch response headers and data from a URL, raising a generic exception\n for any kind of failure."}
{"_id": "q_15001", "text": "Given a url which may or may not be a relative url, convert it to a full\n url path given another full url as an example"}
{"_id": "q_15002", "text": "Generate a fake request object to allow oEmbeds to use context processors."}
{"_id": "q_15003", "text": "dynamically load a class given a string of the format\n \n package.Class"}
{"_id": "q_15004", "text": "A decorator for a function to dispatch on.\n\n The value returned by the dispatch function is used to look up the\n implementation function based on its dispatch key.\n\n The dispatch function is available using the `dispatch_fn` function."}
{"_id": "q_15005", "text": "Override the base get_record."}
{"_id": "q_15006", "text": "653 Free Keywords."}
{"_id": "q_15007", "text": "710 Collaboration."}
{"_id": "q_15008", "text": "Auto-discover INSTALLED_APPS registered_blocks.py modules and fail\n silently when not present. This forces an import on them thereby\n registering their blocks.\n\n This is a near 1-to-1 copy of how django's admin application registers\n models."}
{"_id": "q_15009", "text": "Verifies a block prior to registration."}
{"_id": "q_15010", "text": "Return a field created with the provided elements.\n\n Global position is set arbitrary to -1."}
{"_id": "q_15011", "text": "Create a list of records from the marcxml description.\n\n :returns: a list of objects initiated by the function create_record().\n Please see that function's docstring."}
{"_id": "q_15012", "text": "Create a record object from the marcxml description.\n\n Uses the lxml parser.\n\n The returned object is a tuple (record, status_code, list_of_errors),\n where status_code is 0 when there are errors, 1 when no errors.\n\n The return record structure is as follows::\n\n Record := {tag : [Field]}\n Field := (Subfields, ind1, ind2, value)\n Subfields := [(code, value)]\n\n .. code-block:: none\n\n .--------.\n | record |\n '---+----'\n |\n .------------------------+------------------------------------.\n |record['001'] |record['909'] |record['520'] |\n | | | |\n [list of fields] [list of fields] [list of fields] ...\n | | |\n | .--------+--+-----------. |\n | | | | |\n |[0] |[0] |[1] ... |[0]\n .----+------. .-----+-----. .--+--------. .---+-------.\n | Field 001 | | Field 909 | | Field 909 | | Field 520 |\n '-----------' '-----+-----' '--+--------' '---+-------'\n | | | |\n ... | ... ...\n |\n .----------+-+--------+------------.\n | | | |\n |[0] |[1] |[2] |\n [list of subfields] 'C' '4' ...\n |\n .----+---------------+------------------------+\n | | |\n ('a', 'value') | ('a', 'value for another a')\n ('b', 'value for subfield b')\n\n :param marcxml: an XML string representation of the record to create\n :param verbose: the level of verbosity: 0 (silent), 1-2 (warnings),\n 3(strict:stop when errors)\n :param correct: 1 to enable correction of marcxml syntax. Else 0.\n :return: a tuple (record, status_code, list_of_errors), where status\n code is 0 where there are errors, 1 when no errors"}
{"_id": "q_15013", "text": "Filter the given field.\n\n Filters given field and returns only that field instances that contain\n filter_subcode with given filter_value. As an input for search function\n accepts output from record_get_field_instances function. Function can be\n run in three modes:\n\n - 'e' - looking for exact match in subfield value\n - 's' - looking for substring in subfield value\n - 'r' - looking for regular expression in subfield value\n\n Example:\n\n record_filter_field(record_get_field_instances(rec, '999', '%', '%'),\n 'y', '2001')\n\n In this case filter_subcode is 'y' and filter_value is '2001'.\n\n :param field_instances: output from record_get_field_instances\n :param filter_subcode: name of the subfield\n :type filter_subcode: string\n :param filter_value: value of the subfield\n :type filter_value: string\n :param filter_mode: 'e','s' or 'r'"}
{"_id": "q_15014", "text": "Return True if rec1 is identical to rec2.\n\n It does so regardless of a difference in the 005 tag (i.e. the timestamp)."}
{"_id": "q_15015", "text": "Return the list of field instances for the specified tag and indications.\n\n Return empty list if not found.\n If tag is empty string, returns all fields\n\n Parameters (tag, ind1, ind2) can contain wildcard %.\n\n :param rec: a record structure as returned by create_record()\n :param tag: a 3 characters long string\n :param ind1: a 1 character long string\n :param ind2: a 1 character long string\n :param code: a 1 character long string\n :return: a list of field tuples (Subfields, ind1, ind2, value,\n field_position_global) where subfields is list of (code, value)"}
{"_id": "q_15016", "text": "Delete the field with the given position.\n\n If global field position is specified, deletes the field with the\n corresponding global field position.\n If field_position_local is specified, deletes the field with the\n corresponding local field position and tag.\n Else deletes all the fields matching tag and optionally ind1 and\n ind2.\n\n If both field_position_global and field_position_local are present,\n then field_position_local takes precedence.\n\n :param rec: the record data structure\n :param tag: the tag of the field to be deleted\n :param ind1: the first indicator of the field to be deleted\n :param ind2: the second indicator of the field to be deleted\n :param field_position_global: the global field position (record wise)\n :param field_position_local: the local field position (tag wise)\n :return: the list of deleted fields"}
{"_id": "q_15017", "text": "Add the fields into the record at the required position.\n\n The position is specified by the tag and the field_position_local in the\n list of fields.\n\n :param rec: a record structure\n :param tag: the tag of the fields to be moved\n :param field_position_local: the field_position_local to which the field\n will be inserted. If not specified, appends\n the fields to the tag.\n :param a: list of fields to be added\n :return: -1 if the operation failed, or the field_position_local if it was\n successful"}
{"_id": "q_15018", "text": "Move some fields to the position specified by 'field_position_local'.\n\n :param rec: a record structure as returned by create_record()\n :param tag: the tag of the fields to be moved\n :param field_positions_local: the positions of the fields to move\n :param field_position_local: insert the field before that\n field_position_local. If unspecified, appends\n the fields\n :return: the field_position_local if the operation was successful"}
{"_id": "q_15019", "text": "Delete all subfields with subfield_code in the record."}
{"_id": "q_15020", "text": "Return the matching field.\n\n One has to enter either a global field position or a local field position.\n\n :return: a list of subfield tuples (subfield code, value).\n :rtype: list"}
{"_id": "q_15021", "text": "Replace a field with a new field."}
{"_id": "q_15022", "text": "Return the subfield of the matching field.\n\n One has to enter either a global field position or a local field position.\n\n :return: a list of subfield tuples (subfield code, value).\n :rtype: list"}
{"_id": "q_15023", "text": "Generate the XML for record 'rec'.\n\n :param rec: record\n :param tags: list of tags to be printed\n :return: string"}
{"_id": "q_15024", "text": "Generate the XML for field 'field' and returns it as a string."}
{"_id": "q_15025", "text": "Print a list of records.\n\n :param format: 1 XML, 2 HTML (not implemented)\n :param tags: list of tags to be printed\n if 'listofrec' is not a list it returns empty string"}
{"_id": "q_15026", "text": "Return the global and local positions of the first occurrence of the field.\n\n :param rec: A record dictionary structure\n :type rec: dictionary\n :param tag: The tag of the field to search for\n :type tag: string\n :param field: A field tuple as returned by create_field()\n :type field: tuple\n :param strict: A boolean describing the search method. If strict\n is False, then the order of the subfields doesn't\n matter. Default search method is strict.\n :type strict: boolean\n :return: A tuple of (global_position, local_position) or a\n tuple (None, None) if the field is not present.\n :rtype: tuple\n :raise InvenioBibRecordFieldError: If the provided field is invalid."}
{"_id": "q_15027", "text": "Find subfield instances in a particular field.\n\n It tests values in 1 of 3 possible ways:\n - Does a subfield code exist? (ie does 773__a exist?)\n - Does a subfield have a particular value? (ie 773__a == 'PhysX')\n - Do a pair of subfields have particular values?\n (ie 035__2 == 'CDS' and 035__a == '123456')\n\n Parameters:\n * rec - dictionary: a bibrecord structure\n * tag - string: the tag of the field (ie '773')\n * ind1, ind2 - char: a single characters for the MARC indicators\n * sub_key - char: subfield key to find\n * sub_value - string: subfield value of that key\n * sub_key2 - char: key of subfield to compare against\n * sub_value2 - string: expected value of second subfield\n * case_sensitive - bool: be case sensitive when matching values\n\n :return: false if no match found, else provides the field position (int)"}
{"_id": "q_15028", "text": "Remove unchanged volatile subfields from the record."}
{"_id": "q_15029", "text": "Turns all subfields to volatile"}
{"_id": "q_15030", "text": "Remove all non-empty controlfields from the record.\n\n :param rec: A record dictionary structure\n :type rec: dictionary"}
{"_id": "q_15031", "text": "Order subfields from a record alphabetically based on subfield code.\n\n If 'tag' is not None, only a specific tag of the record will be reordered,\n otherwise the whole record.\n\n :param rec: bibrecord\n :type rec: bibrec\n :param tag: tag where the subfields will be ordered\n :type tag: str"}
{"_id": "q_15032", "text": "Shift all global field positions.\n\n Shift all global field positions with global field positions\n higher or equal to 'start' by the value 'delta'."}
{"_id": "q_15033", "text": "Return true if MARC 'tag' matches a 'pattern'.\n\n 'pattern' is plain text, with % as wildcard\n\n Both parameters must be 3 characters long strings.\n\n .. doctest::\n\n >>> _tag_matches_pattern(\"909\", \"909\")\n True\n >>> _tag_matches_pattern(\"909\", \"9%9\")\n True\n >>> _tag_matches_pattern(\"909\", \"9%8\")\n False\n\n :param tag: a 3 characters long string\n :param pattern: a 3 characters long string\n :return: False or True"}
{"_id": "q_15034", "text": "Check if the global field positions in the record are valid.\n\n I.e., no duplicate global field positions and local field positions in the\n list of fields are ascending.\n\n :param record: the record data structure\n :return: the first error found as a string or None if no error was found"}
{"_id": "q_15035", "text": "Sort a set of fields by their indicators.\n\n Return a sorted list with correct global field positions."}
{"_id": "q_15036", "text": "Retrieve all children from node 'node' with name 'name'."}
{"_id": "q_15037", "text": "Iterate through all the children of a node.\n\n Returns one string containing the values from all the text-nodes\n recursively."}
{"_id": "q_15038", "text": "Check and correct the structure of the record.\n\n :param record: the record data structure\n :return: a list of errors found"}
{"_id": "q_15039", "text": "Clean MARCXML harvested from OAI.\n\n Allows the xml to be used with BibUpload or BibRecord.\n\n :param xml: either XML as a string or path to an XML file\n\n :return: ElementTree of clean data"}
{"_id": "q_15040", "text": "Generate the record deletion if deleted from OAI-PMH."}
{"_id": "q_15041", "text": "Converts the file associated with the file_name passed into a MP3 file.\n\n :param str file_name: Filename of the original file in local storage\n :param Queue delete_queue: Delete queue to add the original file to after conversion is done\n :return str: Filename of the new file in local storage"}
{"_id": "q_15042", "text": "Return a session for yesss.at."}
{"_id": "q_15043", "text": "Check for working login data."}
{"_id": "q_15044", "text": "Send an SMS."}
{"_id": "q_15045", "text": "Return the date of the article in file."}
{"_id": "q_15046", "text": "Return this article's collection."}
{"_id": "q_15047", "text": "Attach fulltext FFT."}
{"_id": "q_15048", "text": "Yield single conversion objects from a MARCXML file or string.\n\n >>> from harvestingkit.inspire_cds_package import Inspire2CDS\n >>> for record in Inspire2CDS.from_source(\"inspire.xml\"):\n >>> xml = record.convert()"}
{"_id": "q_15049", "text": "Load configuration from config.\n\n Meant to run only once per system process as\n class variable in subclasses."}
{"_id": "q_15050", "text": "Clear any fields listed in field_list."}
{"_id": "q_15051", "text": "Add 035 number from 001 recid with given source."}
{"_id": "q_15052", "text": "650 Translate Categories."}
{"_id": "q_15053", "text": "Determine whether the desired version is a reasonable next version.\n\n Parameters\n ----------\n desired_version: str\n the proposed next version name"}
{"_id": "q_15054", "text": "Connects and logs in to the server."}
{"_id": "q_15055", "text": "Downloads a file from the FTP server to a target folder.\n\n :param source_file: the absolute path for the file on the server\n it can be one of the files coming from\n FtpHandler.dir().\n :type source_file: string\n :param target_folder: relative or absolute path of the\n destination folder; default is the\n working directory.\n :type target_folder: string"}
{"_id": "q_15056", "text": "Changes the working directory on the server.\n\n :param folder: the desired directory.\n :type folder: string"}
{"_id": "q_15057", "text": "Creates a folder in the server\n\n :param folder: the folder to be created.\n :type folder: string"}
{"_id": "q_15058", "text": "Delete a file from the server.\n\n :param filename: the file to be deleted.\n :type filename: string"}
{"_id": "q_15059", "text": "Delete a folder from the server.\n\n :param foldername: the folder to be deleted.\n :type foldername: string"}
{"_id": "q_15060", "text": "Add a mail to the queue to be sent.\n\n WARNING: Commits by default!\n\n :param to_addresses: The names and addresses to send the email to, i.e. \"Steve<steve@fig14.com>, info@fig14.com\"\n :param from_address: Who the email is from i.e. \"Stephen Brown <s@fig14.com>\"\n :param subject: The email subject\n :param body: The html / text body of the email\n :param commit: Whether to commit to the database\n :param html: Is this a html email?\n :param session: The sqlalchemy session or None to use db.session"}
{"_id": "q_15061", "text": "Parse an HTTP accept-like header.\n\n :param str header_value: the header value to parse\n :return: a :class:`list` of :class:`.ContentType` instances\n in decreasing quality order. Each instance is augmented\n with the associated quality as a ``float`` property\n named ``quality``.\n\n ``Accept`` is a class of headers that contain a list of values\n and an associated preference value. The ever present `Accept`_\n header is a perfect example. It is a list of content types and\n an optional parameter named ``q`` that indicates the relative\n weight of a particular type. The most basic example is::\n\n Accept: audio/*;q=0.2, audio/basic\n\n Which states that I prefer the ``audio/basic`` content type\n but will accept other ``audio`` sub-types with an 80% mark down.\n\n .. _Accept: http://tools.ietf.org/html/rfc7231#section-5.3.2"}
{"_id": "q_15062", "text": "Parse a `Cache-Control`_ header, returning a dictionary of key-value pairs.\n\n Any of the ``Cache-Control`` parameters that do not have directives, such\n as ``public`` or ``no-cache`` will be returned with a value of ``True``\n if they are set in the header.\n\n :param str header_value: ``Cache-Control`` header value to parse\n :return: the parsed ``Cache-Control`` header values\n :rtype: dict\n\n .. _Cache-Control: https://tools.ietf.org/html/rfc7234#section-5.2"}
{"_id": "q_15063", "text": "Parse RFC7239 Forwarded header.\n\n :param str header_value: value to parse\n :keyword bool only_standard_parameters: if this keyword is specified\n and given a *truthy* value, then a non-standard parameter name\n will result in :exc:`~ietfparse.errors.StrictHeaderParsingFailure`\n :return: an ordered :class:`list` of :class:`dict` instances\n :raises: :exc:`ietfparse.errors.StrictHeaderParsingFailure` is\n raised if `only_standard_parameters` is enabled and a non-standard\n parameter name is encountered\n\n This function parses a :rfc:`7239` HTTP header into a :class:`list`\n of :class:`dict` instances with each instance containing the param\n values. The list is ordered as received from left to right and the\n parameter names are folded to lower case strings."}
{"_id": "q_15064", "text": "Parse a comma-separated list header.\n\n :param str value: header value to split into elements\n :return: list of header elements as strings"}
{"_id": "q_15065", "text": "Parse a named parameter list in the \"common\" format.\n\n :param parameter_list: sequence of string values to parse\n :keyword bool normalize_parameter_names: if specified and *truthy*\n then parameter names will be case-folded to lower case\n :keyword bool normalize_parameter_values: if omitted or specified\n as *truthy*, then parameter values are case-folded to lower case\n :keyword bool normalized_parameter_values: alternate way to spell\n ``normalize_parameter_values`` -- this one is deprecated\n :return: a sequence containing the name to value pairs\n\n The parsed values are normalized according to the keyword parameters\n and returned as :class:`tuple` of name to value pairs preserving the\n ordering from `parameter_list`. The values will have quotes removed\n if they were present."}
{"_id": "q_15066", "text": "Add a new value to the list.\n\n :param str name: name of the value that is being parsed\n :param str value: value that is being parsed\n :raises ietfparse.errors.MalformedLinkValue:\n if *strict mode* is enabled and a validation error\n is detected\n\n This method implements most of the validation mentioned in\n sections 5.3 and 5.4 of :rfc:`5988`. The ``_rfc_values``\n dictionary contains the appropriate values for the attributes\n that get special handling. If *strict mode* is enabled, then\n only values that are acceptable will be added to ``_values``."}
{"_id": "q_15067", "text": "Parses a block of text indiscriminately"}
{"_id": "q_15068", "text": "Parses a block of text rendering links that occur on their own line\n normally but rendering inline links using a special template dir"}
{"_id": "q_15069", "text": "Creates connection to the Google Drive API, sets the connection attribute to make requests, and creates the Music folder if it doesn't exist."}
{"_id": "q_15070", "text": "Uploads the file associated with the file_name passed to Google Drive in the Music folder.\n\n :param str file_name: Filename of the file to be uploaded\n :return str: Original filename passed as an argument (in order for the worker to send it to the delete queue)"}
{"_id": "q_15071", "text": "Writes the params to file that skytool_Free needs to generate the sky radiance distribution."}
{"_id": "q_15072", "text": "Read the phytoplankton absorption file from a csv formatted file\n\n :param file_name: filename and path of the csv file"}
{"_id": "q_15073", "text": "Read the pure water absorption from a csv formatted file\n\n :param file_name: filename and path of the csv file"}
{"_id": "q_15074", "text": "Read the pure water scattering from a csv formatted file\n\n :param file_name: filename and path of the csv file"}
{"_id": "q_15075", "text": "Calculates the total scattering from back-scattering\n\n :param scattering_fraction: the fraction of back-scattering to total scattering default = 0.01833\n\n b = ( bb[sea water] + bb[p] ) /0.01833"}
{"_id": "q_15076", "text": "Calculates the total absorption from water, phytoplankton and CDOM\n\n a = awater + acdom + aphi"}
{"_id": "q_15077", "text": "Calculates the total attenuation from the total absorption and total scattering\n\n c = a + b"}
{"_id": "q_15078", "text": "Meta method that calls all of the build methods in the correct order\n\n self.build_a()\n self.build_bb()\n self.build_b()\n self.build_c()"}
{"_id": "q_15079", "text": "Takes lists for parameters and saves them as class properties\n\n :param saa: <list> Sun Azimuth Angle (deg)\n :param sza: <list> Sun Zenith Angle (deg)\n :param p: <list> Phytoplankton linear scaling factor\n :param x: <list> Scattering scaling factor\n :param y: <list> Scattering slope factor\n :param g: <list> CDOM absorption scaling factor\n :param s: <list> CDOM absorption slope factor\n :param z: <list> depth (m)"}
{"_id": "q_15080", "text": "Loads a text file to a python dictionary using '=' as the delimiter\n\n :param file_name: the name and path of the text file"}
{"_id": "q_15081", "text": "Pull comma separated string values out of a text file and converts them to float list"}
{"_id": "q_15082", "text": "Reads in a PlanarRad generated report\n\n Saves the single line reported parameters as a python dictionary\n\n :param filename: The name and path of the PlanarRad generated file\n :returns self.data_dictionary: python dictionary with the key and values from the report"}
{"_id": "q_15083", "text": "Do the legwork of logging into the Midas Server instance, storing the API\n key and token.\n\n :param email: (optional) Email address to login with. If not set, the\n console will be prompted.\n :type email: None | string\n :param password: (optional) User password to login with. If not set and no\n 'api_key' is set, the console will be prompted.\n :type password: None | string\n :param api_key: (optional) API key to login with. If not set, password\n login will be used.\n :type api_key: None | string\n :param application: (optional) Application name to be used with 'api_key'.\n :type application: string\n :param url: (optional) URL address of the Midas Server instance to login\n to. If not set, the console will be prompted.\n :type url: None | string\n :param verify_ssl_certificate: (optional) If True, the SSL certificate will\n be verified\n :type verify_ssl_certificate: bool\n :returns: API token.\n :rtype: string"}
{"_id": "q_15084", "text": "Renew or get a token to use for transactions with the Midas Server\n instance.\n\n :returns: API token.\n :rtype: string"}
{"_id": "q_15085", "text": "Create a folder from the local file in the midas folder corresponding to\n the parent folder id.\n\n :param local_folder: full path to a directory on the local file system\n :type local_folder: string\n :param parent_folder_id: id of parent folder on the Midas Server instance,\n where the folder will be added\n :type parent_folder_id: int | long\n :param reuse_existing: (optional) whether to accept an existing folder of\n the same name in the same location, or create a new one instead\n :type reuse_existing: bool"}
{"_id": "q_15086", "text": "Create and return a hex checksum using the MD5 sum of the passed in file.\n This will stream the file, rather than load it all into memory.\n\n :param file_path: full path to the file\n :type file_path: string\n :returns: a hex checksum\n :rtype: string"}
{"_id": "q_15087", "text": "Create a bitstream in the given item.\n\n :param file_path: full path to the local file\n :type file_path: string\n :param local_file: name of the local file\n :type local_file: string\n :param log_ind: (optional) any additional message to log upon creation of\n the bitstream\n :type log_ind: None | string"}
{"_id": "q_15088", "text": "Function for creating a remote folder and returning the id. This should be\n a building block for user-level functions.\n\n :param local_folder: full path to a local folder\n :type local_folder: string\n :param parent_folder_id: id of parent folder on the Midas Server instance,\n where the new folder will be added\n :type parent_folder_id: int | long\n :returns: id of the remote folder that was created\n :rtype: int | long"}
{"_id": "q_15089", "text": "Return whether a folder contains only files. This will be False if the\n folder contains any subdirectories.\n\n :param local_folder: full path to the local folder\n :type local_folder: string\n :returns: True if the folder contains only files\n :rtype: bool"}
{"_id": "q_15090", "text": "Find an item or folder matching the name. A folder will be found first if\n both are present.\n\n :param name: The name of the resource\n :type name: string\n :param folder_id: The folder to search within\n :type folder_id: int | long\n :returns: A tuple indicating whether the resource is an item and the id of\n said resource. i.e. (True, item_id) or (False, folder_id). Note that in\n the event that we do not find a result return (False, -1)\n :rtype: (bool, int | long)"}
{"_id": "q_15091", "text": "Download the requested item to the specified path.\n\n :param item_id: The id of the item to be downloaded\n :type item_id: int | long\n :param path: (optional) the location to download the item\n :type path: string\n :param item: The dict of item info\n :type item: dict | None"}
{"_id": "q_15092", "text": "Recursively download a file or item from the Midas Server instance.\n\n :param server_path: The location on the server to find the resource to\n download\n :type server_path: string\n :param local_path: The location on the client to store the downloaded data\n :type local_path: string"}
{"_id": "q_15093", "text": "Login and get a token. If you do not specify a specific application,\n 'Default' will be used.\n\n :param email: Email address of the user\n :type email: string\n :param api_key: API key assigned to the user\n :type api_key: string\n :param application: (optional) Application designated for this API key\n :type application: string\n :returns: Token to be used for interaction with the API until\n expiration\n :rtype: string"}
{"_id": "q_15094", "text": "List the folders in the users home area.\n\n :param token: A valid token for the user in question.\n :type token: string\n :returns: List of dictionaries containing folder information.\n :rtype: list[dict]"}
{"_id": "q_15095", "text": "Get a user by the email of that user.\n\n :param email: The email of the desired user.\n :type email: string\n :returns: The user requested.\n :rtype: dict"}
{"_id": "q_15096", "text": "Create a new community or update an existing one using the uuid.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param name: The community name.\n :type name: string\n :param description: (optional) The community description.\n :type description: string\n :param uuid: (optional) uuid of the community. If none is passed, will\n generate one.\n :type uuid: string\n :param privacy: (optional) Default 'Public', possible values\n [Public|Private].\n :type privacy: string\n :param can_join: (optional) Default 'Everyone', possible values\n [Everyone|Invitation].\n :type can_join: string\n :returns: The community dao that was created.\n :rtype: dict"}
{"_id": "q_15097", "text": "Get a community based on its id.\n\n :param community_id: The id of the target community.\n :type community_id: int | long\n :param token: (optional) A valid token for the user in question.\n :type token: None | string\n :returns: The requested community.\n :rtype: dict"}
{"_id": "q_15098", "text": "Get the non-recursive children of the passed in community_id.\n\n :param community_id: The id of the requested community.\n :type community_id: int | long\n :param token: (optional) A valid token for the user in question.\n :type token: None | string\n :returns: List of the folders in the community.\n :rtype: dict[string, list]"}
{"_id": "q_15099", "text": "List all communities visible to a user.\n\n :param token: (optional) A valid token for the user in question.\n :type token: None | string\n :returns: The list of communities.\n :rtype: list[dict]"}
{"_id": "q_15100", "text": "Get the attributes of the specified folder.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param folder_id: The id of the requested folder.\n :type folder_id: int | long\n :returns: Dictionary of the folder attributes.\n :rtype: dict"}
{"_id": "q_15101", "text": "Get the non-recursive children of the passed in folder_id.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param folder_id: The id of the requested folder.\n :type folder_id: int | long\n :returns: Dictionary of two lists: 'folders' and 'items'.\n :rtype: dict[string, list]"}
{"_id": "q_15102", "text": "Move a folder to the destination folder.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param folder_id: The id of the folder to be moved.\n :type folder_id: int | long\n :param dest_folder_id: The id of destination (new parent) folder.\n :type dest_folder_id: int | long\n :returns: Dictionary containing the details of the moved folder.\n :rtype: dict"}
{"_id": "q_15103", "text": "Get the attributes of the specified item.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param item_id: The id of the requested item.\n :type item_id: int | string\n :returns: Dictionary of the item attributes.\n :rtype: dict"}
{"_id": "q_15104", "text": "Delete the item with the passed in item_id.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param item_id: The id of the item to be deleted.\n :type item_id: int | long\n :returns: None.\n :rtype: None"}
{"_id": "q_15105", "text": "Get the metadata associated with an item.\n\n :param item_id: The id of the item for which metadata will be returned\n :type item_id: int | long\n :param token: (optional) A valid token for the user in question.\n :type token: None | string\n :param revision: (optional) Revision of the item. Defaults to latest\n revision.\n :type revision: int | long\n :returns: List of dictionaries containing item metadata.\n :rtype: list[dict]"}
{"_id": "q_15106", "text": "Share an item to the destination folder.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param item_id: The id of the item to be shared.\n :type item_id: int | long\n :param dest_folder_id: The id of destination folder where the item is\n shared to.\n :type dest_folder_id: int | long\n :returns: Dictionary containing the details of the shared item.\n :rtype: dict"}
{"_id": "q_15107", "text": "Move an item from the source folder to the destination folder.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param item_id: The id of the item to be moved\n :type item_id: int | long\n :param src_folder_id: The id of source folder where the item is located\n :type src_folder_id: int | long\n :param dest_folder_id: The id of destination folder where the item is\n moved to\n :type dest_folder_id: int | long\n :returns: Dictionary containing the details of the moved item\n :rtype: dict"}
{"_id": "q_15108", "text": "Create a link bitstream.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param folder_id: The id of the folder in which to create a new item\n that will contain the link. The new item will have the same name as\n the URL unless an item name is supplied.\n :type folder_id: int | long\n :param url: The URL of the link you will create, will be used as the\n name of the bitstream and of the item unless an item name is\n supplied.\n :type url: string\n :param item_name: (optional) The name of the newly created item, if\n not supplied, the item will have the same name as the URL.\n :type item_name: string\n :param length: (optional) The length in bytes of the file to which the\n link points.\n :type length: int | long\n :param checksum: (optional) The MD5 checksum of the file to which the\n link points.\n :type checksum: string\n :returns: The item information of the item created.\n :rtype: dict"}
{"_id": "q_15109", "text": "Generate a token to use for upload.\n\n Midas Server uses an individual token for each upload. The token\n corresponds to the file specified and that file only. Passing the MD5\n checksum allows the server to determine if the file is already in the\n asset store.\n\n If :param:`checksum` is passed and the token returned is blank, the\n server already has this file and there is no need to follow this\n call with a call to `perform_upload`, as the passed in file will have\n been added as a bitstream to the item's latest revision, creating a\n new revision if one doesn't exist.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param item_id: The id of the item in which to upload the file as a\n bitstream.\n :type item_id: int | long\n :param filename: The name of the file to generate the upload token for.\n :type filename: string\n :param checksum: (optional) The checksum of the file to upload.\n :type checksum: None | string\n :returns: String of the upload token.\n :rtype: string"}
{"_id": "q_15110", "text": "Upload a file into a given item (or just to the public folder if the\n item is not specified).\n\n :param upload_token: The upload token (returned by\n generate_upload_token)\n :type upload_token: string\n :param filename: The upload filename. Also used as the path to the\n file, if 'filepath' is not set.\n :type filename: string\n :param mode: (optional) Stream or multipart. Default is stream.\n :type mode: string\n :param folder_id: (optional) The id of the folder to upload into.\n :type folder_id: int | long\n :param item_id: (optional) If set, will append item ``bitstreams`` to\n the latest revision (or the one set using :param:`revision` ) of\n the existing item.\n :type item_id: int | long\n :param revision: (optional) If set, will add a new file into an\n existing revision. Set this to 'head' to add to the most recent\n revision.\n :type revision: string | int | long\n :param filepath: (optional) The path to the file.\n :type filepath: string\n :param create_additional_revision: (optional) If set, will create a\n new revision in the existing item.\n :type create_additional_revision: bool\n :returns: Dictionary containing the details of the item created or\n changed.\n :rtype: dict"}
{"_id": "q_15111", "text": "Add a Condor DAG to the given Batchmake task.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param batchmaketaskid: id of the Batchmake task for this DAG\n :type batchmaketaskid: int | long\n :param dagfilename: Filename of the DAG file\n :type dagfilename: string\n :param dagmanoutfilename: Filename of the DAG processing output\n :type dagmanoutfilename: string\n :returns: The created Condor DAG DAO\n :rtype: dict"}
{"_id": "q_15112", "text": "Add a Condor DAG job to the Condor DAG associated with this\n Batchmake task\n\n :param token: A valid token for the user in question.\n :type token: string\n :param batchmaketaskid: id of the Batchmake task for this DAG\n :type batchmaketaskid: int | long\n :param jobdefinitionfilename: Filename of the definition file for the\n job\n :type jobdefinitionfilename: string\n :param outputfilename: Filename of the output file for the job\n :type outputfilename: string\n :param errorfilename: Filename of the error file for the job\n :type errorfilename: string\n :param logfilename: Filename of the log file for the job\n :type logfilename: string\n :param postfilename: Filename of the post script log file for the job\n :type postfilename: string\n :return: The created Condor job DAO.\n :rtype: dict"}
{"_id": "q_15113", "text": "Log in to get the real token using the temporary token and otp.\n\n :param temp_token: The temporary token or id returned from normal login\n :type temp_token: string\n :param one_time_pass: The one-time pass to be sent to the underlying\n multi-factor engine.\n :type one_time_pass: string\n :returns: A standard token for interacting with the web api.\n :rtype: string"}
{"_id": "q_15114", "text": "Create a big thumbnail for the given bitstream with the given width.\n It is used as the main image of the given item and shown in the item\n view page.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param bitstream_id: The bitstream from which to create the thumbnail.\n :type bitstream_id: int | long\n :param item_id: The item on which to set the thumbnail.\n :type item_id: int | long\n :param width: (optional) The width in pixels to which to resize (aspect\n ratio will be preserved). Defaults to 575.\n :type width: int | long\n :returns: The ItemthumbnailDao object that was created.\n :rtype: dict"}
{"_id": "q_15115", "text": "Create a 100x100 small thumbnail for the given item. It is used for\n preview purpose and displayed in the 'preview' and 'thumbnails'\n sidebar sections.\n\n :param token: A valid token for the user in question.\n :type token: string\n :param item_id: The item on which to set the thumbnail.\n :type item_id: int | long\n :returns: The item object (with the new thumbnail id) and the path\n where the newly created thumbnail is stored.\n :rtype: dict"}
{"_id": "q_15116", "text": "Upload a JSON file containing numeric scoring results to be added as\n scalars. File is parsed and then deleted from the server.\n\n :param token: A valid token for the user in question.\n :param filepath: The path to the JSON file.\n :param community_id: The id of the community that owns the producer.\n :param producer_display_name: The display name of the producer.\n :param producer_revision: The repository revision of the producer\n that produced this value.\n :param submit_time: The submit timestamp. Must be parsable with PHP\n strtotime().\n :param config_item_id: (optional) If this value pertains to a specific\n configuration item, pass its id here.\n :param test_dataset_id: (optional) If this value pertains to a\n specific test dataset, pass its id here.\n :param truth_dataset_id: (optional) If this value pertains to a\n specific ground truth dataset, pass its id here.\n :param parent_keys: (optional) Semicolon-separated list of parent keys\n to look for numeric results under. Use '.' to denote nesting, like\n in normal javascript syntax.\n :param silent: (optional) If true, do not perform threshold-based email\n notifications for this scalar.\n :param unofficial: (optional) If true, creates an unofficial scalar\n visible only to the user performing the submission.\n :param build_results_url: (optional) A URL for linking to build results\n for this submission.\n :param branch: (optional) The branch name in the source repository for\n this submission.\n :param params: (optional) Any key/value pairs that should be displayed\n with this scalar result.\n :type params: dict\n :param extra_urls: (optional) Other URLs that should be displayed\n with this scalar result. Each element of the list should be a dict\n with the following keys: label, text, href\n :type extra_urls: list of dicts\n :returns: The list of scalars that were created."}
{"_id": "q_15117", "text": "Find a hash value for the linear combination of invocation methods."}
{"_id": "q_15118", "text": "Connects to a Siemens S7 PLC.\n\n Connects to a Siemens S7 using the Snap7 library.\n See [the snap7 documentation](http://snap7.sourceforge.net/) for\n supported models and more details.\n\n It's not currently possible to query the device for available pins,\n so `available_pins()` returns an empty list. Instead, you should use\n `map_pin()` to map to a Merker, Input or Output in the PLC. The\n internal id you should use is a string following this format:\n '[DMQI][XBWD][0-9]+.?[0-9]*' where:\n\n * [DMQI]: D for DB, M for Merker, Q for Output, I for Input\n * [XBWD]: X for bit, B for byte, W for word, D for dword\n * [0-9]+: Address of the resource\n * [0-9]*: Bit of the address (type X only, ignored in others)\n\n For example: 'IB100' will read a byte from an input at address 100 and\n 'MX50.2' will read/write bit 2 of the Merker at address 50. It's not\n allowed to write to inputs (I), but you can read/write Outputs, DBs and\n Merkers. If it's disallowed by the PLC, an exception will be thrown by\n python-snap7 library.\n\n For this library to work, it might be needed to change some settings\n in the PLC itself. See\n [the snap7 documentation](http://snap7.sourceforge.net/) for more\n information. You also need to put the PLC in RUN mode. Note however that\n having a Ladder program downloaded, running and modifying variables\n will probably interfere with inputs and outputs, so put it in RUN mode,\n but preferably without a downloaded program.\n\n @arg address IP address of the module.\n @arg rack rack where the module is installed.\n @arg slot slot in the rack where the module is installed.\n @arg port port the PLC is listening to.\n\n @throw RuntimeError if something went wrong\n @throw any exception thrown by `snap7`'s methods."}
{"_id": "q_15119", "text": "Returns a map of nodename to average fitness value for this block.\n Assumes that required resources have been checked on all nodes."}
{"_id": "q_15120", "text": "Pseudo handler placeholder while signal is being processed"}
{"_id": "q_15121", "text": "Pause execution; execution will resume in X seconds or when the\n appropriate resume signal is received. Execution will jump to the\n callback_function; the default callback function is the handler\n method, which will run all tasks registered with the reg_on_resume\n method.\n Returns True if the timer expired, otherwise returns False"}
{"_id": "q_15122", "text": "Run all status tasks, then run all tasks in the resume queue"}
{"_id": "q_15123", "text": "Returns a list of available drivers names."}
{"_id": "q_15124", "text": "Fetch time series data from OpenTSDB\n\n Parameters:\n metric:\n A string representing a valid OpenTSDB metric.\n\n tags:\n A dict mapping tag names to tag values. Tag names and values are\n always strings.\n\n { 'user_id': '44' }\n\n start:\n A datetime.datetime-like object representing the start of the\n range to query over.\n\n end:\n A datetime.datetime-like object representing the end of the\n range to query over.\n\n aggregator:\n The function for merging multiple time series together. For\n example, if the \"user_id\" tag is not specified, this aggregator\n function is used to combine all heart rate time series into one\n time series. (Yes, this isn't very useful.)\n\n For queries that return only one time series, this parameter is\n not relevant.\n\n Valid values: \"sum\", \"min\", \"max\", \"avg\", \"dev\"\n\n See: http://opentsdb.net/docs/build/html/user_guide/query/aggregators.html\n\n downsampling:\n A relative time interval to \"downsample\". This isn't true\n downsampling; rather, if you specify a downsampling of \"5m\"\n (five minutes), OpenTSDB will split data into five minute\n intervals, and return one data point in the middle of each\n interval whose value is the average of all data points within\n that interval.\n\n Valid relative time values are strings of the following format:\n\n \"<amount><time_unit>\"\n\n Valid time units: \"ms\", \"s\", \"m\", \"h\", \"d\", \"w\", \"n\", \"y\"\n\n Date and time format: http://opentsdb.net/docs/build/html/user_guide/query/dates.html\n\n ms_resolution:\n Whether or not to output data point timestamps in milliseconds\n or seconds. If this flag is false and there are multiple\n data points within a second, those data points will be down\n sampled using the query's aggregation function.\n\n Returns:\n A dict mapping timestamps to data points"}
{"_id": "q_15125", "text": "Fetch and sort time series data from OpenTSDB\n\n Takes the same parameters as `fetch_metric`, but returns a list of\n (timestamp, value) tuples sorted by timestamp."}
{"_id": "q_15126", "text": "Extract function signature from an existing partial instance."}
{"_id": "q_15127", "text": "Calculate new argv and extra_argv values resulting from adding\n the specified positional and keyword arguments."}
{"_id": "q_15128", "text": "Maps a pin number to a physical device pin.\n\n To make it easy to change drivers without having to refactor a lot of\n code, this library does not use the names set by the driver to identify\n a pin. This function will map a number, which will be used by other\n functions, to a physical pin represented by the driver's pin id. That\n way, if you need to use another pin or change the underlying driver\n completely, you only need to redo the mapping.\n\n If you're developing a driver, keep in mind that your driver will not\n know about this. The other functions will translate the mapped pin to\n your id before calling your function.\n\n @arg abstract_pin_id the id that will identify this pin in the\n other function calls. You can choose what you want.\n\n @arg physical_pin_id the id returned in the driver.\n See `AbstractDriver.available_pins`. Setting it to None removes the\n mapping."}
{"_id": "q_15129", "text": "Sets pin `pin` to `direction`.\n\n The pin should support the requested mode. Calling this function\n on an unmapped pin does nothing. Calling it with an unsupported direction\n throws RuntimeError.\n\n If you're developing a driver, you should implement\n _set_pin_direction(self, pin, direction) where `pin` will be one of\n your internal IDs. If a pin is set to OUTPUT, put it on LOW state.\n\n @arg pin pin id you've set using `AbstractDriver.map_pin`\n @arg mode a value from `AbstractDriver.Direction`\n\n @throw KeyError if pin isn't mapped.\n @throw RuntimeError if direction is not supported by pin."}
{"_id": "q_15130", "text": "Gets the `ahio.Direction` this pin was set to.\n\n If you're developing a driver, implement _pin_direction(self, pin)\n\n @arg pin the pin you want to see the mode\n @returns the `ahio.Direction` the pin is set to\n\n @throw KeyError if pin isn't mapped."}
{"_id": "q_15131", "text": "Sets pin `pin` to `type`.\n\n The pin should support the requested mode. Calling this function\n on an unmapped pin does nothing. Calling it with an unsupported mode\n throws RuntimeError.\n\n If you're developing a driver, you should implement\n _set_pin_type(self, pin, ptype) where `pin` will be one of your\n internal IDs. If a pin is set to OUTPUT, put it on LOW state.\n\n @arg pin pin id you've set using `AbstractDriver.map_pin`\n @arg mode a value from `AbstractDriver.PortType`\n\n @throw KeyError if pin isn't mapped.\n @throw RuntimeError if type is not supported by pin."}
{"_id": "q_15132", "text": "Gets the `ahio.PortType` this pin was set to.\n\n If you're developing a driver, implement _pin_type(self, pin)\n\n @arg pin the pin you want to see the mode\n @returns the `ahio.PortType` the pin is set to\n\n @throw KeyError if pin isn't mapped."}
{"_id": "q_15133", "text": "Sets the output to the given value.\n\n Sets `pin` output to given value. If the pin is in INPUT mode, do\n nothing. If it's an analog pin, value should be in write_range.\n If it's not in the allowed range, it will be clamped. If pin is in\n digital mode, value can be `ahio.LogicValue` if `pwm` = False, or a\n number between 0 and 1 if `pwm` = True. If PWM is False, the pin will\n be set to HIGH or LOW; if `pwm` is True, a PWM wave with the given\n cycle will be created. If the pin does not support PWM and `pwm` is\n True, raise RuntimeError. The `pwm` argument should be ignored in case\n the pin is analog. If value is not valid for the given\n pwm/analog|digital combination, raise TypeError.\n\n If you're developing a driver, implement _write(self, pin, value, pwm)\n\n @arg pin the pin to write to\n @arg value the value to write on the pin\n @arg pwm whether the output should be a pwm wave\n\n @throw RuntimeError if the pin does not support PWM and `pwm` is True.\n @throw TypeError if value is not valid for this pin's mode and pwm\n value.\n @throw KeyError if pin isn't mapped."}
{"_id": "q_15134", "text": "Reads value from pin `pin`.\n\n Returns the value read from pin `pin`. If it's an analog pin, returns\n a number in analog.input_range. If it's digital, returns\n `ahio.LogicValue`.\n\n If you're developing a driver, implement _read(self, pin)\n\n @arg pin the pin to read from\n @returns the value read from the pin\n\n @throw KeyError if pin isn't mapped."}
{"_id": "q_15135", "text": "Sets the analog reference to `reference`\n\n If the driver supports per pin reference setting, set pin to the\n desired reference. If not, passing None means set to all, which is the\n default in most hardware. If only per pin reference is supported and\n pin is None, raise RuntimeError.\n\n If you're developing a driver, implement\n _set_analog_reference(self, reference, pin). Raise RuntimeError if pin\n was set but is not supported by the platform.\n\n @arg reference the value that describes the analog reference. See\n `AbstractDriver.analog_references`\n @arg pin if the driver supports it, the pin that will use\n `reference` as reference. None for all.\n\n @throw RuntimeError if pin is None on a per pin only hardware, or if\n it's a valid pin on a global only analog reference hardware.\n @throw KeyError if pin isn't mapped."}
{"_id": "q_15136", "text": "Integrate SIR epidemic model\n\n Simulate a very basic deterministic SIR system.\n\n :param 2x1 numpy array y0: initial conditions\n :param Ntimestep length numpy array time: Vector of time points that \\\n solution is returned at\n :param float beta: transmission rate\n :param float gamma: recovery rate\n\n :returns: (2)x(Ntimestep) numpy array Xsim: first row S(t), second row I(t)"}
{"_id": "q_15137", "text": "Return the URL of the server.\n\n :returns: URL of the server\n :rtype: string"}
{"_id": "q_15138", "text": "Returns an estimate for the maximum amount of memory to be consumed by numpy arrays."}
{"_id": "q_15139", "text": "Start a Modbus server.\n\n The following classes are available with their respective named\n parameters:\n \n ModbusTcpClient\n host: The host to connect to (default 127.0.0.1)\n port: The modbus port to connect to (default 502)\n source_address: The source address tuple to bind to (default ('', 0))\n timeout: The timeout to use for this socket (default Defaults.Timeout)\n\n ModbusUdpClient\n host: The host to connect to (default 127.0.0.1)\n port: The modbus port to connect to (default 502)\n timeout: The timeout to use for this socket (default None)\n\n ModbusSerialClient\n method: The method to use for connection (ascii, rtu, binary)\n port: The serial port to attach to\n stopbits: The number of stop bits to use (default 1)\n bytesize: The bytesize of the serial messages (default 8 bits)\n parity: Which kind of parity to use (default None)\n baudrate: The baud rate to use for the serial device\n timeout: The timeout between serial requests (default 3s)\n\n When configuring the ports, the following convention should be\n respected:\n \n portname: C1:13 -> Coil on device 1, address 13\n\n The letters can be:\n\n C = Coil\n I = Input\n R = Register\n H = Holding\n\n @arg configuration a string that instantiates one of those classes.\n\n @throw RuntimeError can't connect to Arduino"}
{"_id": "q_15140", "text": "We do not support multiple signatures in XPI signing because the client\n side code makes some pretty reasonable assumptions about a single signature\n on any given JAR. This function returns True if the file name given is one\n that we dispose of to prevent multiple signatures."}
{"_id": "q_15141", "text": "Return an exception given status and error codes.\n\n :param status_code: HTTP status code.\n :type status_code: None | int\n :param error_code: Midas Server error code.\n :type error_code: None | int\n :param value: Message to display.\n :type value: string\n :returns: Exception.\n :rtype : pydas.exceptions.ResponseError"}
{"_id": "q_15142", "text": "Read one VLQ-encoded integer value from an input data stream."}
{"_id": "q_15143", "text": "Read a table structure.\n\n These are used by Blizzard to collect pieces of data together. Each\n value is prefixed by two bytes, first denoting (doubled) index and the\n second denoting some sort of key -- so far it has always been '09'. The\n actual value follows as a Variable-Length Quantity, also known as uintvar.\n The actual value is also doubled.\n\n In some tables the keys might jump from 0A 09 to 04 09 for example.\n I have no idea why this happens, as the next logical key is 0C. Perhaps\n it's a table in a table? Some sort of headers might exist for these\n tables, I'd imagine at least denoting length. Further research required."}
{"_id": "q_15144", "text": "Print a summary of the game details."}
{"_id": "q_15145", "text": "Once the file is found, this function displays the file's data and the associated graphic."}
{"_id": "q_15146", "text": "This function separates the data from the file in order to display curves, and puts them into the appropriate arrays."}
{"_id": "q_15147", "text": "The following attaches the function \"display_the_graphic\" to the slider.\n This is needed because a connection cannot pass parameters to the function, but \"display_the_graphic\" takes some."}
{"_id": "q_15148", "text": "This function displays information about curves.\n Inputs: num_curve ; The index of the curve's line to display.\n information ; The array which contains the information of all curves to display."}
{"_id": "q_15149", "text": "This function displays an error message when a wrong value is typed."}
{"_id": "q_15150", "text": "This function executes planarRad using the batch file."}
{"_id": "q_15151", "text": "This function cancels PlanarRad."}
{"_id": "q_15152", "text": "This function quits PlanarRad, checking if PlanarRad is running before."}
{"_id": "q_15153", "text": "This function programs the button to save the displayed figure\n as a png file in the current directory."}
{"_id": "q_15154", "text": "The following opens the documentation file."}
{"_id": "q_15155", "text": "This function does all required actions at the beginning when we run the GUI."}
{"_id": "q_15156", "text": "The following retrieves the coordinates of the mouse on the canvas."}
{"_id": "q_15157", "text": "In the IOU fungible, the supply is set by the Issuer, who issues funds."}
{"_id": "q_15158", "text": "highest lock on height"}
{"_id": "q_15159", "text": "highest valid lockset on height"}
{"_id": "q_15160", "text": "setup a timeout for waiting for a proposal"}
{"_id": "q_15161", "text": "called to inform about synced peers"}
{"_id": "q_15162", "text": "make privkeys that support coloring, see utils.cstr"}
{"_id": "q_15163", "text": "bandwidths are inaccurate, as we don't account for parallel transfers here"}
{"_id": "q_15164", "text": "returns class._on_msg_unsafe, use x.im_self to get class"}
{"_id": "q_15165", "text": "registers NativeContract classes"}
{"_id": "q_15166", "text": "returns True if unknown"}
{"_id": "q_15167", "text": "Disables analog reporting for a single analog pin.\n\n :param pin: Analog pin number. For example for A0, the number is 0.\n\n :return: No return value"}
{"_id": "q_15168", "text": "Enables analog reporting by turning reporting on for a single pin.\n\n :param pin: Analog pin number. For example for A0, the number is 0.\n\n :return: No return value"}
{"_id": "q_15169", "text": "Enables digital reporting by turning reporting on for all 8 bits in the \"port\" -\n this is part of Firmata's protocol specification.\n\n :param pin: Pin and all pins for this port\n\n :return: No return value"}
{"_id": "q_15170", "text": "This method will send an extended data analog output command to the selected pin\n\n :param pin: 0 - 127\n\n :param data: 0 - 0xfffff"}
{"_id": "q_15171", "text": "This method stops an I2C_READ_CONTINUOUSLY operation for the i2c device address specified.\n\n :param address: address of i2c device"}
{"_id": "q_15172", "text": "This method \"arms\" an analog pin for its data to be latched and saved in the latching table.\n If a callback method is provided, when latching criteria is achieved, the callback function is called\n with latching data notification. In that case, the latching table is not updated.\n\n :param pin: Analog pin number (value following an 'A' designator, i.e. A5 = 5)\n\n :param threshold_type: ANALOG_LATCH_GT | ANALOG_LATCH_LT | ANALOG_LATCH_GTE | ANALOG_LATCH_LTE\n\n :param threshold_value: numerical value - between 0 and 1023\n\n :param cb: callback method\n\n :return: True if successful, False if parameter data is invalid"}
{"_id": "q_15173", "text": "This method \"arms\" a digital pin for its data to be latched and saved in the latching table.\n If a callback method is provided, when latching criteria is achieved, the callback function is called\n with latching data notification. In that case, the latching table is not updated.\n\n :param pin: Digital pin number\n\n :param threshold_type: DIGITAL_LATCH_HIGH | DIGITAL_LATCH_LOW\n\n :param cb: callback function\n\n :return: True if successful, False if parameter data is invalid"}
{"_id": "q_15174", "text": "Configure a pin as a servo pin. Set pulse min, max in ms.\n\n :param pin: Servo Pin.\n\n :param min_pulse: Min pulse width in ms.\n\n :param max_pulse: Max pulse width in ms.\n\n :return: No return value"}
{"_id": "q_15175", "text": "Configure stepper motor prior to operation.\n\n :param steps_per_revolution: number of steps per motor revolution\n\n :param stepper_pins: a list of control pin numbers - either 4 or 2"}
{"_id": "q_15176", "text": "Request the stepper library version from the Arduino.\n To retrieve the version after this command is called, call\n get_stepper_version"}
{"_id": "q_15177", "text": "open the serial port using the configuration data\n returns a reference to this instance"}
{"_id": "q_15178", "text": "This method continually runs. If an incoming character is available on the serial port,\n it is read and placed on the _command_deque.\n @return: Never returns"}
{"_id": "q_15179", "text": "Set the brightness level for the entire display\n @param brightness: brightness level (0 -15)"}
{"_id": "q_15180", "text": "Write the entire buffer to the display"}
{"_id": "q_15181", "text": "Set all led's to off."}
{"_id": "q_15182", "text": "This method handles the incoming digital message.\n It stores the data values in the digital response table.\n Data is stored for all 8 bits of a digital port\n\n :param data: Message data from Firmata\n\n :return: No return value."}
{"_id": "q_15183", "text": "This method handles the incoming sonar data message and stores\n the data in the response table.\n\n :param data: Message data from Firmata\n\n :return: No return value."}
{"_id": "q_15184", "text": "This method will send a Sysex command to Firmata with any accompanying data\n\n :param sysex_command: sysex command\n\n :param sysex_data: data for command\n\n :return : No return value."}
{"_id": "q_15185", "text": "This method is used to transmit a non-sysex command.\n\n :param command: Command to send to firmata includes command + data formatted by caller\n\n :return : No return value."}
{"_id": "q_15186", "text": "Send the reset command to the Arduino.\n It resets the response tables to their initial values\n\n :return: No return value"}
{"_id": "q_15187", "text": "This method handles the incoming string data message from Firmata.\n The string is printed to the console\n\n :param data: Message data from Firmata\n\n :return: No return value."}
{"_id": "q_15188", "text": "Combine finder_image_urls and extender_image_urls,\n remove duplicate but keep order"}
{"_id": "q_15189", "text": "Find image URL in background-image\n\n Example:\n <div style=\"width: 100%; height: 100%; background-image: url(http://distilleryimage10.ak.instagram.com/bde04558a43b11e28e5d22000a1f979a_7.jpg);\" class=\"Image iLoaded iWithTransition Frame\" src=\"http://distilleryimage10.ak.instagram.com/bde04558a43b11e28e5d22000a1f979a_7.jpg\"></div>\n to\n http://distilleryimage10.ak.instagram.com/bde04558a43b11e28e5d22000a1f979a_7.jpg"}
{"_id": "q_15190", "text": "Condition an image for use with the VGG16 model."}
{"_id": "q_15191", "text": "Create a function for the response of a layer."}
{"_id": "q_15192", "text": "Load from a file into the target table, handling each step of the\n load process.\n\n Can load from text files, and properly formatted giraffez archive\n files. In both cases, if Gzip compression is detected the file will be\n decompressed while reading and handled appropriately. The encoding is\n determined automatically by the contents of the file.\n\n It is not necessary to set the columns in use prior to loading from a file.\n In the case of a text file, the header is used to determine column names\n and their order. Valid delimiters include '|', ',', and '\\\\t' (tab). When\n loading an archive file, the column information is decoded alongside the data.\n\n :param str filename: The location of the file to be loaded\n :param str table: The name of the target table, if it was not specified\n to the constructor for the instance\n :param str null: The string that indicates a null value in the rows being\n inserted from a file. Defaults to 'NULL'\n :param str delimiter: When loading a file, indicates that fields are\n separated by this delimiter. Defaults to :code:`None`, which causes the\n delimiter to be determined from the header of the file. In most\n cases, this behavior is sufficient\n :param str quotechar: The character used to quote fields containing special characters,\n like the delimiter.\n :param bool panic: If :code:`True`, when an error is encountered it will be\n raised. Otherwise, the error will be logged and :code:`self.error_count`\n is incremented.\n :return: The output of the call to\n :meth:`~giraffez.load.TeradataBulkLoad.finish`\n :raises `giraffez.errors.GiraffeError`: if table was not set and :code:`table`\n is :code:`None`, or if a Teradata error occurred while retrieving table info.\n :raises `giraffez.errors.GiraffeEncodeError`: if :code:`panic` is :code:`True` and there\n are format errors in the row values."}
{"_id": "q_15193", "text": "Load a single row into the target table.\n\n :param list items: A list of values in the row corresponding to the\n fields specified by :code:`self.columns`\n :param bool panic: If :code:`True`, when an error is encountered it will be\n raised. Otherwise, the error will be logged and :code:`self.error_count`\n is incremented.\n :raises `giraffez.errors.GiraffeEncodeError`: if :code:`panic` is :code:`True` and there\n are format errors in the row values.\n :raises `giraffez.errors.GiraffeError`: if table name is not set.\n :raises `giraffez.TeradataPTError`: if there is a problem\n connecting to Teradata."}
{"_id": "q_15194", "text": "Attempt release of target mload table.\n\n :raises `giraffez.errors.GiraffeError`: if table was not set by\n the constructor, the :code:`TeradataBulkLoad.table`, or\n :meth:`~giraffez.load.TeradataBulkLoad.from_file`."}
{"_id": "q_15195", "text": "The names of the work tables used for loading.\n\n :return: A list of four tables, each the name of the target table\n with the added suffixes, \"_wt\", \"_log\", \"_e1\", and \"_e2\"\n :raises `giraffez.errors.GiraffeError`: if table was not set by\n the constructor, the :code:`TeradataBulkLoad.table`, or\n :meth:`~giraffez.load.TeradataBulkLoad.from_file`."}
{"_id": "q_15196", "text": "Retrieve the decrypted value of a key in a giraffez\n configuration file.\n\n :param str key: The key used to lookup the encrypted value"}
{"_id": "q_15197", "text": "Display results in table format"}
{"_id": "q_15198", "text": "Execute commands using CLIv2.\n\n :param str command: The SQL command to be executed\n :param bool coerce_floats: Coerce Teradata decimal types into Python floats\n :param bool parse_dates: Parses Teradata datetime types into Python datetimes\n :param bool header: Include row header\n :param bool sanitize: Whether or not to call :func:`~giraffez.sql.prepare_statement`\n on the command\n :param bool silent: Silence console logging (within this function only)\n :param bool panic: If :code:`True`, when an error is encountered it will be\n raised.\n :param bool multi_statement: Execute in multi-statement mode\n :param bool prepare_only: Only prepare the command (no results)\n :return: a cursor over the results of each statement in the command\n :rtype: :class:`~giraffez.cmd.Cursor`\n :raises `giraffez.TeradataError`: if the query is invalid\n :raises `giraffez.errors.GiraffeError`: if the return data could not be decoded"}
{"_id": "q_15199", "text": "A class method to write a default configuration file structure to a file.\n Note that the contents of the file will be overwritten if it already exists.\n\n :param str conf: The name of the file to write to. Defaults to :code:`None`, for ~/.girafferc\n :return: The content written to the file\n :rtype: str"}
{"_id": "q_15200", "text": "Set the names of columns to be used when iterating through the list,\n retrieving names, etc.\n\n :param list names: A list of names to be used, or :code:`None` for all"}
{"_id": "q_15201", "text": "Writes export archive files in the Giraffez archive format.\n This takes a `giraffez.io.Writer` and writes archive chunks to\n file until all rows for a given statement have been exhausted.\n\n .. code-block:: python\n\n with giraffez.BulkExport(\"database.table_name\") as export:\n with giraffez.Writer(\"database.table_name.tar.gz\", 'wb', use_gzip=True) as out:\n for n in export.to_archive(out):\n print(\"Rows: {}\".format(n))\n\n :param `giraffez.io.Writer` writer: A writer handling the archive output\n\n :rtype: iterator (yields ``int``)"}
{"_id": "q_15202", "text": "Sets the current encoder output to Python `str` and returns\n a row iterator.\n\n :param str null: The string representation of null values\n :param str delimiter: The string delimiting values in the output\n string\n\n :rtype: iterator (yields ``str``)"}
{"_id": "q_15203", "text": "Convert string with optional k, M, G, T multiplier to float"}
{"_id": "q_15204", "text": "Convert string with gains of individual amplification elements to dict"}
{"_id": "q_15205", "text": "Wrap text to terminal width with default indentation"}
{"_id": "q_15206", "text": "Returns detected SoapySDR devices"}
{"_id": "q_15207", "text": "Return freqs and averaged PSD for given center frequency"}
{"_id": "q_15208", "text": "Wait for all PSD threads to finish and return result"}
{"_id": "q_15209", "text": "Compute PSD from samples and update average for given center frequency"}
{"_id": "q_15210", "text": "Submits a callable to be executed with the given arguments.\n\n Count maximum reached work queue size in ThreadPoolExecutor.max_queue_size_reached."}
{"_id": "q_15211", "text": "Convert integration time to number of repeats"}
{"_id": "q_15212", "text": "Prepare samples buffer and start streaming samples from device"}
{"_id": "q_15213", "text": "Tune to specified center frequency and compute Power Spectral Density"}
{"_id": "q_15214", "text": "Sweep spectrum using frequency hopping"}
{"_id": "q_15215", "text": "private helper method"}
{"_id": "q_15216", "text": "Force cmake to run"}
{"_id": "q_15217", "text": "Return the node name where the ``name`` would land to"}
{"_id": "q_15218", "text": "Return the encoding, idletime, or refcount about the key"}
{"_id": "q_15219", "text": "Pop a value off the tail of ``src``, push it on the head of ``dst``\n and then return it.\n\n This command blocks until a value is in ``src`` or until ``timeout``\n seconds elapse, whichever is first. A ``timeout`` value of 0 blocks\n forever.\n Not atomic"}
{"_id": "q_15220", "text": "Move ``value`` from set ``src`` to set ``dst``\n not atomic"}
{"_id": "q_15221", "text": "Returns the members of the set resulting from the union between\n the first set and all the successive sets."}
{"_id": "q_15222", "text": "Store the union of sets ``src``, ``args`` into a new\n set named ``dest``. Returns the number of keys in the new set."}
{"_id": "q_15223", "text": "Sets each key in the ``mapping`` dict to its corresponding value if\n none of the keys are already set"}
{"_id": "q_15224", "text": "Returns a list of keys matching ``pattern``"}
{"_id": "q_15225", "text": "Return a set of datetimes, after filtering ``datetimes``.\n\n The result will be the ``datetimes`` which are ``number`` of\n units before ``now``, until ``now``, with approximately one\n unit between each of them. The first datetime for any unit is\n kept, later duplicates are removed.\n\n If there are ``datetimes`` after ``now``, they will be\n returned unfiltered."}
{"_id": "q_15226", "text": "Return a datetime with the same value as ``dt``, to a\n resolution of weeks.\n\n ``firstweekday`` determines when the week starts. It defaults\n to Saturday."}
{"_id": "q_15227", "text": "Return a set of datetimes that should be deleted, out of ``datetimes``.\n\n See ``to_keep`` for a description of arguments."}
{"_id": "q_15228", "text": "Prepare the date in the instance state for serialization."}
{"_id": "q_15229", "text": "Waits for a port event. When a port event occurs it is placed onto the\n event queue.\n\n :param port: The port we are waiting for interrupts on (GPIOA/GPIOB).\n :type port: int\n :param chip: The chip we are waiting for interrupts on.\n :type chip: :class:`pifacecommon.mcp23s17.MCP23S17`\n :param pin_function_maps: A list of classes that have inheritted from\n :class:`FunctionMap`\\ s describing what to do with events.\n :type pin_function_maps: list\n :param event_queue: A queue to put events on.\n :type event_queue: :py:class:`multiprocessing.Queue`"}
{"_id": "q_15230", "text": "Wait until a file exists.\n\n :param filename: The name of the file to wait for.\n :type filename: string"}
{"_id": "q_15231", "text": "Registers a pin number and direction to a callback function.\n\n :param pin_num: The pin pin number.\n :type pin_num: int\n :param direction: The event direction\n (use: IODIR_ON/IODIR_OFF/IODIR_BOTH)\n :type direction: int\n :param callback: The function to run when event is detected.\n :type callback: function\n :param settle_time: Time within which subsequent events are ignored.\n :type settle_time: int"}
{"_id": "q_15232", "text": "Sends bytes via the SPI bus.\n\n :param bytes_to_send: The bytes to send on the SPI device.\n :type bytes_to_send: bytes\n :returns: bytes -- returned bytes from SPI device\n :raises: InitError"}
{"_id": "q_15233", "text": "Re-implement almost the same code from crispy_forms but pass the\n ``form`` instance to the item's ``render_link`` method."}
{"_id": "q_15234", "text": "Find tab fields listed as invalid"}
{"_id": "q_15235", "text": "Render the link for the tab-pane. It must be called after render so\n ``css_class`` is updated with ``active`` class name if needed."}
{"_id": "q_15236", "text": "Get package version from installed distribution or configuration file if not\n installed"}
{"_id": "q_15237", "text": "Add number of photos to each gallery."}
{"_id": "q_15238", "text": "Set currently authenticated user as the author of the gallery."}
{"_id": "q_15239", "text": "For each photo set it's author to currently authenticated user."}
{"_id": "q_15240", "text": "Outputs a list of tuples with ranges or the empty list\n According to the rfc, start or end values can be omitted"}
{"_id": "q_15241", "text": "Converts to valid byte ranges"}
{"_id": "q_15242", "text": "Sorts and removes overlaps"}
{"_id": "q_15243", "text": "Used by every other method, it makes a GET request with the given params.\n\n Args:\n url (str): relative path of a specific service (account_info, ...).\n params (:obj:`dict`, optional): contains parameters to be sent in the GET request.\n\n Returns:\n dict: results of the response of the GET request."}
{"_id": "q_15244", "text": "Requests direct download link for requested file,\n this method makes use of the response of prepare_download, prepare_download must be called first.\n\n Args:\n file_id (str): id of the file to be downloaded.\n\n ticket (str): preparation ticket is found in prepare_download response,\\\n this is why we need to call prepare_download before get_download_link.\n\n captcha_response (:obj:`str`, optional): sometimes prepare_download will have captcha url to be solved, \\\n first, this is the solution of the captcha.\n\n Returns:\n dict: dictionary containing (file info, download url, ...). ::\n\n {\n \"name\": \"The quick brown fox.txt\",\n \"size\": 12345,\n \"sha1\": \"2fd4e1c67a2d28fced849ee1bb76e7391b93eb12\",\n \"content_type\": \"plain/text\",\n \"upload_at\": \"2011-01-26 13:33:37\",\n \"url\": \"https://abvzps.example.com/dl/l/4spxX_-cSO4/The+quick+brown+fox.txt\",\n \"token\": \"4spxX_-cSO4\"\n }"}
{"_id": "q_15245", "text": "Makes a request to prepare for file upload.\n\n Note:\n If folder_id is not provided, it will make an upload link to the ``Home`` folder.\n\n Args:\n folder_id (:obj:`str`, optional): folder-ID to upload to.\n sha1 (:obj:`str`, optional): expected sha1. If sha1 of uploaded file doesn't match this value, upload fails.\n httponly (:obj:`bool`, optional): If this is set to true, use only http upload links.\n\n Returns:\n dict: dictionary containing (url: will be used in actual upload, valid_until). ::\n\n {\n \"url\": \"https://1fiafqj.oloadcdn.net/uls/nZ8H3X9e0AotInbU\",\n \"valid_until\": \"2017-08-19 19:06:46\"\n }"}
{"_id": "q_15246", "text": "Used to make a remote file upload to openload.co\n\n Note:\n If folder_id is not provided, the file will be uploaded to ``Home`` folder.\n\n Args:\n remote_url (str): direct link of file to be remotely downloaded.\n folder_id (:obj:`str`, optional): folder-ID to upload to.\n headers (:obj:`dict`, optional): additional HTTP headers (e.g. Cookies or HTTP Basic-Auth)\n\n Returns:\n dict: dictionary containing (\"id\": uploaded file id, \"folderid\"). ::\n\n {\n \"id\": \"12\",\n \"folderid\": \"4248\"\n }"}
{"_id": "q_15247", "text": "Request a list of files and folders in specified folder.\n\n Note:\n if folder_id is not provided, ``Home`` folder will be listed\n\n Args:\n folder_id (:obj:`str`, optional): id of the folder to be listed.\n\n Returns:\n dict: dictionary containing only two keys (\"folders\", \"files\"), \\\n each key represents a list of dictionaries. ::\n\n {\n \"folders\": [\n {\n \"id\": \"5144\",\n \"name\": \".videothumb\"\n },\n {\n \"id\": \"5792\",\n \"name\": \".subtitles\"\n },\n ...\n ],\n \"files\": [\n {\n \"name\": \"big_buck_bunny.mp4.mp4\",\n \"sha1\": \"c6531f5ce9669d6547023d92aea4805b7c45d133\",\n \"folderid\": \"4258\",\n \"upload_at\": \"1419791256\",\n \"status\": \"active\",\n \"size\": \"5114011\",\n \"content_type\": \"video/mp4\",\n \"download_count\": \"48\",\n \"cstatus\": \"ok\",\n \"link\": \"https://openload.co/f/UPPjeAk--30/big_buck_bunny.mp4.mp4\",\n \"linkextid\": \"UPPjeAk--30\"\n },\n ...\n ]\n }"}
{"_id": "q_15248", "text": "Shows running file converts by folder\n\n Note:\n If folder_id is not provided, ``Home`` folder will be used.\n\n Args:\n folder_id (:obj:`str`, optional): id of the folder to list conversions of files exist in it.\n\n Returns:\n list: list of dictionaries, each dictionary represents a file conversion info. ::\n\n [\n {\n \"name\": \"Geysir.AVI\",\n \"id\": \"3565411\",\n \"status\": \"pending\",\n \"last_update\": \"2015-08-23 19:41:40\",\n \"progress\": 0.32,\n \"retries\": \"0\",\n \"link\": \"https://openload.co/f/f02JFG293J8/Geysir.AVI\",\n \"linkextid\": \"f02JFG293J8\"\n },\n ....\n ]"}
{"_id": "q_15249", "text": "calculates the humidity via the formula from weatherwise.org\n return the relative humidity"}
{"_id": "q_15250", "text": "Perform HTTP session to transmit defined weather values."}
{"_id": "q_15251", "text": "return CRC calc value from raw serial data"}
{"_id": "q_15252", "text": "perform CRC check on raw serial data, return true if valid.\n a valid CRC == 0."}
{"_id": "q_15253", "text": "given a packed storm date field, unpack and return 'YYYY-MM-DD' string."}
{"_id": "q_15254", "text": "return True if weather station returns Rev.B archives"}
{"_id": "q_15255", "text": "issue wakeup command to device to take out of standby mode."}
{"_id": "q_15256", "text": "returns a dictionary of fields from the newest archive record in the\n device. return None when no records are new."}
{"_id": "q_15257", "text": "read and parse a set of data read from the console. after the\n data is parsed it is available in the fields variable."}
{"_id": "q_15258", "text": "setup system logging to desired verbosity."}
{"_id": "q_15259", "text": "return gust data, if above threshold value and current time is inside\n reporting window period"}
{"_id": "q_15260", "text": "Useful for defining weather data published to the server. Parameters\n not set will be reset and not sent to server. Unknown keyword args will\n be silently ignored, so be careful. This is necessary for publishers\n that support more fields than others."}
{"_id": "q_15261", "text": "Store keyword args to be written to output file."}
{"_id": "q_15262", "text": "Write output file."}
{"_id": "q_15263", "text": "Helper decorator for transitioning to user-only requirements, this aids\n in situations where the request may be marked optional and causes an\n incorrect flow into user-only requirements.\n\n This decorator causes the requirement to look like a user-only requirement\n but passes the current request context internally to the requirement.\n\n This decorator is intended only to assist during a transitionary phase\n and will be removed in flask-allows 1.0\n\n See: :issue:`20,27`"}
{"_id": "q_15264", "text": "Initializes the Flask-Allows object against the provided application"}
{"_id": "q_15265", "text": "Pops the latest override context.\n\n If the override context was pushed by a different override manager,\n a ``RuntimeError`` is raised."}
{"_id": "q_15266", "text": "Allows temporarily pushing an override context, yields the new context\n into the following block."}
{"_id": "q_15267", "text": "Binds an additional to the current context, optionally use the\n current additionals in conjunction with this additional\n\n If ``use_parent`` is true, a new additional is created from the\n parent and child additionals rather than manipulating either\n directly."}
{"_id": "q_15268", "text": "Pops the latest additional context.\n\n If the additional context was pushed by a different additional manager,\n a ``RuntimeError`` is raised."}
{"_id": "q_15269", "text": "Allows temporarily pushing an additional context, yields the new context\n into the following block."}
{"_id": "q_15270", "text": "Append a number to duplicate field names to make them unique."}
{"_id": "q_15271", "text": "Generates the string to be shown as updates after the execution of a\n Cypher query\n\n :param results: ``ResultSet`` with the raw results of the execution of\n the Cypher query"}
{"_id": "q_15272", "text": "Generates a dictionary with safe keys and values to pass onto Neo4j\n\n :param query: string with the Cypher query to execute\n :param user_ns: dictionary with the IPython user space"}
{"_id": "q_15273", "text": "Executes a query and depending on the options of the extensions will\n return raw data, a ``ResultSet``, a Pandas ``DataFrame`` or a\n NetworkX graph.\n\n :param query: string with the Cypher query\n :param params: dictionary with parameters for the query (default=``None``)\n :param config: Configurable or NamedTuple with extra IPython configuration\n details. If ``None``, a new object will be created\n (defaults=``None``)\n :param conn: connection dictionary or string for the Neo4j backend.\n If ``None``, a new connection will be created\n (default=``None``)\n :param **kwargs: Any of the cell configuration options."}
{"_id": "q_15274", "text": "Returns a Pandas DataFrame instance built from the result set."}
{"_id": "q_15275", "text": "Generates a pylab pie chart from the result set.\n\n ``matplotlib`` must be installed, and in an\n IPython Notebook, inlining must be on::\n\n %%matplotlib inline\n\n Values (pie slice sizes) are taken from the\n rightmost column (numerical values required).\n All other columns are used to label the pie slices.\n\n :param key_word_sep: string used to separate column values\n from each other in pie labels\n :param title: plot title, defaults to name of value column\n :kwargs: any additional keyword arguments will be passed\n through to ``matplotlib.pylab.pie``."}
{"_id": "q_15276", "text": "Generates a pylab plot from the result set.\n\n ``matplotlib`` must be installed, and in an\n IPython Notebook, inlining must be on::\n\n %%matplotlib inline\n\n The first and last columns are taken as the X and Y\n values. Any columns between are ignored.\n\n :param title: plot title, defaults to names of Y value columns\n\n Any additional keyword arguments will be passed\n through to ``matplotlib.pylab.plot``."}
{"_id": "q_15277", "text": "Re-implementation of the permission_required decorator, honors settings.\n\n If ``DASHBOARD_REQUIRE_LOGIN`` is False, this decorator will always return\n ``True``, otherwise it will check for the permission as usual."}
{"_id": "q_15278", "text": "Returns the widgets sorted by position."}
{"_id": "q_15279", "text": "Returns all widgets that need an update.\n\n This should be scheduled every minute via crontab."}
{"_id": "q_15280", "text": "Unregisters the given widget."}
{"_id": "q_15281", "text": "Gets or creates the last update object for this widget."}
{"_id": "q_15282", "text": "Returns the setting for this widget from the database.\n\n :setting_name: The name of the setting.\n :default: Optional default value if the setting cannot be found."}
{"_id": "q_15283", "text": "Checks if an update is needed.\n\n Checks against ``self.update_interval`` and this widgets\n ``DashboardWidgetLastUpdate`` instance if an update is overdue.\n\n This should be called by\n ``DashboardWidgetPool.get_widgets_that_need_update()``, which in turn\n should be called by an admin command which should be scheduled every\n minute via crontab."}
{"_id": "q_15284", "text": "Create a spark bolt array from a local array.\n\n Parameters\n ----------\n a : array-like\n An array, any object exposing the array interface, an\n object whose __array__ method returns an array, or any\n (nested) sequence.\n\n context : SparkContext\n A context running Spark. (see pyspark)\n\n axis : tuple, optional, default=(0,)\n Which axes to distribute the array along. The resulting\n distributed object will use keys to represent these axes,\n with the remaining axes represented by values.\n\n dtype : data-type, optional, default=None\n The desired data-type for the array. If None, will\n be determined from the data. (see numpy)\n\n npartitions : int\n Number of partitions for parallelization.\n\n Returns\n -------\n BoltArraySpark"}
{"_id": "q_15285", "text": "Create a spark bolt array of ones.\n\n Parameters\n ----------\n shape : tuple\n The desired shape of the array.\n\n context : SparkContext\n A context running Spark. (see pyspark)\n\n axis : tuple, optional, default=(0,)\n Which axes to distribute the array along. The resulting\n distributed object will use keys to represent these axes,\n with the remaining axes represented by values.\n\n dtype : data-type, optional, default=float64\n The desired data-type for the array. If None, will\n be determined from the data. (see numpy)\n\n npartitions : int\n Number of partitions for parallelization.\n\n Returns\n -------\n BoltArraySpark"}
{"_id": "q_15286", "text": "Format target axes given an array shape"}
{"_id": "q_15287", "text": "Wrap an existing numpy constructor in a parallelized construction"}
{"_id": "q_15288", "text": "Align local bolt array so that axes for iteration are in the keys.\n\n This operation is applied before most functional operators.\n It ensures that the specified axes are valid, and might transpose/reshape\n the underlying array so that the functional operators can be applied\n over the correct records.\n\n Parameters\n ----------\n axes: tuple[int]\n One or more axes that will be iterated over by a functional operator\n\n Returns\n -------\n BoltArrayLocal"}
{"_id": "q_15289", "text": "Converts a BoltArrayLocal into a BoltArraySpark\n\n Parameters\n ----------\n sc : SparkContext\n The SparkContext which will be used to create the BoltArraySpark\n\n axis : tuple or int, optional, default=0\n The axis (or axes) across which this array will be parallelized\n\n Returns\n -------\n BoltArraySpark"}
{"_id": "q_15290", "text": "Converts a BoltArrayLocal into an RDD\n\n Parameters\n ----------\n sc : SparkContext\n The SparkContext which will be used to create the BoltArraySpark\n\n axis : tuple or int, optional, default=0\n The axis (or axes) across which this array will be parallelized\n\n Returns\n -------\n RDD[(tuple, ndarray)]"}
{"_id": "q_15291", "text": "Apply a function on each subarray.\n\n Parameters\n ----------\n func : function \n This is applied to each value in the intermediate RDD.\n\n Returns\n -------\n StackedArray"}
{"_id": "q_15292", "text": "Split values of distributed array into chunks.\n\n Transforms an underlying pair RDD of (key, value) into\n records of the form: (key, chunk id), (chunked value).\n Here, chunk id is a tuple identifying the chunk and\n chunked value is a subset of the data from each original value,\n that has been divided along the specified dimensions.\n\n Parameters\n ----------\n size : str or tuple or int\n If str, the average size (in KB) of the chunks in all value dimensions.\n If int or tuple, an explicit specification of the number of chunks in\n each value dimension.\n\n axis : tuple, optional, default=None\n One or more axes to estimate chunks for, if provided any\n other axes will use one chunk.\n\n padding: tuple or int, default = None\n Number of elements per dimension that will overlap with the adjacent chunk.\n If a tuple, specifies padding along each chunked dimension; if an int, same\n padding will be applied to all chunked dimensions."}
{"_id": "q_15293", "text": "Apply an array -> array function on each subarray.\n\n The function can change the shape of the subarray, but only along\n dimensions that are not chunked.\n\n Parameters\n ----------\n func : function\n Function of a single subarray to apply\n\n value_shape:\n Known shape of chunking plan after the map\n\n dtype: numpy.dtype, optional, default=None\n Known dtype of values resulting from operation\n\n Returns\n -------\n ChunkedArray"}
{"_id": "q_15294", "text": "Identify a plan for chunking values along each dimension.\n\n Generates an ndarray with the size (in number of elements) of chunks\n in each dimension. If provided, will estimate chunks for only a\n subset of axes, leaving all others to the full size of the axis.\n\n Parameters\n ----------\n size : string or tuple\n If str, the average size (in KB) of the chunks in all value dimensions.\n If int/tuple, an explicit specification of the number of chunks in\n each moving value dimension.\n\n axes : tuple, optional, default=None\n One or more axes to estimate chunks for, if provided any\n other axes will use one chunk.\n\n padding : tuple or int, optional, default=None\n Size of overlapping padding between chunks in each dimension.\n If tuple, specifies padding along each chunked dimension; if int,\n all dimensions use same padding; if None, no padding"}
{"_id": "q_15295", "text": "Obtain number of chunks for the given dimensions and chunk sizes.\n\n Given a plan for the number of chunks along each dimension,\n calculate the number of chunks that this will lead to.\n\n Parameters\n ----------\n plan: tuple or array-like\n Size of chunks (in number of elements) along each dimensions.\n Length must be equal to the number of dimensions.\n\n shape : tuple\n Shape of array to be chunked."}
{"_id": "q_15296", "text": "Obtain a binary mask by setting a subset of entries to true.\n\n Parameters\n ----------\n inds : array-like\n Which indices to set as true.\n\n n : int\n The length of the target mask."}
{"_id": "q_15297", "text": "Repartitions the underlying RDD\n\n Parameters\n ----------\n npartitions : int\n Number of partitions to repartion the underlying RDD to"}
{"_id": "q_15298", "text": "Return the mean of the array over the given axis.\n\n Parameters\n ----------\n axis : tuple or int, optional, default=None\n Axis to compute statistic over, if None\n will compute over all axes\n\n keepdims : boolean, optional, default=False\n Keep axis remaining after operation with size 1."}
{"_id": "q_15299", "text": "Return the variance of the array over the given axis.\n\n Parameters\n ----------\n axis : tuple or int, optional, default=None\n Axis to compute statistic over, if None\n will compute over all axes\n\n keepdims : boolean, optional, default=False\n Keep axis remaining after operation with size 1."}
{"_id": "q_15300", "text": "Return the standard deviation of the array over the given axis.\n\n Parameters\n ----------\n axis : tuple or int, optional, default=None\n Axis to compute statistic over, if None\n will compute over all axes\n\n keepdims : boolean, optional, default=False\n Keep axis remaining after operation with size 1."}
{"_id": "q_15301", "text": "Return the sum of the array over the given axis.\n\n Parameters\n ----------\n axis : tuple or int, optional, default=None\n Axis to compute statistic over, if None\n will compute over all axes\n\n keepdims : boolean, optional, default=False\n Keep axis remaining after operation with size 1."}
{"_id": "q_15302", "text": "Chunks records of a distributed array.\n\n Chunking breaks arrays into subarrays, using a specified\n size of chunks along each value dimension. Can alternatively\n specify an average chunk byte size (in kilobytes) and the size of\n chunks (as ints) will be computed automatically.\n\n Parameters\n ----------\n size : tuple, int, or str, optional, default = \"150\"\n A string giving the size in kilobytes, or a tuple with the size\n of chunks along each dimension.\n\n axis : int or tuple, optional, default = None\n One or more axes to chunk array along, if None\n will use all axes.\n\n padding: tuple or int, default = None\n Number of elements per dimension that will overlap with the adjacent chunk.\n If a tuple, specifies padding along each chunked dimension; if an int, same\n padding will be applied to all chunked dimensions.\n\n Returns\n -------\n ChunkedArray"}
{"_id": "q_15303", "text": "Swap axes from keys to values.\n\n This is the core operation underlying shape manipulation\n on the Spark bolt array. It exchanges an arbitrary set of axes\n between the keys and the values. If either is None, will only\n move axes in one direction (from keys to values, or values to keys).\n Keys moved to values will be placed immediately after the split;\n values moved to keys will be placed immediately before the split.\n\n Parameters\n ----------\n kaxes : tuple\n Axes from keys to move to values\n\n vaxes : tuple\n Axes from values to move to keys\n\n size : tuple or int, optional, default = \"150\"\n Can either provide a string giving the size in kilobytes,\n or a tuple with the number of chunks along each\n value dimension being moved\n\n Returns\n -------\n BoltArraySpark"}
{"_id": "q_15304", "text": "Return the array with two axes interchanged.\n\n Parameters\n ----------\n axis1 : int\n The first axis to swap\n\n axis2 : int\n The second axis to swap"}
{"_id": "q_15305", "text": "Return an array with the same data but a new shape.\n\n Currently only supports reshaping that independently\n reshapes the keys, or the values, or both.\n\n Parameters\n ----------\n shape : tuple of ints, or n ints\n New shape"}
{"_id": "q_15306", "text": "Remove one or more single-dimensional axes from the array.\n\n Parameters\n ----------\n axis : tuple or int\n One or more singleton axes to remove."}
{"_id": "q_15307", "text": "Cast the array to a specified type.\n\n Parameters\n ----------\n dtype : str or dtype\n Typecode or data-type to cast the array to (see numpy)"}
{"_id": "q_15308", "text": "Returns the contents as a local array.\n\n Will likely cause memory problems for large objects."}
{"_id": "q_15309", "text": "Coerce singletons and lists and ndarrays to tuples.\n\n Parameters\n ----------\n arg : tuple, list, ndarray, or singleton\n Item to coerce"}
{"_id": "q_15310", "text": "Coerce a list of arguments to a tuple.\n\n Parameters\n ----------\n args : tuple or nested tuple\n Pack arguments into a tuple, converting ((,...),) or (,) -> (,)"}
{"_id": "q_15311", "text": "Checks to see if a list of axes are contained within an array shape.\n\n Parameters\n ----------\n shape : tuple[int]\n the shape of a BoltArray\n\n axes : tuple[int]\n the axes to check against shape"}
{"_id": "q_15312", "text": "Test that a and b are close and match in shape.\n\n Parameters\n ----------\n a : ndarray\n First array to check\n\n b : ndarray\n Second array to check"}
{"_id": "q_15313", "text": "Force a slice to have defined start, stop, and step from a known dim.\n Start and stop will always be positive. Step may be negative.\n\n There is one exception: when a negative step overflows, the stop needs to have\n its default value set to -1. This is the only case of a negative start/stop\n value.\n\n Parameters\n ----------\n slc : slice or int\n The slice to modify, or int to convert to a slice\n\n dim : tuple\n Bound for slice"}
{"_id": "q_15314", "text": "Check to see if a proposed tuple of axes is a valid permutation\n of an old set of axes. Checks length, axis repetition, and bounds.\n\n Parameters\n ----------\n new : tuple\n tuple of proposed axes\n\n old : tuple\n tuple of old axes"}
{"_id": "q_15315", "text": "Check to see if a proposed tuple of axes is a valid reshaping of\n the old axes by ensuring that they can be factored.\n\n Parameters\n ----------\n new : tuple\n tuple of proposed axes\n\n old : tuple\n tuple of old axes"}
{"_id": "q_15316", "text": "If an ndarray has been split into multiple chunks by splitting it along\n each axis at a number of locations, this function rebuilds the\n original array from chunks.\n\n Parameters\n ----------\n vals : nested lists of ndarrays\n each level of nesting of the lists representing a dimension of\n the original array."}
{"_id": "q_15317", "text": "Decorator to append routed docstrings"}
{"_id": "q_15318", "text": "Reshape just the keys of a BoltArraySpark, returning a\n new BoltArraySpark.\n\n Parameters\n ----------\n shape : tuple\n New proposed axes."}
{"_id": "q_15319", "text": "Transpose just the keys of a BoltArraySpark, returning a\n new BoltArraySpark.\n\n Parameters\n ----------\n axes : tuple\n New proposed axes."}
{"_id": "q_15320", "text": "Create a local bolt array of ones.\n\n Parameters\n ----------\n shape : tuple\n Dimensions of the desired array\n\n dtype : data-type, optional, default=float64\n The desired data-type for the array. (see numpy)\n\n order : {'C', 'F', 'A'}, optional, default='C'\n The order of the array. (see numpy)\n\n Returns\n -------\n BoltArrayLocal"}
{"_id": "q_15321", "text": "Create a local bolt array of zeros.\n\n Parameters\n ----------\n shape : tuple\n Dimensions of the desired array.\n\n dtype : data-type, optional, default=float64\n The desired data-type for the array. (see numpy)\n\n order : {'C', 'F', 'A'}, optional, default='C'\n The order of the array. (see numpy)\n\n Returns\n -------\n BoltArrayLocal"}
{"_id": "q_15322", "text": "Join a sequence of arrays together.\n\n Parameters\n ----------\n arrays : tuple\n A sequence of array-like e.g. (a1, a2, ...)\n\n axis : int, optional, default=0\n The axis along which the arrays will be joined.\n\n Returns\n -------\n BoltArrayLocal"}
{"_id": "q_15323", "text": "Equation B.8 in Clauset\n\n Given a data set, an xmin value, and an alpha \"scaling parameter\", computes\n the log-likelihood (the value to be maximized)"}
{"_id": "q_15324", "text": "Return the most likely alpha for the data given an xmin"}
{"_id": "q_15325", "text": "Use the maximum L to determine the most likely value of alpha\n\n *alpharangemults* [ 2-tuple ]\n Pair of values indicating multiplicative factors above and below the\n approximate alpha from the MLE alpha to use when determining the\n \"exact\" alpha (by directly maximizing the likelihood function)"}
{"_id": "q_15326", "text": "Use the maximum likelihood to determine the most likely value of alpha\n\n *alpharangemults* [ 2-tuple ]\n Pair of values indicating multiplicative factors above and below the\n approximate alpha from the MLE alpha to use when determining the\n \"exact\" alpha (by directly maximizing the likelihood function)\n *n_alpha* [ int ]\n Number of alpha values to use when measuring. Larger number is more accurate.\n *approximate* [ bool ]\n If False, try to \"zoom-in\" around the MLE alpha and get the exact\n best alpha value within some range around the approximate best\n *verbose* [ bool ]\n *finite* [ bool ]\n Correction for finite data?"}
{"_id": "q_15327", "text": "Plots the power-law-predicted value on the Y-axis against the real\n values along the X-axis. Can be used as a diagnostic of the fit\n quality."}
{"_id": "q_15328", "text": "Use the maximum likelihood estimator for a lognormal distribution to\n produce the best-fit lognormal parameters"}
{"_id": "q_15329", "text": "Configure Yandex Metrika analytics counter.\n\n :param str|unicode ident: Metrika counter ID.\n\n :param dict params: Additional params."}
{"_id": "q_15330", "text": "Generates a list of tags identifying those previously selected.\n\n Returns a list of tuples of the form (<tag name>, <CSS class name>).\n\n Uses the string names rather than the tags themselves in order to work\n with tag lists built from forms not fully submitted."}
{"_id": "q_15331", "text": "Calculate md5 fingerprint.\n\n Shamelessly copied from http://stackoverflow.com/questions/6682815/deriving-an-ssh-fingerprint-from-a-public-key-in-python\n\n For specification, see RFC4716, section 4."}
{"_id": "q_15332", "text": "Calculate sha256 fingerprint."}
{"_id": "q_15333", "text": "Parses ssh-rsa public keys."}
{"_id": "q_15334", "text": "Parses ecdsa-sha public keys."}
{"_id": "q_15335", "text": "Validates SSH public key.\n\n Throws exception for invalid keys. Otherwise returns None.\n\n Populates key_type, bits and bits fields.\n\n For rsa keys, see field \"rsa\" for raw public key data.\n For dsa keys, see field \"dsa\".\n For ecdsa keys, see field \"ecdsa\"."}
{"_id": "q_15336", "text": "Performs a step to establish the context as an initiator.\n\n This method should be called in a loop and fed input tokens from the acceptor, and its\n output tokens should be sent to the acceptor, until this context's :attr:`established`\n attribute is True.\n\n :param input_token: The input token from the acceptor (omit this param or pass None on\n the first call).\n :type input_token: bytes\n :returns: either a byte string with the next token to send to the acceptor,\n or None if there is no further token to send to the acceptor.\n :raises: :exc:`~gssapi.error.GSSException` if there is an error establishing the context."}
{"_id": "q_15337", "text": "The set of mechanisms supported by the credential.\n\n :type: :class:`~gssapi.oids.OIDSet`"}
{"_id": "q_15338", "text": "Stores this credential into a 'credential store'. It can either store this credential in\n the default credential store, or into a specific credential store specified by a set of\n mechanism-specific key-value pairs. The former method of operation requires that the\n underlying GSSAPI implementation supports the ``gss_store_cred`` C function, the latter\n method requires support for the ``gss_store_cred_into`` C function.\n\n :param usage: Optional parameter specifying whether to store the initiator, acceptor, or\n both usages of this credential. Defaults to the value of this credential's\n :attr:`usage` property.\n :type usage: One of :data:`~gssapi.C_INITIATE`, :data:`~gssapi.C_ACCEPT` or\n :data:`~gssapi.C_BOTH`\n :param mech: Optional parameter specifying a single mechanism to store the credential\n element for. If not supplied, all mechanisms' elements in this credential will be\n stored.\n :type mech: :class:`~gssapi.oids.OID`\n :param overwrite: If True, indicates that any credential for the same principal in the\n credential store should be overwritten with this credential.\n :type overwrite: bool\n :param default: If True, this credential should be made available as the default\n credential when stored, for acquisition when no `desired_name` parameter is passed\n to :class:`Credential` or for use when no credential is passed to\n :class:`~gssapi.ctx.InitContext` or :class:`~gssapi.ctx.AcceptContext`. This is only\n an advisory parameter to the GSSAPI implementation.\n :type default: bool\n :param cred_store: Optional dict or list of (key, value) pairs indicating the credential\n store to use. The interpretation of these values will be mechanism-specific.\n :type cred_store: dict, or list of (str, str)\n :returns: A pair of values indicating the set of mechanism OIDs for which credential\n elements were successfully stored, and the usage of the credential that was stored.\n :rtype: tuple(:class:`~gssapi.oids.OIDSet`, int)\n :raises: :exc:`~gssapi.error.GSSException` if there is a problem with storing the\n credential.\n\n :exc:`NotImplementedError` if the underlying GSSAPI implementation does not\n support the ``gss_store_cred`` or ``gss_store_cred_into`` C functions."}
{"_id": "q_15339", "text": "In-place addition\n\n :param addend_mat: A matrix to be added on the Sparse3DMatrix object\n :param axis: The dimension along the addend_mat is added\n :return: Nothing (as it performs in-place operations)"}
{"_id": "q_15340", "text": "Updates the probability of read origin at read level\n\n :param model: Normalization model (1: Gene->Allele->Isoform, 2: Gene->Isoform->Allele, 3: Gene->Isoform*Allele, 4: Gene*Isoform*Allele)\n :return: Nothing (as it performs in-place operations)"}
{"_id": "q_15341", "text": "Writes the posterior probability of read origin\n\n :param filename: File name for output\n :param title: The title of the posterior probability matrix\n :return: Nothing but the method writes a file in EMASE format (PyTables)"}
{"_id": "q_15342", "text": "Prints nonzero rows of the read wanted"}
{"_id": "q_15343", "text": "Imports and runs setup function with given properties."}
{"_id": "q_15344", "text": "Transliterate `data` with the given `scheme_map`. This function is used\n when the source scheme is a Brahmic scheme.\n\n :param data: the data to transliterate\n :param scheme_map: a dict that maps between characters in the old scheme\n and characters in the new scheme"}
{"_id": "q_15345", "text": "Detect the input's transliteration scheme.\n\n :param text: some text data, either a `unicode` or a `str` encoded\n in UTF-8."}
{"_id": "q_15346", "text": "converts an array of integers to utf8 string"}
{"_id": "q_15347", "text": "set the value of delta to reflect the current codepage"}
{"_id": "q_15348", "text": "Handle unrecognised characters."}
{"_id": "q_15349", "text": "Transliterate a Latin character equivalent to Devanagari.\n \n Add VIRAMA for ligatures.\n Convert standalone to dependent vowels."}
{"_id": "q_15350", "text": "Returns a file handle which is used to record audio"}
{"_id": "q_15351", "text": "Returns Normalize CSS file.\n Included in HTML5 Boilerplate."}
{"_id": "q_15352", "text": "Returns Font Awesome CSS file.\n TEMPLATE_DEBUG returns full file, otherwise returns minified file."}
{"_id": "q_15353", "text": "Returns Modernizr JavaScript file according to version number.\n TEMPLATE_DEBUG returns full file, otherwise returns minified file.\n Included in HTML5 Boilerplate."}
{"_id": "q_15354", "text": "Returns jQuery JavaScript file according to version number.\n TEMPLATE_DEBUG returns full file, otherwise returns minified file from Google CDN with local fallback.\n Included in HTML5 Boilerplate."}
{"_id": "q_15355", "text": "Returns the jQuery UI plugin file according to version number.\n TEMPLATE_DEBUG returns full file, otherwise returns minified file from Google CDN with local fallback."}
{"_id": "q_15356", "text": "Returns the jQuery DataTables plugin file according to version number.\n TEMPLATE_DEBUG returns full file, otherwise returns minified file."}
{"_id": "q_15357", "text": "Returns the jQuery DataTables CSS file according to version number."}
{"_id": "q_15358", "text": "Returns the jQuery Smooth Scroll plugin file according to version number.\n TEMPLATE_DEBUG returns full file, otherwise returns minified file."}
{"_id": "q_15359", "text": "Returns Google Analytics asynchronous snippet.\n Use DJFRONTEND_GA_SETDOMAINNAME to set domain for multiple, or cross-domain tracking.\n Set DJFRONTEND_GA_SETALLOWLINKER to use _setAllowLinker method on target site for cross-domain tracking.\n Included in HTML5 Boilerplate."}
{"_id": "q_15360", "text": "Render CodeMirrorTextarea"}
{"_id": "q_15361", "text": "Load and generate ``num`` number of top-level rules from the specified grammar.\n\n :param list grammar: The grammar file to load and generate data from\n :param int num: The number of times to generate data\n :param output: The output destination (an open, writable stream-type object. default=``sys.stdout``)\n :param int max_recursion: The maximum reference-recursion when generating data (default=``10``)\n :param int seed: The seed to initialize the PRNG with. If None, will not initialize it."}
{"_id": "q_15362", "text": "Build the ``Quote`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."}
{"_id": "q_15363", "text": "Make the list of verbs into present participles\n\n E.g.:\n\n empower -> empowering\n drive -> driving"}
{"_id": "q_15364", "text": "Return a dict with the data of an encoding Namelist file.\n\n This is an implementation detail of readNamelist."}
{"_id": "q_15365", "text": "Returns list of CharsetInfo about supported orthographies"}
{"_id": "q_15366", "text": "Generates header for oauth2"}
{"_id": "q_15367", "text": "Parse oauth2 access"}
{"_id": "q_15368", "text": "Refresh access token"}
{"_id": "q_15369", "text": "Write json data into a file"}
{"_id": "q_15370", "text": "Get data from json file"}
{"_id": "q_15371", "text": "Get data from .yml file"}
{"_id": "q_15372", "text": "Write data into a .yml file"}
{"_id": "q_15373", "text": "Generate auth tokens tied to user and specified purpose.\n\n The hash expires at midnight on the minute of now + minutes_valid, such\n that when minutes_valid=1 you get *at least* 1 minute to use the token."}
{"_id": "q_15374", "text": "Return specific time an auth_hash will expire."}
{"_id": "q_15375", "text": "Return login token info for given user."}
{"_id": "q_15376", "text": "Serialize user as per Meteor accounts serialization."}
{"_id": "q_15377", "text": "De-serialize user profile fields into concrete model fields."}
{"_id": "q_15378", "text": "Update user data."}
{"_id": "q_15379", "text": "Consistent fail so we don't provide attackers with valuable info."}
{"_id": "q_15380", "text": "Resolve and validate auth token, returns user object."}
{"_id": "q_15381", "text": "Check request, return False if using SSL or local connection."}
{"_id": "q_15382", "text": "Retrieve username from user selector."}
{"_id": "q_15383", "text": "Register a new user account."}
{"_id": "q_15384", "text": "Login a user."}
{"_id": "q_15385", "text": "Logout a user."}
{"_id": "q_15386", "text": "Login either with resume token or password."}
{"_id": "q_15387", "text": "Authenticate using credentials supplied in params."}
{"_id": "q_15388", "text": "Login with existing resume token.\n\n Either the token is valid and the user is logged in, or the token is\n invalid and a non-specific ValueError(\"Login failed.\") exception is\n raised - don't be tempted to give clues to attackers as to why their\n logins are invalid!"}
{"_id": "q_15389", "text": "Change password."}
{"_id": "q_15390", "text": "Request password reset email."}
{"_id": "q_15391", "text": "Reset password using a token received in email then logs user in."}
{"_id": "q_15392", "text": "Recursive dict merge.\n\n Recursively merges dict's. not just simple lft['key'] = rgt['key'], if\n both lft and rgt have a key who's value is a dict then dict_merge is\n called on both values and the result stored in the returned dictionary."}
{"_id": "q_15393", "text": "Return an Alea ID for the given object."}
{"_id": "q_15394", "text": "Return Alea ID mapping for all given ids of specified model."}
{"_id": "q_15395", "text": "Return an object ID for the given meteor_id."}
{"_id": "q_15396", "text": "Return all object IDs for the given meteor_ids."}
{"_id": "q_15397", "text": "Return an object for the given meteor_id."}
{"_id": "q_15398", "text": "Set default value for AleaIdField."}
{"_id": "q_15399", "text": "Use schema_editor to apply any forward changes."}
{"_id": "q_15400", "text": "Use schema_editor to apply any reverse changes."}
{"_id": "q_15401", "text": "Update command options."}
{"_id": "q_15402", "text": "Perform build."}
{"_id": "q_15403", "text": "Convert a UNIX-style path into platform specific directory spec."}
{"_id": "q_15404", "text": "Seed internal state from supplied values."}
{"_id": "q_15405", "text": "Return internal state, useful for testing."}
{"_id": "q_15406", "text": "Clear out cache for api_path_map."}
{"_id": "q_15407", "text": "Validate arguments to be supplied to func."}
{"_id": "q_15408", "text": "Handle closing of websocket connection."}
{"_id": "q_15409", "text": "Yield DDP messages from a raw WebSocket message."}
{"_id": "q_15410", "text": "Dispatch msg to appropriate recv_foo handler."}
{"_id": "q_15411", "text": "DDP sub handler."}
{"_id": "q_15412", "text": "Inform client that WebSocket service is available."}
{"_id": "q_15413", "text": "Spawn greenlets for handling websockets and PostgreSQL calls."}
{"_id": "q_15414", "text": "Stop all green threads."}
{"_id": "q_15415", "text": "Spawn sub tasks, wait for stop signal."}
{"_id": "q_15416", "text": "Poll DB socket and process async tasks."}
{"_id": "q_15417", "text": "Patch threading and psycopg2 modules for green threads."}
{"_id": "q_15418", "text": "Generate a new ID, optionally using namespace of given `name`."}
{"_id": "q_15419", "text": "Import all `ddp` submodules from `settings.INSTALLED_APPS`."}
{"_id": "q_15420", "text": "Return an error dict for self.args and kwargs."}
{"_id": "q_15421", "text": "Get attribute, creating if required using specified factory."}
{"_id": "q_15422", "text": "Emit a formatted log record via DDP."}
{"_id": "q_15423", "text": "Middleware which selects a renderer for a given request then renders\n a handler's data to a `aiohttp.web.Response`."}
{"_id": "q_15424", "text": "Run an `aiohttp.web.Application` using gunicorn.\n\n :param app: The app to run.\n :param str app_uri: Import path to `app`. Takes the form\n ``$(MODULE_NAME):$(VARIABLE_NAME)``.\n The module name can be a full dotted path.\n The variable name refers to the `aiohttp.web.Application` instance.\n This argument is required if ``reload=True``.\n :param str host: Hostname to listen on.\n :param int port: Port of the server.\n :param bool reload: Whether to reload the server on a code change.\n If not set, will take the same value as ``app.debug``.\n **EXPERIMENTAL**.\n :param \\*\\*kwargs: Extra configuration options to set on the\n ``GunicornApp's`` config object."}
{"_id": "q_15425", "text": "Sends an APNS notification to one or more registration_ids.\n The registration_ids argument needs to be a list.\n\n Note that if set alert should always be a string. If it is not set,\n it won't be included in the notification. You will need to pass None\n to this for silent notifications."}
{"_id": "q_15426", "text": "Queries the APNS server for id's that are no longer active since\n the last fetch"}
{"_id": "q_15427", "text": "Standalone method to send a single gcm notification"}
{"_id": "q_15428", "text": "Sends a GCM message with the given content type"}
{"_id": "q_15429", "text": "Returns the instance of the given module location."}
{"_id": "q_15430", "text": "Fast forward selection algorithm\n\n Parameters\n ----------\n scenarios : numpy.array\n Contain the input scenarios.\n The columns representing the individual scenarios\n The rows are the vector of values in each scenario\n number_of_reduced_scenarios : int\n final number of scenarios that\n the reduced scenarios contain.\n If number of scenarios is equal to or greater than the input scenarios,\n then the original input scenario set is returned as the reduced set\n probability : numpy.array (default=None)\n probability is a numpy.array with length equal to number of scenarios.\n if probability is not defined, all scenarios get equal probabilities\n\n Returns\n -------\n reduced_scenarios : numpy.array\n reduced set of scenarios\n reduced_probability : numpy.array\n probability of reduced set of scenarios\n reduced_scenario_set : list\n scenario numbers of reduced set of scenarios\n\n Example\n -------\n Scenario reduction can be performed as shown below::\n\n >>> import numpy as np\n >>> import random\n >>> scenarios = np.array([[random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)],\n >>> [random.randint(500,1000) for i in range(0,24)]])\n >>> import psst.scenario\n >>> reduced_scenarios, reduced_probability, reduced_scenario_numbers = psst.scenario.fast_forward_selection(scenarios, probability, 2)"}
{"_id": "q_15431", "text": "Shorthand for creating a Giphy api wrapper with the given api key\n and then calling the search method. Note that this will return a generator"}
{"_id": "q_15432", "text": "Shorthand for creating a Giphy api wrapper with the given api key\n and then calling the translate method."}
{"_id": "q_15433", "text": "Shorthand for creating a Giphy api wrapper with the given api key\n and then calling the trending method. Note that this will return\n a generator"}
{"_id": "q_15434", "text": "Shorthand for creating a Giphy api wrapper with the given api key\n and then calling the gif method."}
{"_id": "q_15435", "text": "Does a normalization of sorts on image type data so that values\n that should be integers are converted from strings"}
{"_id": "q_15436", "text": "Retrieve a single image that represents a translation of a term or\n phrase into an animated gif. Punctuation is ignored. By default, this\n will perform a `term` translation. If you want to translate by phrase,\n use the `phrase` keyword argument.\n\n :param term: Search term or terms\n :type term: string\n :param phrase: Search phrase\n :type phrase: string\n :param strict: Whether an exception should be raised when no results\n :type strict: boolean\n :param rating: limit results to those rated (y,g, pg, pg-13 or r).\n :type rating: string"}
{"_id": "q_15437", "text": "Retrieves a specific gif from giphy based on unique id\n\n :param gif_id: Unique giphy gif ID\n :type gif_id: string\n :param strict: Whether an exception should be raised when no results\n :type strict: boolean"}
{"_id": "q_15438", "text": "Uploads a gif from the filesystem to Giphy.\n\n :param tags: Tags to apply to the uploaded image\n :type tags: list\n :param file_path: Path at which the image can be found\n :type file_path: string\n :param username: Your channel username if not using public API key"}
{"_id": "q_15439", "text": "Turns distances into RBF values.\n\n Parameters\n ----------\n X : array\n The raw pairwise distances.\n\n Returns\n -------\n X_rbf : array of same shape as X\n The distances in X passed through the RBF kernel."}
{"_id": "q_15440", "text": "Learn the linear transformation to clipped eigenvalues.\n\n Note that if min_eig isn't zero and any of the original eigenvalues\n were exactly zero, this will leave those eigenvalues as zero.\n\n Parameters\n ----------\n X : array, shape [n, n]\n The *symmetric* input similarities. If X is asymmetric, it will be\n treated as if it were symmetric based on its lower-triangular part."}
{"_id": "q_15441", "text": "Learn the transformation to shifted eigenvalues. Only depends\n on the input dimension.\n\n Parameters\n ----------\n X : array, shape [n, n]\n The *symmetric* input similarities."}
{"_id": "q_15442", "text": "Transforms X according to the linear transformation corresponding to\n shifting the input eigenvalues to all be at least ``self.min_eig``.\n\n Parameters\n ----------\n X : array, shape [n_test, n]\n The test similarities to training points.\n\n Returns\n -------\n Xt : array, shape [n_test, n]\n The transformed test similarities to training points. Only different\n from X if X is the training data."}
{"_id": "q_15443", "text": "Picks the elements of the basis to use for the given data.\n\n Only depends on the dimension of X. If it's more convenient, you can\n pass a single integer for X, which is the dimension to use.\n\n Parameters\n ----------\n X : an integer, a :class:`Features` instance, or a list of bag features\n The input data, or just its dimension, since only the dimension is\n needed here."}
{"_id": "q_15444", "text": "Transform a list of bag features into its projection series\n representation.\n\n Parameters\n ----------\n X : :class:`skl_groups.features.Features` or list of bag feature arrays\n New data to transform. The data should all lie in [0, 1];\n use :class:`skl_groups.preprocessing.BagMinMaxScaler` if not.\n\n Returns\n -------\n X_new : integer array, shape ``[len(X), dim_]``\n X transformed into the new space."}
{"_id": "q_15445", "text": "Transform the stacked points.\n\n Parameters\n ----------\n X : :class:`Features` or list of bag feature arrays\n New data to transform.\n\n any other keyword argument :\n Passed on as keyword arguments to the transformer's ``transform()``.\n\n Returns\n -------\n X_new : :class:`Features`\n Transformed features."}
{"_id": "q_15446", "text": "Scaling features of X according to feature_range.\n\n Parameters\n ----------\n X : array-like with shape [n_samples, n_features]\n Input data that will be transformed."}
{"_id": "q_15447", "text": "Undo the scaling of X according to feature_range.\n\n Note that if truncate is true, any truncated points will not\n be restored exactly.\n\n Parameters\n ----------\n X : array-like with shape [n_samples, n_features]\n Input data that will be transformed."}
{"_id": "q_15448", "text": "Transform a list of bag features into its bag-of-words representation.\n\n Parameters\n ----------\n X : :class:`skl_groups.features.Features` or list of bag feature arrays\n New data to transform.\n\n Returns\n -------\n X_new : integer array, shape [len(X), kmeans.n_clusters]\n X transformed into the new space."}
{"_id": "q_15449", "text": "Checks whether the array is either integral or boolean."}
{"_id": "q_15450", "text": "Returns argument as an integer array, converting floats if convertible.\n Raises ValueError if it's a float array with nonintegral values."}
{"_id": "q_15451", "text": "Builds FLANN indices for each bag."}
{"_id": "q_15452", "text": "Estimates the linear inner product \\int p q between two distributions,\n based on kNN distances."}
{"_id": "q_15453", "text": "Estimates \\int p^2 based on kNN distances.\n\n In here because it's used in the l2 distance, above.\n\n Returns array of shape (num_Ks,)."}
{"_id": "q_15454", "text": "Topologically sort a DAG, represented by a dict of child => set of parents.\n The dependency dict is destroyed during operation.\n\n Uses the Kahn algorithm: http://en.wikipedia.org/wiki/Topological_sorting\n Not a particularly good implementation, but we're just running it on tiny\n graphs."}
{"_id": "q_15455", "text": "Ks as an array and type-checked."}
{"_id": "q_15456", "text": "The dictionary of arguments to give to FLANN."}
{"_id": "q_15457", "text": "Sets up for divergence estimation \"from\" new data \"to\" X.\n Builds FLANN indices for each bag, and maybe gets within-bag distances.\n\n Parameters\n ----------\n X : list of arrays or :class:`skl_groups.features.Features`\n The bags to search \"to\".\n\n get_rhos : boolean, optional, default False\n Compute within-bag distances :attr:`rhos_`. These are only needed\n for some divergence functions or if do_sym is passed, and they'll\n be computed (and saved) during :meth:`transform` if they're not\n computed here.\n\n If you're using Jensen-Shannon divergence, a higher max_K may\n be needed once it sees the number of points in the transformed bags,\n so the computation here might be wasted."}
{"_id": "q_15458", "text": "Copies the Feature object. Makes a copy of the features array.\n\n Parameters\n ----------\n stack : boolean, optional, default False\n Whether to stack the copy if this one is unstacked.\n\n copy_meta : boolean, optional, default False\n Also copy the metadata. If False, metadata in both points to the\n same object."}
{"_id": "q_15459", "text": "Make a Features object with no metadata; points to the same features."}
{"_id": "q_15460", "text": "Specify the data to which kernel values should be computed.\n\n Parameters\n ----------\n X : list of arrays or :class:`skl_groups.features.Features`\n The bags to compute \"to\"."}
{"_id": "q_15461", "text": "Transform a list of bag features into a matrix of its mean features.\n\n Parameters\n ----------\n X : :class:`skl_groups.features.Features` or list of bag feature arrays\n Data to transform.\n\n Returns\n -------\n X_new : array, shape ``[len(X), X.dim]``\n X transformed into its means."}
{"_id": "q_15462", "text": "Start listening to the server"}
{"_id": "q_15463", "text": "Connect to the server\n\n :raise ConnectionError: If socket cannot establish a connection"}
{"_id": "q_15464", "text": "Read a line from the server. Data is read from the socket until a character ``\\n`` is found\n\n :return: the read line\n :rtype: string"}
{"_id": "q_15465", "text": "Read a block from the server. Lines are read until a character ``.`` is found\n\n :return: the read block\n :rtype: string"}
{"_id": "q_15466", "text": "Read a block and return the result as XML\n\n :return: block as xml\n :rtype: xml.etree.ElementTree"}
{"_id": "q_15467", "text": "Prepares the extension element for access control\n Extension element is the optional parameter for the YouTubeVideoEntry\n We use extension element to modify access control settings\n\n Returns:\n tuple of extension elements"}
{"_id": "q_15468", "text": "Browser based upload\n Creates the video entry and meta data to initiate a browser upload\n\n Authentication is needed\n\n Params:\n title: string\n description: string\n keywords: comma separated string\n developer_tags: tuple\n\n Return:\n dict contains post_url and youtube_token. i.e { 'post_url': post_url, 'youtube_token': youtube_token }\n\n Raises:\n ApiError: on no authentication"}
{"_id": "q_15469", "text": "Updates the video\n\n Authentication is required\n\n Params:\n entry: video entry fetch via 'fetch_video()'\n title: string\n description: string\n keywords: string\n\n Returns:\n a video entry on success\n None otherwise"}
{"_id": "q_15470", "text": "Controls the availability of the video. Newly uploaded videos are in processing stage.\n And others might be rejected.\n\n Returns:\n json response"}
{"_id": "q_15471", "text": "Displays a video in an embed player"}
{"_id": "q_15472", "text": "list of videos of a user\n if username is not set, shows the currently logged in user"}
{"_id": "q_15473", "text": "direct upload method\n starts with uploading video to our server\n then sends the video file to youtube\n\n param:\n (optional) `only_data`: if set, a json response is returned i.e. {'video_id':'124weg'}\n\n return:\n if `only_data` set, a json object.\n otherwise redirects to the video display page"}
{"_id": "q_15474", "text": "Displays an upload form\n Creates upload url and token from youtube api and uses them on the form"}
{"_id": "q_15475", "text": "The upload result page\n Youtube will redirect to this page after upload is finished\n Saves the video data and redirects to the next page\n\n Params:\n status: status of the upload (200 for success)\n id: id number of the video"}
{"_id": "q_15476", "text": "Connects to Youtube Api and retrieves the video entry object\n\n Return:\n gdata.youtube.YouTubeVideoEntry"}
{"_id": "q_15477", "text": "Generic method for a resource's Update Metadata endpoint.\n\n Example endpoints:\n\n * `Update Device Metadata <https://m2x.att.com/developer/documentation/v2/device#Update-Device-Metadata>`_\n * `Update Distribution Metadata <https://m2x.att.com/developer/documentation/v2/distribution#Update-Distribution-Metadata>`_\n * `Update Collection Metadata <https://m2x.att.com/developer/documentation/v2/collections#Update-Collection-Metadata>`_\n\n :param params: The metadata being updated\n\n :return: The API response, see M2X API docs for details\n :rtype: dict\n\n :raises: :class:`~requests.exceptions.HTTPError` if an error occurs when sending the HTTP request"}
{"_id": "q_15478", "text": "Generic method for a resource's Update Metadata Field endpoint.\n\n Example endpoints:\n\n * `Update Device Metadata Field <https://m2x.att.com/developer/documentation/v2/device#Update-Device-Metadata-Field>`_\n * `Update Distribution Metadata Field <https://m2x.att.com/developer/documentation/v2/distribution#Update-Distribution-Metadata-Field>`_\n * `Update Collection Metadata Field <https://m2x.att.com/developer/documentation/v2/collections#Update-Collection-Metadata-Field>`_\n\n :param field: The metadata field to be updated\n :param value: The value to update\n\n :return: The API response, see M2X API docs for details\n :rtype: dict\n\n :raises: :class:`~requests.exceptions.HTTPError` if an error occurs when sending the HTTP request"}
{"_id": "q_15479", "text": "Get information about number of changesets, blocks and mapping days of a\n user, using both the OSM API and the Mapbox comments API."}
{"_id": "q_15480", "text": "Return a dictionary with id, user, user_id, bounds, date of creation\n and all the tags of the changeset.\n\n Args:\n changeset: the XML string of the changeset."}
{"_id": "q_15481", "text": "Get the metadata of a changeset using the OSM API and return it as a XML\n ElementTree.\n\n Args:\n changeset: the id of the changeset."}
{"_id": "q_15482", "text": "Read the first feature from the geojson and return it as a Polygon\n object."}
{"_id": "q_15483", "text": "Filter the changesets that intersects with the geojson geometry."}
{"_id": "q_15484", "text": "Set the fields of this class with the metadata of the analysed\n changeset."}
{"_id": "q_15485", "text": "Add suspicion reason and set the suspicious flag."}
{"_id": "q_15486", "text": "Execute the count and verify_words methods."}
{"_id": "q_15487", "text": "Verify the fields source, imagery_used and comment of the changeset\n for some suspect words."}
{"_id": "q_15488", "text": "Verify if the software used in the changeset is a powerfull_editor."}
{"_id": "q_15489", "text": "Count the number of elements created, modified and deleted by the\n changeset and analyses if it is a possible import, mass modification or\n a mass deletion."}
{"_id": "q_15490", "text": "Load a list of trees from a Newick formatted string.\n\n :param s: Newick formatted string.\n :param strip_comments: Flag signaling whether to strip comments enclosed in square \\\n brackets.\n :param kw: Keyword arguments are passed through to `Node.create`.\n :return: List of Node objects."}
{"_id": "q_15491", "text": "Load a list of trees from a Newick formatted file.\n\n :param fname: file path.\n :param strip_comments: Flag signaling whether to strip comments enclosed in square \\\n brackets.\n :param kw: Keyword arguments are passed through to `Node.create`.\n :return: List of Node objects."}
{"_id": "q_15492", "text": "Create a new `Node` object.\n\n :param name: Node label.\n :param length: Branch length from the new node to its parent.\n :param descendants: list of descendants or `None`.\n :param kw: Additional keyword arguments are passed through to `Node.__init__`.\n :return: `Node` instance."}
{"_id": "q_15493", "text": "The representation of the Node in Newick format."}
{"_id": "q_15494", "text": "Return a unicode string representing a tree in ASCII art fashion.\n\n :param strict: Use ASCII characters strictly (for the tree symbols).\n :param show_internal: Show labels of internal nodes.\n :return: unicode string\n\n >>> node = loads('((A,B)C,((D,E)F,G,H)I)J;')[0]\n >>> print(node.ascii_art(show_internal=False, strict=True))\n /-A\n /---|\n | \\-B\n ----| /-D\n | /---|\n | | \\-E\n \\---|\n |-G\n \\-H"}
{"_id": "q_15495", "text": "Gets the specified node by name.\n\n :return: Node or None if name does not exist in tree"}
{"_id": "q_15496", "text": "Remove all those nodes in the specified list, or if inverse=True,\n remove all those nodes not in the specified list. The specified nodes\n must be leaves and distinct from the root node.\n\n :param nodes: A list of Node objects\n :param inverse: Specifies whether to remove nodes in the list or not\\\n in the list."}
{"_id": "q_15497", "text": "Set the name of all non-leaf nodes in the subtree to None."}
{"_id": "q_15498", "text": "Get a stream URI from a playlist URI, ``uri``.\n Unwraps nested playlists until something that's not a playlist is found or\n the ``timeout`` is reached."}
{"_id": "q_15499", "text": "Raises an exception if the given app setting is not defined."}
{"_id": "q_15500", "text": "Returns the value of the argument with the given name.\n\n If default is not provided, the argument is considered to be\n required, and we throw an HTTP 400 exception if it is missing.\n\n If the argument appears in the url more than once, we return the\n last value.\n\n The returned value is always unicode."}
{"_id": "q_15501", "text": "Returns a list of the arguments with the given name.\n\n If the argument is not present, returns an empty list.\n\n The returned values are always unicode."}
{"_id": "q_15502", "text": "Obsolete - catches exceptions from the wrapped function.\n\n This function is unnecessary since Tornado 1.1."}
{"_id": "q_15503", "text": "Gets the value of the cookie with the given name, else default."}
{"_id": "q_15504", "text": "Deletes the cookie with the given name."}
{"_id": "q_15505", "text": "Gets the OAuth authorized user and access token on callback.\n\n This method should be called from the handler for your registered\n OAuth Callback URL to complete the registration process. We call\n callback with the authenticated user, which in addition to standard\n attributes like 'name' includes the 'access_key' attribute, which\n contains the OAuth access you can use to make authorized requests\n to this service on behalf of the user."}
{"_id": "q_15506", "text": "Returns the OAuth parameters as a dict for the given request.\n\n parameters should include all POST arguments and query string arguments\n that will be sent with the request."}
{"_id": "q_15507", "text": "Authenticates and authorizes for the given Google resource.\n\n Some of the available resources are:\n\n * Gmail Contacts - http://www.google.com/m8/feeds/\n * Calendar - http://www.google.com/calendar/feeds/\n * Finance - http://finance.google.com/finance/feeds/\n\n You can authorize multiple resources by separating the resource\n URLs with a space."}
{"_id": "q_15508", "text": "Makes a Facebook API REST request.\n\n We automatically include the Facebook API key and signature, but\n it is the caller's responsibility to include 'session_key' and any\n other required arguments to the method.\n\n The available Facebook methods are documented here:\n http://wiki.developers.facebook.com/index.php/API\n\n Here is an example for the stream.get() method::\n\n class MainHandler(tornado.web.RequestHandler,\n tornado.auth.FacebookMixin):\n @tornado.web.authenticated\n @tornado.web.asynchronous\n def get(self):\n self.facebook_request(\n method=\"stream.get\",\n callback=self.async_callback(self._on_stream),\n session_key=self.current_user[\"session_key\"])\n\n def _on_stream(self, stream):\n if stream is None:\n # Not authorized to read the stream yet?\n self.redirect(self.authorize_redirect(\"read_stream\"))\n return\n self.render(\"stream.html\", stream=stream)"}
{"_id": "q_15509", "text": "Handles the login for the Facebook user, returning a user object.\n\n Example usage::\n\n class FacebookGraphLoginHandler(LoginHandler, tornado.auth.FacebookGraphMixin):\n @tornado.web.asynchronous\n def get(self):\n if self.get_argument(\"code\", False):\n self.get_authenticated_user(\n redirect_uri='/auth/facebookgraph/',\n client_id=self.settings[\"facebook_api_key\"],\n client_secret=self.settings[\"facebook_secret\"],\n code=self.get_argument(\"code\"),\n callback=self.async_callback(\n self._on_login))\n return\n self.authorize_redirect(redirect_uri='/auth/facebookgraph/',\n client_id=self.settings[\"facebook_api_key\"],\n extra_params={\"scope\": \"read_stream,offline_access\"})\n\n def _on_login(self, user):\n log.error(user)\n self.finish()"}
{"_id": "q_15510", "text": "Concatenate url and argument dictionary regardless of whether\n url has existing query parameters.\n\n >>> url_concat(\"http://example.com/foo?a=b\", dict(c=\"d\"))\n 'http://example.com/foo?a=b&c=d'"}
{"_id": "q_15511", "text": "Parse a Content-type like header.\n\n Return the main content-type and a dictionary of options."}
{"_id": "q_15512", "text": "Adds a new value for the given key."}
{"_id": "q_15513", "text": "Returns all values for the given header as a list."}
{"_id": "q_15514", "text": "Updates the dictionary with a single header line.\n\n >>> h = HTTPHeaders()\n >>> h.parse_line(\"Content-Type: text/html\")\n >>> h.get('content-type')\n 'text/html'"}
{"_id": "q_15515", "text": "Returns a dictionary from HTTP header text.\n\n >>> h = HTTPHeaders.parse(\"Content-Type: text/html\\\\r\\\\nContent-Length: 42\\\\r\\\\n\")\n >>> sorted(h.iteritems())\n [('Content-Length', '42'), ('Content-Type', 'text/html')]"}
{"_id": "q_15516", "text": "Converts a name to Http-Header-Case.\n\n >>> HTTPHeaders._normalize_name(\"coNtent-TYPE\")\n 'Content-Type'"}
{"_id": "q_15517", "text": "Converts a string argument to a byte string.\n\n If the argument is already a byte string or None, it is returned unchanged.\n Otherwise it must be a unicode string and is encoded as utf8."}
{"_id": "q_15518", "text": "Converts a string argument to a unicode string.\n\n If the argument is already a unicode string or None, it is returned\n unchanged. Otherwise it must be a byte string and is decoded as utf8."}
{"_id": "q_15519", "text": "Converts a string argument to a subclass of basestring.\n\n In python2, byte and unicode strings are mostly interchangeable,\n so functions that deal with a user-supplied argument in combination\n with ascii string constants can use either and should return the type\n the user supplied. In python3, the two types are not interchangeable,\n so this method is needed to convert byte strings to unicode."}
{"_id": "q_15520", "text": "Walks a simple data structure, converting byte strings to unicode.\n\n Supports lists, tuples, and dictionaries."}
{"_id": "q_15521", "text": "Make sure that other installed plugins don't affect the same\n keyword argument and check if metadata is available."}
{"_id": "q_15522", "text": "Grow this Pantheon by multiplying Gods."}
{"_id": "q_15523", "text": "This model recognizes that sex chromosomes don't always line up with\n gender. Assign M, F, or NB according to the probabilities in p_gender."}
{"_id": "q_15524", "text": "Produce two gametes, an egg and a sperm, from input Gods. Combine\n them to produce a genome a la sexual reproduction. Assign divinity\n according to probabilities in p_divinity. The more divine the parents,\n the more divine their offspring."}
{"_id": "q_15525", "text": "Extract 23 'chromosomes' aka words from 'gene pool' aka list of tokens\n by searching the list of tokens for words that are related to the given\n egg_or_sperm_word."}
{"_id": "q_15526", "text": "Print parents' names and epithets."}
{"_id": "q_15527", "text": "Returns all the information regarding a specific stage run\n\n See the `Go stage instance documentation`__ for examples.\n\n .. __: http://api.go.cd/current/#get-stage-instance\n\n Args:\n counter (int): The stage instance to fetch.\n If falsey returns the latest stage instance from :meth:`history`.\n pipeline_counter (int): The pipeline instance for which to fetch\n the stage. If falsey returns the latest pipeline instance.\n\n Returns:\n Response: :class:`gocd.api.response.Response` object"}
{"_id": "q_15528", "text": "Performs a HTTP request to the Go server\n\n Args:\n path (str): The full path on the Go server to request.\n This includes any query string attributes.\n data (str, dict, bool, optional): If any data is present this\n request will become a POST request.\n headers (dict, optional): Headers to set for this particular\n request\n\n Raises:\n HTTPError: when the HTTP request fails.\n\n Returns:\n file like object: The response from a\n :func:`urllib2.urlopen` call"}
{"_id": "q_15529", "text": "Make the request appear to be coming from a browser\n\n This is to interact with older parts of Go that don't have a\n proper API call to be made. What will be done:\n\n 1. If no response is passed in, a call to `go/api/pipelines.xml` is\n made to get a valid session\n 2. `JSESSIONID` will be populated from this request\n 3. A request to `go/pipelines` will be made so the\n `authenticity_token` (CSRF) can be extracted. It will then\n silently be injected into `post_args` on any POST calls that\n don't start with `go/api` from this point.\n\n Args:\n response: a :class:`Response` object from a previously successful\n API call. So we won't have to query `go/api/pipelines.xml`\n unnecessarily.\n\n Raises:\n HTTPError: when the HTTP request fails.\n AuthenticationFailed: when failing to get the `session_id`\n or the `authenticity_token`."}
{"_id": "q_15530", "text": "Return a dict as a list of lists.\n\n >>> flatten({\"a\": \"b\"})\n [['a', 'b']]\n >>> flatten({\"a\": [1, 2, 3]})\n [['a', [1, 2, 3]]]\n >>> flatten({\"a\": {\"b\": \"c\"}})\n [['a', 'b', 'c']]\n >>> flatten({\"a\": {\"b\": {\"c\": \"e\"}}})\n [['a', 'b', 'c', 'e']]\n >>> flatten({\"a\": {\"b\": \"c\", \"d\": \"e\"}})\n [['a', 'b', 'c'], ['a', 'd', 'e']]\n >>> flatten({\"a\": {\"b\": \"c\", \"d\": \"e\"}, \"b\": {\"c\": \"d\"}})\n [['a', 'b', 'c'], ['a', 'd', 'e'], ['b', 'c', 'd']]"}
{"_id": "q_15531", "text": "Returns all the information regarding a specific pipeline run\n\n See the `Go pipeline instance documentation`__ for examples.\n\n .. __: http://api.go.cd/current/#get-pipeline-instance\n\n Args:\n counter (int): The pipeline instance to fetch.\n If falsey returns the latest pipeline instance from :meth:`history`.\n\n Returns:\n Response: :class:`gocd.api.response.Response` object"}
{"_id": "q_15532", "text": "Schedule a pipeline run\n\n Aliased as :meth:`run`, :meth:`schedule`, and :meth:`trigger`.\n\n Args:\n variables (dict, optional): Variables to set/override\n secure_variables (dict, optional): Secure variables to set/override\n materials (dict, optional): Material revisions to be used for\n this pipeline run. The exact format for this is a bit iffy,\n have a look at the official\n `Go pipeline scheduling documentation`__ or inspect a call\n from triggering manually in the UI.\n return_new_instance (bool): Returns a :meth:`history` compatible\n response for the newly scheduled instance. This is primarily so\n users easily can get the new instance number. **Note:** This is done\n in a very naive way, it just checks that the instance number is\n higher than before the pipeline was triggered.\n backoff_time (float): How long between each check for\n :arg:`return_new_instance`.\n\n .. __: http://api.go.cd/current/#scheduling-pipelines\n\n Returns:\n Response: :class:`gocd.api.response.Response` object"}
{"_id": "q_15533", "text": "Yields the output and metadata from all jobs in the pipeline\n\n Args:\n instance: The result of a :meth:`instance` call, if not supplied\n the latest of the pipeline will be used.\n\n Yields:\n tuple: (metadata (dict), output (str)).\n\n metadata contains:\n - pipeline\n - pipeline_counter\n - stage\n - stage_counter\n - job\n - job_result"}
{"_id": "q_15534", "text": "Delete template config for specified template name.\n\n .. __: https://api.go.cd/current/#delete-a-template\n\n Returns:\n Response: :class:`gocd.api.response.Response` object"}
{"_id": "q_15535", "text": "Returns a set of all pipelines from the last response\n\n Returns:\n set: Response success: all the pipelines available in the response\n Response failure: an empty set"}
{"_id": "q_15536", "text": "Gets an artifact directory by its path.\n\n See the `Go artifact directory documentation`__ for example responses.\n\n .. __: http://api.go.cd/current/#get-artifact-directory\n\n .. note::\n Getting a directory relies on Go creating a zip file of the\n directory in question. Because of this Go will zip the file in\n the background and return a 202 Accepted response. It's then up\n to the client to check again later and get the final file.\n\n To work with normal assumptions this :meth:`get_directory` will\n retry itself up to ``timeout`` seconds to get a 200 response to\n return. At that point it will then return the response as is, no\n matter whether it's still 202 or 200. The retry is done with an\n exponential backoff with a max value between retries. See the\n ``backoff`` and ``max_wait`` variables.\n\n If you want to handle the retry logic yourself then use :meth:`get`\n and add '.zip' as a suffix on the directory.\n\n Args:\n path_to_directory (str): The path to the directory to get.\n It can be nested eg ``target/dist.zip``\n timeout (int): How many seconds we will wait in total for a\n successful response from Go when we're receiving 202\n backoff (float): The initial value used for backoff, raises\n exponentially until it reaches ``max_wait``\n max_wait (int): The max time between retries\n\n Returns:\n Response: :class:`gocd.api.response.Response` object\n A successful response is a zip-file."}
{"_id": "q_15537", "text": "Based on the matching strategy, the origin, and optionally the requested method, returns a tuple of the policy name and the origin to pass back."}
{"_id": "q_15538", "text": "Write a GRO file.\n\n Parameters\n ----------\n outfile\n The stream to write in.\n title\n The title of the GRO file. Must be a single line.\n atoms\n An instance of Structure containing the atoms to write.\n box\n The periodic box as a 3x3 matrix."}
{"_id": "q_15539", "text": "Write a PDB file.\n\n Parameters\n ----------\n outfile\n The stream to write in.\n title\n The title of the PDB file. Must be a single line.\n atoms\n An instance of Structure containing the atoms to write.\n box\n The periodic box as a 3x3 matrix."}
{"_id": "q_15540", "text": "Adapt the size of the box to accommodate the lipids.\n\n The PBC is changed **in place**."}
{"_id": "q_15541", "text": "Return a stream for a given resource file in the module.\n\n The resource file has to be part of the module and its filename given\n relative to the module."}
{"_id": "q_15542", "text": "Send a message to a group of users.\n\n :param users: Users queryset\n :param message: Message to show\n :param level: Message level"}
{"_id": "q_15543", "text": "Fetch messages for given user. Returns None if no such message exists.\n\n :param user: User instance"}
{"_id": "q_15544", "text": "Check for messages for this user and, if it exists,\n call the messages API with it"}
{"_id": "q_15545", "text": "Checks the config.json file for default settings and auth values.\n\n Args:\n :msg: (Message class) an instance of a message class."}
{"_id": "q_15546", "text": "Verifies the profile name exists in the config.json file.\n\n Args:\n :msg: (Message class) an instance of a message class.\n :cfg: (jsonconfig.Config) config instance."}
{"_id": "q_15547", "text": "Display the required items needed to configure a profile for the given\n message type.\n\n Args:\n :msg_type: (str) message type to create config entry."}
{"_id": "q_15548", "text": "Get the required 'auth' from the user and return as a dict."}
{"_id": "q_15549", "text": "Create the profile entry.\n\n Args:\n :msg_type: (str) message type to create config entry.\n :profile_name: (str) name of the profile entry\n :data: (dict) dict values for the 'settings'\n :auth: (dict) auth parameters"}
{"_id": "q_15550", "text": "Write the settings into the data portion of the cfg.\n\n Args:\n :msg_type: (str) message type to create config entry.\n :profile_name: (str) name of the profile entry\n :data: (dict) dict values for the 'settings'\n :cfg: (jsonconfig.Config) config instance."}
{"_id": "q_15551", "text": "Write the settings into the auth portion of the cfg.\n\n Args:\n :msg_type: (str) message type to create config entry.\n :profile_name: (str) name of the profile entry\n :auth: (dict) auth parameters\n :cfg: (jsonconfig.Config) config instance."}
{"_id": "q_15552", "text": "Add attachments."}
{"_id": "q_15553", "text": "Factory function to return the specified message instance.\n\n Args:\n :msg_type: (str) the type of message to send, i.e. 'Email'\n :msg_types: (str, list, or set) the supported message types\n :kwargs: (dict) keywords arguments that are required for the\n various message types. See docstrings for each type.\n i.e. help(messages.Email), help(messages.Twilio), etc."}
{"_id": "q_15554", "text": "A credential property factory for each message class that will set\n private attributes and return obfuscated credentials when requested."}
{"_id": "q_15555", "text": "Base function to validate input, dispatched via message type."}
{"_id": "q_15556", "text": "Twilio input validator function."}
{"_id": "q_15557", "text": "SlackPost input validator function."}
{"_id": "q_15558", "text": "WhatsApp input validator function."}
{"_id": "q_15559", "text": "Add a message to the futures executor."}
{"_id": "q_15560", "text": "Reads message body if specified via filepath."}
{"_id": "q_15561", "text": "Gets rid of args with value of None, as well as select keys."}
{"_id": "q_15562", "text": "send via HTTP Post."}
{"_id": "q_15563", "text": "Start sending the message and attachments."}
{"_id": "q_15564", "text": "Return an SMTP servername guess from outgoing email address."}
{"_id": "q_15565", "text": "Put the parts of the email together."}
{"_id": "q_15566", "text": "Add email header info."}
{"_id": "q_15567", "text": "Add body content of email."}
{"_id": "q_15568", "text": "Add required attachments."}
{"_id": "q_15569", "text": "Get an SMTP session with TLS."}
{"_id": "q_15570", "text": "Configuration loader.\n\n Adds support for loading templates from the Flask application's instance\n folder (``<instance_folder>/templates``)."}
{"_id": "q_15571", "text": "Convert name from CamelCase to \"Normal case\".\n\n >>> camel2word('CamelCase')\n 'Camel case'\n >>> camel2word('CaseWithSpec')\n 'Case with spec'"}
{"_id": "q_15572", "text": "Format a time in seconds."}
{"_id": "q_15573", "text": "Indent representation of a dict"}
{"_id": "q_15574", "text": "Save metadata tags."}
{"_id": "q_15575", "text": "Test for existence of ``needle`` regex within ``haystack``.\n\n Say ``escape`` to escape the ``needle`` if you aren't really using the\n regex feature & have special characters in it."}
{"_id": "q_15576", "text": "Get an image that refers to the given rectangle within this image. The image data is not actually\n copied; if the image region is rendered into, it will affect this image.\n\n :param int x1: left edge of the image region to return\n :param int y1: top edge of the image region to return\n :param int x2: right edge of the image region to return\n :param int y2: bottom edge of the image region to return\n :return: :class:`Image`"}
{"_id": "q_15577", "text": "Validate keys and values.\n\n Check to make sure every key used is a valid Vorbis key, and\n that every value used is a valid Unicode or UTF-8 string. If\n any invalid keys or values are found, a ValueError is raised.\n\n In Python 3 all keys and values have to be a string."}
{"_id": "q_15578", "text": "Clear all keys from the comment."}
{"_id": "q_15579", "text": "Return a string representation of the data.\n\n Validation is always performed, so calling this function on\n invalid data may raise a ValueError.\n\n Keyword arguments:\n\n * framing -- if true, append a framing bit (see load)"}
{"_id": "q_15580", "text": "Read the chunks data"}
{"_id": "q_15581", "text": "Update the size of the chunk"}
{"_id": "q_15582", "text": "Insert a new chunk at the end of the IFF file"}
{"_id": "q_15583", "text": "Completely removes the ID3 chunk from the AIFF file"}
{"_id": "q_15584", "text": "parse a C source file, and add its blocks to the processor's list"}
{"_id": "q_15585", "text": "add the current accumulated lines and create a new block"}
{"_id": "q_15586", "text": "Draw a string with the given font.\n\n :note: Text alignment and word-wrapping is not yet implemented. The text is rendered with the left edge and\n baseline at ``(x, y)``.\n\n :param font: the :class:`Font` to render text with\n :param text: a string of text to render."}
{"_id": "q_15587", "text": "Parses a standard ISO 8601 time string. The Route53 API uses these here\n and there.\n\n :param str time_str: An ISO 8601 time string.\n :rtype: datetime.datetime\n :returns: A timezone aware (UTC) datetime.datetime instance."}
{"_id": "q_15588", "text": "convert a series of simple words into some HTML text"}
{"_id": "q_15589", "text": "convert words of a paragraph into tagged HTML text, handle xrefs"}
{"_id": "q_15590", "text": "convert a code sequence to HTML"}
{"_id": "q_15591", "text": "convert a field's content into some valid HTML"}
{"_id": "q_15592", "text": "Update all parent atoms with the new size."}
{"_id": "q_15593", "text": "Register a mapping for controllers with the given vendor and product IDs. The mapping will\n replace any existing mapping for these IDs for controllers not yet connected.\n\n :param vendor_id: the vendor ID of the controller, as reported by :attr:`Controller.vendor_id`\n :param product_id: the vendor ID of the controller, as reported by :attr:`Controller.product_id`\n :param mapping: a :class:`ControllerMapping` to apply"}
{"_id": "q_15594", "text": "Mutates any attributes on ``obj`` which are classes, with link to ``obj``.\n\n Adds a convenience accessor which instantiates ``obj`` and then calls its\n ``setup`` method.\n\n Recurses on those objects as well."}
{"_id": "q_15595", "text": "Route53 uses an AWS HMAC-based authentication scheme, involving the\n signing of a date string with the user's secret access key. More details\n on the specifics can be found in their documentation_.\n\n .. _documentation: http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/RESTAuthentication.html\n\n This method is used to sign said time string, for use in the request\n headers.\n\n\n :param str string_to_sign: The time string to sign.\n :rtype: str\n :returns: An HMAC signed string."}
{"_id": "q_15596", "text": "All outbound requests go through this method. It defers to the\n transport's various HTTP method-specific methods.\n\n :param str path: The path to tack on to the endpoint URL for\n the query.\n :param data: The params to send along with the request.\n :type data: Either a dict or bytes, depending on the request type.\n :param str method: One of 'GET', 'POST', or 'DELETE'.\n\n :rtype: str\n :returns: The body of the response."}
{"_id": "q_15597", "text": "Sends the GET request to the Route53 endpoint.\n\n :param str path: The path to tack on to the endpoint URL for\n the query.\n :param dict params: Key/value pairs to send.\n :param dict headers: A dict of headers to send with the request.\n :rtype: str\n :returns: The body of the response."}
{"_id": "q_15598", "text": "Sends the POST request to the Route53 endpoint.\n\n :param str path: The path to tack on to the endpoint URL for\n the query.\n :param data: Either a dict, or bytes.\n :type data: dict or bytes\n :param dict headers: A dict of headers to send with the request.\n :rtype: str\n :returns: The body of the response."}
{"_id": "q_15599", "text": "Uses the HTTP transport to query the Route53 API. Runs the response\n through lxml's parser, before we hand it off for further picking\n apart by our call-specific parsers.\n\n :param str path: The RESTful path to tack on to the :py:attr:`endpoint`.\n :param data: The params to send along with the request.\n :type data: Either a dict or bytes, depending on the request type.\n :param str method: One of 'GET', 'POST', or 'DELETE'.\n :rtype: lxml.etree._Element\n :returns: An lxml Element root."}
{"_id": "q_15600", "text": "Given an API method, the arguments passed to it, and a function to\n hand parsing off to, loop through the record sets in the API call\n until all records have been yielded.\n\n\n :param str method: The API method on the endpoint.\n :param dict params: The kwargs from the top-level API method.\n :param callable parser_func: A callable that is used for parsing the\n output from the API call.\n :param str next_marker_xpath: The XPath to the marker tag that\n will determine whether we continue paginating.\n :param str next_marker_param_name: The parameter name to manipulate\n in the request data to bring up the next page on the next\n request loop.\n :keyword str next_type_xpath: For the\n py:meth:`list_resource_record_sets_by_zone_id` method, there's\n an additional paginator token. Specifying this XPath looks for it.\n :keyword dict parser_kwargs: Optional dict of additional kwargs to pass\n on to the parser function.\n :rtype: generator\n :returns: Returns a generator that may be returned by the top-level\n API method."}
{"_id": "q_15601", "text": "Creates and returns a new hosted zone. Once a hosted zone is created,\n its details can't be changed.\n\n :param str name: The name of the hosted zone to create.\n :keyword str caller_reference: A unique string that identifies the\n request and that allows failed create_hosted_zone requests to be\n retried without the risk of executing the operation twice. If no\n value is given, we'll generate a Type 4 UUID for you.\n :keyword str comment: An optional comment to attach to the zone.\n :rtype: tuple\n :returns: A tuple in the form of ``(hosted_zone, change_info)``.\n The ``hosted_zone`` variable contains a\n :py:class:`HostedZone <route53.hosted_zone.HostedZone>`\n instance matching the newly created zone, and ``change_info``\n is a dict with some details about the API request."}
{"_id": "q_15602", "text": "Draw an image.\n\n The image's top-left corner is drawn at ``(x1, y1)``, and its lower-right at ``(x2, y2)``. If ``x2`` and ``y2`` are omitted, they\n are calculated to render the image at its native resolution.\n\n Note that images can be flipped and scaled by providing alternative values for ``x2`` and ``y2``.\n\n :param image: an :class:`Image` to draw"}
{"_id": "q_15603", "text": "Draw a rectangular region of an image.\n\n The part of the image contained by the rectangle in texel-space by the coordinates ``(ix1, iy1)`` to ``(ix2, iy2)`` is\n drawn at coordinates ``(x1, y1)`` to ``(x2, y2)``. All coordinates have the origin ``(0, 0)`` at the upper-left corner.\n\n For example, to draw the left half of a ``100x100`` image at coordinates ``(x, y)``::\n\n bacon.draw_image_region(image, x, y, x + 50, y + 100,\n 0, 0, 50, 100)\n\n :param image: an :class:`Image` to draw"}
{"_id": "q_15604", "text": "Find the last page of the stream 'serial'.\n\n If the file is not multiplexed this function is fast. If it is,\n it must read the whole stream.\n\n This finds the last page in the actual file object, or the last\n page in the stream (with eos set), whichever comes first."}
{"_id": "q_15605", "text": "set current section during parsing"}
{"_id": "q_15606", "text": "return the DocMarkup corresponding to a given tag in a block"}
{"_id": "q_15607", "text": "Forms an XML string that we'll send to Route53 in order to create\n a new hosted zone.\n\n :param Route53Connection connection: The connection instance used to\n query the API.\n :param str name: The name of the hosted zone to create."}
{"_id": "q_15608", "text": "Lock a file object 'safely'.\n\n That means a failure to lock because the platform doesn't\n support fcntl or filesystem locks is not considered a\n failure. This call does block.\n\n Returns whether or not the lock was successful, or\n raises an exception in more extreme circumstances (full\n lock table, invalid file)."}
{"_id": "q_15609", "text": "Convert a basestring to a valid UTF-8 str."}
{"_id": "q_15610", "text": "Adds a change to this change set.\n\n :param str action: Must be one of either 'CREATE' or 'DELETE'.\n :param resource_record_set.ResourceRecordSet record_set: The\n ResourceRecordSet object that was created or deleted."}
{"_id": "q_15611", "text": "Determines whether this record set has been modified since the\n last retrieval or save.\n\n :rtype: bool\n :returns: ``True`` if the record set has been modified,\n and ``False`` if not."}
{"_id": "q_15612", "text": "Deletes this record set."}
{"_id": "q_15613", "text": "Process the daily TCU, CP, FEU data.\n\n :param df:\n :param verbose:\n :param convert_kwh:\n :return:"}
{"_id": "q_15614", "text": "Return an ID3v1.1 tag string from a dict of ID3v2.4 frames."}
{"_id": "q_15615", "text": "Read a certain number of bytes from the source file."}
{"_id": "q_15616", "text": "Delete all tags of a given kind; see getall."}
{"_id": "q_15617", "text": "Updates done by both v23 and v24 update"}
{"_id": "q_15618", "text": "Release all resources associated with the sound."}
{"_id": "q_15619", "text": "Play the sound as a `one-shot`.\n\n The sound will be played to completion. If the sound is played more than once at a time, it will mix\n with all previous instances of itself. If you need more control over the playback of sounds, see\n :class:`Voice`.\n\n :param gain: optional volume level to play the sound back at, between 0.0 and 1.0 (defaults to 1.0)\n :param pan: optional stereo pan, between -1.0 (left) and 1.0 (right)\n :param pitch: optional sampling rate modification, between 0.4 and 16.0, where 1.0 represents the original pitch"}
{"_id": "q_15620", "text": "Set the loop points within the sound.\n\n The sound must have been created with ``loop=True``. The default parameters cause the loop points to be set to\n the entire sound duration.\n\n :note: There is currently no API for converting sample numbers to times.\n :param start_sample: sample number to loop back to\n :param end_sample: sample number to loop at"}
{"_id": "q_15621", "text": "Compress the log message in order to send fewer bytes over the wire."}
{"_id": "q_15622", "text": "return the list of glyph names and their unicode values"}
{"_id": "q_15623", "text": "dumps a given encoding"}
{"_id": "q_15624", "text": "builds a list of input files from command-line arguments"}
{"_id": "q_15625", "text": "This a common parser that allows the passing of any valid HostedZone\n tag. It will spit out the appropriate HostedZone object for the tag.\n\n :param lxml.etree._Element e_zone: The root node of the etree parsed\n response from the API.\n :param Route53Connection connection: The connection instance used to\n query the API.\n :rtype: HostedZone\n :returns: An instantiated HostedZone object."}
{"_id": "q_15626", "text": "Parses a DelegationSet tag. These often accompany HostedZone tags in\n responses like CreateHostedZone and GetHostedZone.\n\n :param HostedZone zone: An existing HostedZone instance to populate.\n :param lxml.etree._Element e_delegation_set: A DelegationSet element."}
{"_id": "q_15627", "text": "Consolidate FLAC padding metadata blocks.\n\n The overall size of the rendered blocks does not change, so\n this adds several bytes of padding for each merged block."}
{"_id": "q_15628", "text": "Remove Vorbis comments from a file.\n\n If no filename is given, the one most recently loaded is used."}
{"_id": "q_15629", "text": "Save metadata blocks to a file.\n\n If no filename is given, the one most recently loaded is used."}
{"_id": "q_15630", "text": "Parses an Alias tag beneath a ResourceRecordSet, spitting out the two values\n found within. This is specific to A records that are set to Alias.\n\n :param lxml.etree._Element e_alias: An Alias tag beneath a ResourceRecordSet.\n :rtype: tuple\n :returns: A tuple in the form of ``(alias_hosted_zone_id, alias_dns_name)``."}
{"_id": "q_15631", "text": "Used to parse the various Values from the ResourceRecords tags on\n most rrset types.\n\n :param lxml.etree._Element e_resource_records: A ResourceRecords tag\n beneath a ResourceRecordSet.\n :rtype: list\n :returns: A list of resource record strings."}
{"_id": "q_15632", "text": "This a parser that allows the passing of any valid ResourceRecordSet\n tag. It will spit out the appropriate ResourceRecordSet object for the tag.\n\n :param lxml.etree._Element e_rrset: The root node of the etree parsed\n response from the API.\n :param Route53Connection connection: The connection instance used to\n query the API.\n :param str zone_id: The zone ID of the HostedZone these rrsets belong to.\n :rtype: ResourceRecordSet\n :returns: An instantiated ResourceRecordSet object."}
{"_id": "q_15633", "text": "Internal bookkeeping to handle nested classes"}
{"_id": "q_15634", "text": "Needs to be its own method so it can be called from both wantClass and\n registerGoodClass."}
{"_id": "q_15635", "text": "Convenience method for creating ResourceRecordSets. Most of the calls\n are basically the same, this saves on repetition.\n\n :rtype: tuple\n :returns: A tuple in the form of ``(rrset, change_info)``, where\n ``rrset`` is the newly created ResourceRecordSet sub-class\n instance."}
{"_id": "q_15636", "text": "Creates and returns an A record attached to this hosted zone.\n\n :param str name: The fully qualified name of the record to add.\n :param list values: A list of value strings for the record.\n :keyword int ttl: The time-to-live of the record (in seconds).\n :keyword int weight: *For weighted record sets only*. Among resource record\n sets that have the same combination of DNS name and type, a value\n that determines what portion of traffic for the current resource\n record set is routed to the associated location. Ranges from 0-255.\n :keyword str region: *For latency-based record sets*. The Amazon EC2 region\n where the resource that is specified in this resource record set\n resides.\n :keyword str set_identifier: *For weighted and latency resource record\n sets only*. An identifier that differentiates among multiple\n resource record sets that have the same combination of DNS name\n and type. 1-128 chars.\n :keyword str alias_hosted_zone_id: Alias A records have this specified.\n It appears to be the hosted zone ID for the ELB the Alias points at.\n :keyword str alias_dns_name: Alias A records have this specified. It is\n the DNS name for the ELB that the Alias points to.\n :rtype: tuple\n :returns: A tuple in the form of ``(rrset, change_info)``, where\n ``rrset`` is the newly created\n :py:class:`AResourceRecordSet <route53.resource_record_set.AResourceRecordSet>`\n instance."}
{"_id": "q_15637", "text": "Creates a CNAME record attached to this hosted zone.\n\n :param str name: The fully qualified name of the record to add.\n :param list values: A list of value strings for the record.\n :keyword int ttl: The time-to-live of the record (in seconds).\n :keyword int weight: *For weighted record sets only*. Among resource record\n sets that have the same combination of DNS name and type, a value\n that determines what portion of traffic for the current resource\n record set is routed to the associated location. Ranges from 0-255.\n :keyword str region: *For latency-based record sets*. The Amazon EC2 region\n where the resource that is specified in this resource record set\n resides.\n :keyword str set_identifier: *For weighted and latency resource record\n sets only*. An identifier that differentiates among multiple\n resource record sets that have the same combination of DNS name\n and type. 1-128 chars.\n :rtype: tuple\n :returns: A tuple in the form of ``(rrset, change_info)``, where\n ``rrset`` is the newly created CNAMEResourceRecordSet instance."}
{"_id": "q_15638", "text": "Creates a MX record attached to this hosted zone.\n\n :param str name: The fully qualified name of the record to add.\n :param list values: A list of value strings for the record.\n :keyword int ttl: The time-to-live of the record (in seconds).\n :rtype: tuple\n :returns: A tuple in the form of ``(rrset, change_info)``, where\n ``rrset`` is the newly created MXResourceRecordSet instance."}
{"_id": "q_15639", "text": "Creates a NS record attached to this hosted zone.\n\n :param str name: The fully qualified name of the record to add.\n :param list values: A list of value strings for the record.\n :keyword int ttl: The time-to-live of the record (in seconds).\n :rtype: tuple\n :returns: A tuple in the form of ``(rrset, change_info)``, where\n ``rrset`` is the newly created NSResourceRecordSet instance."}
{"_id": "q_15640", "text": "Creates a SPF record attached to this hosted zone.\n\n :param str name: The fully qualified name of the record to add.\n :param list values: A list of value strings for the record.\n :keyword int ttl: The time-to-live of the record (in seconds).\n :rtype: tuple\n :returns: A tuple in the form of ``(rrset, change_info)``, where\n ``rrset`` is the newly created SPFResourceRecordSet instance."}
{"_id": "q_15641", "text": "Creates a TXT record attached to this hosted zone.\n\n :param str name: The fully qualified name of the record to add.\n :param list values: A list of value strings for the record.\n :keyword int ttl: The time-to-live of the record (in seconds).\n :keyword int weight: *For weighted record sets only*. Among resource record\n sets that have the same combination of DNS name and type, a value\n that determines what portion of traffic for the current resource\n record set is routed to the associated location. Ranges from 0-255.\n :keyword str region: *For latency-based record sets*. The Amazon EC2 region\n where the resource that is specified in this resource record set\n resides.\n :keyword str set_identifier: *For weighted and latency resource record\n sets only*. An identifier that differentiates among multiple\n resource record sets that have the same combination of DNS name\n and type. 1-128 chars.\n :rtype: tuple\n :returns: A tuple in the form of ``(rrset, change_info)``, where\n ``rrset`` is the newly created TXTResourceRecordSet instance."}
{"_id": "q_15642", "text": "Register a user-defined text frame key.\n\n Some ID3 tags are stored in TXXX frames, which allow a\n freeform 'description' which acts as a subkey,\n e.g. TXXX:BARCODE.::\n\n EasyID3.RegisterTXXXKey('barcode', 'BARCODE')."}
{"_id": "q_15643", "text": "Creates an XML element for the change.\n\n :param tuple change: A change tuple from a ChangeSet. Comes in the form\n of ``(action, rrset)``.\n :rtype: lxml.etree._Element\n :returns: A fully baked Change tag."}
{"_id": "q_15644", "text": "Forms an XML string that we'll send to Route53 in order to change\n record sets.\n\n :param Route53Connection connection: The connection instance used to\n query the API.\n :param change_set.ChangeSet change_set: The ChangeSet object to create the\n XML doc from.\n :keyword str comment: An optional comment to go along with the request."}
{"_id": "q_15645", "text": "Initiate log file."}
{"_id": "q_15646", "text": "Gets an item by its alias."}
{"_id": "q_15647", "text": "Joins the map structure into HTML attributes.\n\n The return value is a 2-tuple ``(template, ordered_values)``. It should be\n passed into :class:`markupsafe.Markup` to prevent XSS attacks.\n\n e.g.::\n\n >>> join_html_attrs({'href': '/', 'data-active': 'true'})\n ('data-active=\"{0}\" href=\"{1}\"', ['true', '/'])"}
{"_id": "q_15648", "text": "Initializes an app to work with this extension.\n\n The app-context signals will be subscribed and the template context\n will be initialized.\n\n :param app: the :class:`flask.Flask` app instance."}
{"_id": "q_15649", "text": "Calls the initializers of all bound navigation bars."}
{"_id": "q_15650", "text": "Binds a navigation bar into this extension instance."}
{"_id": "q_15651", "text": "The arguments which will be passed to ``url_for``.\n\n :type: :class:`dict`"}
{"_id": "q_15652", "text": "The final url of this navigation item.\n\n By default, the value is generated by the :attr:`self.endpoint` and\n :attr:`self.args`.\n\n .. note::\n\n The :attr:`url` property require the app context without a provided\n config value :const:`SERVER_NAME`, because of :func:`flask.url_for`.\n\n :type: :class:`str`"}
{"_id": "q_15653", "text": "``True`` if current request has same endpoint with the item.\n\n The property should be used in a bound request context, or the\n :class:`RuntimeError` may be raised."}
{"_id": "q_15654", "text": "Fetches a statistic based on the given class name. Does a look-up\n in the gadgets' registered statistics to find the specified one."}
{"_id": "q_15655", "text": "Calculates all of the metrics associated with the registered gadgets."}
{"_id": "q_15656", "text": "Returns a CSV dump of all of the specified metric's counts\n and cumulative counts."}
{"_id": "q_15657", "text": "Command handler for the \"metrics\" command."}
{"_id": "q_15658", "text": "Tries to extract a boolean variable from the specified request."}
{"_id": "q_15659", "text": "Returns the default GET parameters for a particular Geckoboard\n view request."}
{"_id": "q_15660", "text": "Searches the GET variables for metric UIDs, and displays\n them in a RAG widget."}
{"_id": "q_15661", "text": "Returns the data for a line chart for the specified metric."}
{"_id": "q_15662", "text": "Returns a Geck-o-Meter control for the specified metric."}
{"_id": "q_15663", "text": "Returns a funnel chart for the metrics specified in the GET variables."}
{"_id": "q_15664", "text": "Returns all of the active statistics for the gadgets currently registered."}
{"_id": "q_15665", "text": "Registers a gadget object.\n If a gadget is already registered, this will raise AlreadyRegistered."}
{"_id": "q_15666", "text": "Performs sanitization of the path after validating\n\n :param path: path to sanitize\n :return: path\n :raises:\n - InvalidPath if the path doesn't start with a slash"}
{"_id": "q_15667", "text": "Ensures the passed schema instance is compatible\n\n :param obj: object to validate\n :return: obj\n :raises:\n - IncompatibleSchema if the passed schema is of an incompatible type"}
{"_id": "q_15668", "text": "Attaches a flask.Blueprint to the bundle\n\n :param bp: :class:`flask.Blueprint` object\n :param description: Optional description string\n :raises:\n - InvalidBlueprint if the Blueprint is not of type `flask.Blueprint`"}
{"_id": "q_15669", "text": "Returns the DottedRule that results from moving the dot."}
{"_id": "q_15670", "text": "Computes the intermediate FIRST set using symbols."}
{"_id": "q_15671", "text": "Computes the initial closure using the START_foo production."}
{"_id": "q_15672", "text": "Fills out the entire closure based on some initial dotted rules.\n\n Args:\n rules - an iterable of DottedRules\n\n Returns: frozenset of DottedRules"}
{"_id": "q_15673", "text": "Initializes Journey extension\n\n :param app: App passed from constructor or directly to init_app\n :raises:\n - NoBundlesAttached if no bundles have been attached"}
{"_id": "q_15674", "text": "Returns simple info about registered blueprints\n\n :return: Tuple containing endpoint, path and allowed methods for each route"}
{"_id": "q_15675", "text": "Register and return info about the registered blueprint\n\n :param bp: :class:`flask.Blueprint` object\n :param bundle_path: the URL prefix of the bundle\n :param child_path: blueprint relative to the bundle path\n :return: Dict with info about the blueprint"}
{"_id": "q_15676", "text": "Returns detailed information about registered blueprint routes matching the `BlueprintBundle` path\n\n :param app: App instance to obtain rules from\n :param base_path: Base path to return detailed route info for\n :return: List of route detail dicts"}
{"_id": "q_15677", "text": "Computes the precedence of terminal and production.\n\n The precedence of a terminal is it's level in the PRECEDENCE tuple. For\n a production, the precedence is the right-most terminal (if it exists).\n The default precedence is DEFAULT_PREC - (LEFT, 0).\n\n Returns:\n precedence - dict[terminal | production] = (assoc, level)"}
{"_id": "q_15678", "text": "Generates the ACTION and GOTO tables for the grammar.\n\n Returns:\n action - dict[state][lookahead] = (action, ...)\n goto - dict[state][just_reduced] = new_state"}
{"_id": "q_15679", "text": "Return the antecedents and the consequent of a definite clause."}
{"_id": "q_15680", "text": "Auxiliary routine to implement tt_entails."}
{"_id": "q_15681", "text": "Return a list of all propositional symbols in x."}
{"_id": "q_15682", "text": "A variable is an Expr with no args and a lowercase symbol as the op."}
{"_id": "q_15683", "text": "Remove the sentence's clauses from the KB."}
{"_id": "q_15684", "text": "Updates the cache with setting values from the database."}
{"_id": "q_15685", "text": "Search game to determine best action; use alpha-beta pruning.\n This version cuts off search and uses an evaluation function."}
{"_id": "q_15686", "text": "If X wins with this move, return 1; if O return -1; else return 0."}
{"_id": "q_15687", "text": "Return true if there is a line through move on board for player."}
{"_id": "q_15688", "text": "Update a dict, or an object with slots, according to `entries` dict.\n\n >>> update({'a': 1}, a=10, b=20)\n {'a': 10, 'b': 20}\n >>> update(Struct(a=1), a=10, b=20)\n Struct(a=10, b=20)"}
{"_id": "q_15689", "text": "Return a random-sample function that picks from seq weighted by weights."}
{"_id": "q_15690", "text": "Format args with the first argument as format string, and write.\n Return the last arg, or format itself if there are no args."}
{"_id": "q_15691", "text": "Just count how many times each value of each input attribute\n occurs, conditional on the target value. Count the different\n target values too."}
{"_id": "q_15692", "text": "Number of bits to represent the probability distribution in values."}
{"_id": "q_15693", "text": "Given a list of learning algorithms, have them vote."}
{"_id": "q_15694", "text": "Return a predictor that takes a weighted vote."}
{"_id": "q_15695", "text": "Copy dataset, replicating each example in proportion to its weight."}
{"_id": "q_15696", "text": "Leave one out cross-validation over the dataset."}
{"_id": "q_15697", "text": "Generate a DataSet with n examples."}
{"_id": "q_15698", "text": "2 inputs are chosen uniformly from (0.0 .. 2.0]; output is xor of ints."}
{"_id": "q_15699", "text": "Compare various learners on various datasets using cross-validation.\n Print results as a table."}
{"_id": "q_15700", "text": "Check that my fields make sense."}
{"_id": "q_15701", "text": "Raise ValueError if example has any invalid values."}
{"_id": "q_15702", "text": "Returns the number used for attr, which can be a name, or -n .. n-1."}
{"_id": "q_15703", "text": "Return a copy of example, with non-input attributes replaced by None."}
{"_id": "q_15704", "text": "Add an observation o to the distribution."}
{"_id": "q_15705", "text": "Include o among the possible observations, whether or not\n it's been observed yet."}
{"_id": "q_15706", "text": "Minimum-remaining-values heuristic."}
{"_id": "q_15707", "text": "Least-constraining-values heuristic."}
{"_id": "q_15708", "text": "Prune neighbor values inconsistent with var=value."}
{"_id": "q_15709", "text": "Solve a CSP by stochastic hillclimbing on the number of conflicts."}
{"_id": "q_15710", "text": "Return the value that will give var the least number of conflicts.\n If there is a tie, choose at random."}
{"_id": "q_15711", "text": "Start accumulating inferences from assuming var=value."}
{"_id": "q_15712", "text": "Rule out var=value."}
{"_id": "q_15713", "text": "Return a list of variables in current assignment that are in conflict"}
{"_id": "q_15714", "text": "The number of conflicts, as recorded with each assignment.\n Count conflicts in row and in up, down diagonals. If there\n is a queen there, it can't conflict with itself, so subtract 3."}
{"_id": "q_15715", "text": "Record conflicts caused by addition or deletion of a Queen."}
{"_id": "q_15716", "text": "Print error and stop command"}
{"_id": "q_15717", "text": "Build up a random sample of text nwords words long, using\n the conditional probability given the n-1 preceding words."}
{"_id": "q_15718", "text": "Compute a score for this word on this docid."}
{"_id": "q_15719", "text": "Present the results as a list."}
{"_id": "q_15720", "text": "Return a score for text based on how common letters pairs are."}
{"_id": "q_15721", "text": "Score is product of word scores, unigram scores, and bigram scores.\n This can get very small, so we use logs and exp."}
{"_id": "q_15722", "text": "The expected utility of doing a in state s, according to the MDP and U."}
{"_id": "q_15723", "text": "Return the state that results from going in this direction."}
{"_id": "q_15724", "text": "Returns a ``SettingDict`` object for this queryset."}
{"_id": "q_15725", "text": "Creates and returns an object of the appropriate type for ``value``."}
{"_id": "q_15726", "text": "Returns ``True`` if this model should be used to store ``value``.\n\n Checks if ``value`` is an instance of ``value_type``. Override this\n method if you need more advanced behaviour. For example, to distinguish\n between single and multi-line text."}
{"_id": "q_15727", "text": "One possible schedule function for simulated annealing"}
{"_id": "q_15728", "text": "Call genetic_algorithm on the appropriate parts of a problem.\n This requires the problem to have states that can mate and mutate,\n plus a value method that scores states."}
{"_id": "q_15729", "text": "Print the board in a 2-d array."}
{"_id": "q_15730", "text": "Return a list of lists, where the i-th element is the list of indexes\n for the neighbors of square i."}
{"_id": "q_15731", "text": "List the nodes reachable in one step from this node."}
{"_id": "q_15732", "text": "Return a list of nodes forming the path from the root to this node."}
{"_id": "q_15733", "text": "Add a link from A and B of given distance, and also add the inverse\n link if the graph is undirected."}
{"_id": "q_15734", "text": "Set the board, and find all the words in it."}
{"_id": "q_15735", "text": "The total score for the words found, according to the rules."}
{"_id": "q_15736", "text": "Wrap the agent's program to print its input and output. This will let\n you see what the agent is doing in the environment."}
{"_id": "q_15737", "text": "Run the Environment for given number of time steps."}
{"_id": "q_15738", "text": "Return all things exactly at a given location."}
{"_id": "q_15739", "text": "Remove a thing from the environment."}
{"_id": "q_15740", "text": "Return all things within radius of location."}
{"_id": "q_15741", "text": "By default, agent perceives things within a default radius."}
{"_id": "q_15742", "text": "Move a thing to a new location."}
{"_id": "q_15743", "text": "Put walls around the entire perimeter of the grid."}
{"_id": "q_15744", "text": "Parse a list of words according to the grammar.\n Leave results in the chart."}
{"_id": "q_15745", "text": "Add edge to chart, and see if it extends or predicts another edge."}
{"_id": "q_15746", "text": "Add to chart any rules for B that could help extend this edge."}
{"_id": "q_15747", "text": "See what edges can be extended by this edge."}
{"_id": "q_15748", "text": "Adds a ``SettingDict`` object for the ``Setting`` model to the context as\n ``SETTINGS``. Automatically creates non-existent settings with an empty\n string as the default value."}
{"_id": "q_15749", "text": "Return the factor for var in bn's joint distribution given e.\n That is, bn's full joint distribution, projected to accord with e,\n is the pointwise product of these factors for bn's variables."}
{"_id": "q_15750", "text": "Eliminate var from all factors by summing over its values."}
{"_id": "q_15751", "text": "Yield every way of extending e with values for all vars."}
{"_id": "q_15752", "text": "Sample an event from bn that's consistent with the evidence e;\n return the event and its weight, the likelihood that the event\n accords to the evidence."}
{"_id": "q_15753", "text": "Show the probabilities rounded and sorted by key, for the\n sake of portable doctests."}
{"_id": "q_15754", "text": "Add a node to the net. Its parents must already be in the\n net, and its variable must not."}
{"_id": "q_15755", "text": "Multiply two factors, combining their variables."}
{"_id": "q_15756", "text": "Make a factor eliminating var by summing over its values."}
{"_id": "q_15757", "text": "Takes an hls color and converts it to a proper hue\n Bulbs use a BGR order instead of RGB"}
{"_id": "q_15758", "text": "Takes your standard rgb color \n and converts it to a proper hue value"}
{"_id": "q_15759", "text": "Takes an HTML hex code\n and converts it to a proper hue value"}
{"_id": "q_15760", "text": "Wait for x seconds\n each wait command is 100ms"}
{"_id": "q_15761", "text": "Return json from querying Web Api\n\n\t\tArgs:\n\t\t\tview: django view function.\n\t\t\trequest: http request object received from django.\n\t\t\t\t\n\t\tReturns: json format dictionary"}
{"_id": "q_15762", "text": "put text on screen\n a tuple as first argument tells absolute position for the text\n does not change TermCursor position\n args = list of optional position, formatting tokens and strings"}
{"_id": "q_15763", "text": "get user input without echo"}
{"_id": "q_15764", "text": "get character. waiting for key"}
{"_id": "q_15765", "text": "tweaked from source of base"}
{"_id": "q_15766", "text": "getProcessOwner - Get the process owner of a pid\n\n @param pid <int> - process id\n\n @return - None if process not found or can't be determined. Otherwise, a dict: \n {\n uid - Owner UID\n name - Owner name, or None if one cannot be determined\n }"}
{"_id": "q_15767", "text": "scanProcessForCwd - Searches a given pid's cwd for a given pattern\n\n @param pid <int> - A running process ID on this system\n @param searchPortion <str> - Any portion of directory to search\n @param isExactMatch <bool> Default False - If match should be exact, otherwise a partial match is performed.\n\n @return <dict> - If result is found, the following dict is returned. If no match found on the given pid, or pid is not found running, None is returned.\n {\n 'searchPortion' : The passed search pattern\n 'pid' : The passed pid (as an integer)\n 'owner' : String of process owner, or uid if no mapping can be found, or \"unknown\" if neither could be determined.\n 'cmdline' : Commandline string\n 'cwd' : The exact cwd of matched process\n }"}
{"_id": "q_15768", "text": "scanAllProcessesForCwd - Scans all processes on the system for a given search pattern.\n\n @param searchPortion <str> - Any portion of directory to search\n @param isExactMatch <bool> Default False - If match should be exact, otherwise a partial match is performed.\n\n @return - <dict> - A dictionary of pid -> cwdResults for each pid that matched the search pattern. For format of \"cwdResults\", @see scanProcessForCwd"}
{"_id": "q_15769", "text": "scanAllProcessesForMapping - Scans all processes on the system for a given search pattern.\n\n @param searchPortion <str> - A mapping for which to search, example: libc or python or libz.so.1. Give empty string to return all mappings.\n @param isExactMatch <bool> Default False - If match should be exact, otherwise a partial match is performed.\n @param ignoreCase <bool> Default False - If True, search will be performed case-insensitively\n\n @return - <dict> - A dictionary of pid -> mappingResults for each pid that matched the search pattern. For format of \"mappingResults\", @see scanProcessForMapping"}
{"_id": "q_15770", "text": "scanProcessForOpenFile - Scans open FDs for a given pid to see if any are the provided searchPortion\n\n @param searchPortion <str> - Filename to check\n @param isExactMatch <bool> Default True - If match should be exact, otherwise a partial match is performed.\n @param ignoreCase <bool> Default False - If True, search will be performed case-insensitively\n\n @return - If result is found, the following dict is returned. If no match found on the given pid, or the pid is not found running, None is returned.\n {\n 'searchPortion' : The search portion provided\n 'pid' : The passed pid (as an integer)\n 'owner' : String of process owner, or \"unknown\" if one could not be determined\n 'cmdline' : Commandline string\n 'fds' : List of file descriptors assigned to this file (could be mapped several times)\n 'filenames' : List of the filenames matched\n }"}
{"_id": "q_15771", "text": "scanAllProcessessForOpenFile - Scans all processes on the system for a given filename\n\n @param searchPortion <str> - Filename to check\n @param isExactMatch <bool> Default True - If match should be exact, otherwise a partial match is performed.\n @param ignoreCase <bool> Default False - If True, search will be performed case-insensitively\n\n @return - <dict> - A dictionary of pid -> mappingResults for each pid that matched the search pattern. For format of \"mappingResults\", @see scanProcessForOpenFile"}
{"_id": "q_15772", "text": "Get current light data as dictionary with light zids as keys."}
{"_id": "q_15773", "text": "Get current light data, set and return as list of Bulb objects."}
{"_id": "q_15774", "text": "Set color and brightness of bulb."}
{"_id": "q_15775", "text": "Update light objects to their current values."}
{"_id": "q_15776", "text": "This function takes a file path beginning with edgar and stores the form in a directory.\n The default directory is sec_filings but can be changed through a keyword argument."}
{"_id": "q_15777", "text": "read file as is"}
{"_id": "q_15778", "text": "Clean up after ourselves, removing created files.\n @param {[String]} A list of file paths specifying the files we've created\n during run. Will all be deleted.\n @return {None}"}
{"_id": "q_15779", "text": "Crawl the root directory downwards, generating an index HTML file in each\n directory on the way down.\n @param {String} root_dir - The top level directory to crawl down from. In\n normal usage, this will be '.'.\n @param {Boolean=False} force_no_processing - If True, do not attempt to\n actually process thumbnails, PIL images or anything. Simply index\n <img> tags with original file src attributes.\n @return {[String]} Full file paths of all created files."}
{"_id": "q_15780", "text": "Get an instance of PIL.Image from the given file.\n @param {String} dir_path - The directory containing the image file\n @param {String} image_file - The filename of the image file within dir_path\n @return {PIL.Image} An instance of the image file as a PIL Image, or None\n if the functionality is not available. This could be because PIL is not\n present, or because it can't process the given file type."}
{"_id": "q_15781", "text": "Get a PIL.Image from the given image file which has been scaled down to\n THUMBNAIL_WIDTH wide.\n @param {String} dir_path - The directory containing the image file\n @param {String} image_file - The filename of the image file within dir_path\n @return {PIL.Image} An instance of the thumbnail as a PIL Image, or None\n if the functionality is not available. See _get_image_from_file for\n details."}
{"_id": "q_15782", "text": "Run the image server. This is blocking. Will handle user KeyboardInterrupt\n and other exceptions appropriately and return control once the server is\n stopped.\n @return {None}"}
{"_id": "q_15783", "text": "Generate indexes and run server from the given directory downwards.\n @param {String} dir_path - The directory path (absolute, or relative to CWD)\n @return {None}"}
{"_id": "q_15784", "text": "USE carefully ^^"}
{"_id": "q_15785", "text": "Returns the team ID of the winning team. Returns NaN if a tie."}
{"_id": "q_15786", "text": "Returns a DataFrame where each row is an entry in the starters table\n from PFR.\n\n The columns are:\n * player_id - the PFR player ID for the player (note that this column\n is not necessarily all unique; that is, one player can be a starter in\n multiple positions, in theory).\n * playerName - the listed name of the player; this too is not\n necessarily unique.\n * position - the position at which the player started for their team.\n * team - the team for which the player started.\n * home - True if the player's team was at home, False if they were away\n * offense - True if the player is starting on an offensive position,\n False if defense.\n\n :returns: A pandas DataFrame. See the description for details."}
{"_id": "q_15787", "text": "The playing surface on which the game was played.\n\n :returns: string representing the type of surface. Returns np.nan if\n not available."}
{"_id": "q_15788", "text": "Returns a dictionary of weather-related info.\n\n Keys of the returned dict:\n * temp\n * windChill\n * relHumidity\n * windMPH\n\n :returns: Dict of weather data."}
{"_id": "q_15789", "text": "Gets a dictionary of ref positions and the ref IDs of the refs for\n that game.\n\n :returns: A dictionary of ref positions and IDs."}
{"_id": "q_15790", "text": "Returns a list of BoxScore IDs for every game in the season.\n Only needs to handle 'R' or 'P' options because decorator handles 'B'.\n\n :param kind: 'R' for regular season, 'P' for playoffs, 'B' for both.\n Defaults to 'R'.\n :returns: DataFrame of schedule information.\n :rtype: pd.DataFrame"}
{"_id": "q_15791", "text": "Returns a DataFrame containing standings information."}
{"_id": "q_15792", "text": "Helper function for stats tables on season pages. Returns a\n DataFrame."}
{"_id": "q_15793", "text": "Returns a DataFrame containing information about ROY voting."}
{"_id": "q_15794", "text": "Returns a DataFrame of player stats from the game (either basic or\n advanced, depending on the argument.\n\n :param table_id_fmt: Format string for str.format with a placeholder\n for the team ID (e.g. 'box_{}_basic')\n :returns: DataFrame of player stats"}
{"_id": "q_15795", "text": "Decorator that switches to the given directory before executing the function,\n then returns to the original directory."}
{"_id": "q_15796", "text": "Returns a unique identifier for a class instantiation."}
{"_id": "q_15797", "text": "A decorator for memoizing functions.\n\n Only works on functions that take simple arguments - functions that take\n list-like or dict-like arguments will not be memoized, and this function\n will raise a TypeError."}
{"_id": "q_15798", "text": "Returns a DataFrame of per-game box score stats."}
{"_id": "q_15799", "text": "Returns a DataFrame of per-36-minutes stats."}
{"_id": "q_15800", "text": "Returns a DataFrame of per-100-possession stats."}
{"_id": "q_15801", "text": "Returns a DataFrame of advanced stats."}
{"_id": "q_15802", "text": "Returns a DataFrame of play-by-play stats."}
{"_id": "q_15803", "text": "Returns a table of a player's basic game-by-game stats for a season.\n\n :param year: The year representing the desired season.\n :param kind: specifies regular season, playoffs, or both. One of 'R',\n 'P', 'B'. Defaults to 'R'.\n :returns: A DataFrame of the player's standard boxscore stats from each\n game of the season.\n :rtype: pd.DataFrame"}
{"_id": "q_15804", "text": "Converts a permutation into a permutation matrix.\n\n `matches` is a dictionary whose keys are vertices and whose values are\n partners. For each vertex ``u`` and ``v``, entry (``u``, ``v``) in the\n returned matrix will be a ``1`` if and only if ``matches[u] == v``.\n\n Pre-condition: `matches` must be a permutation on an initial subset of the\n natural numbers.\n\n Returns a permutation matrix as a square NumPy array."}
{"_id": "q_15805", "text": "Convenience function that creates a block matrix with the specified\n blocks.\n\n Each argument must be a NumPy matrix. The two top matrices must have the\n same number of rows, as must the two bottom matrices. The two left matrices\n must have the same number of columns, as must the two right matrices."}
{"_id": "q_15806", "text": "Returns the adjacency matrix of a bipartite graph whose biadjacency\n matrix is `A`.\n\n `A` must be a NumPy array.\n\n If `A` has **m** rows and **n** columns, then the returned matrix has **m +\n n** rows and columns."}
{"_id": "q_15807", "text": "Returns the Boolean matrix in the same shape as `D` with ones exactly\n where there are nonzero entries in `D`.\n\n `D` must be a NumPy array."}
{"_id": "q_15808", "text": "Expands the details column of the given dataframe and returns the\n resulting DataFrame.\n\n :df: The input DataFrame.\n :detailCol: The detail column name.\n :returns: Returns DataFrame with new columns from pbp parsing."}
{"_id": "q_15809", "text": "Adds extra convenience features based on teams with and without\n possession, with the precondition that there are 'team' and 'opp'\n specified in row.\n\n :df: A DataFrame representing a game's play-by-play data after\n _clean_features has been called and 'team' and 'opp' have been added by\n _add_team_columns.\n :returns: A dict with new features in addition to previous features."}
{"_id": "q_15810", "text": "Gets the initial win probability of a game given its Vegas line.\n\n :line: The Vegas line from the home team's perspective (negative means\n home team is favored).\n :returns: A float in [0., 100.] that represents the win probability."}
{"_id": "q_15811", "text": "Gets yearly passing stats for the player.\n\n :kind: One of 'R', 'P', or 'B'. Case-insensitive; defaults to 'R'.\n :returns: Pandas DataFrame with passing stats."}
{"_id": "q_15812", "text": "Template for simple award functions that simply list years, such as\n pro bowls and first-team all pro.\n\n :award_id: The div ID that is appended to \"leaderboard_\" in selecting\n the table's div.\n :returns: List of years for the award."}
{"_id": "q_15813", "text": "Returns the real name of the franchise given the team ID.\n\n Examples:\n 'nwe' -> 'New England Patriots'\n 'sea' -> 'Seattle Seahawks'\n\n :returns: A string corresponding to the team's full name."}
{"_id": "q_15814", "text": "Returns a PyQuery object containing the info from the meta div at\n the top of the team year page with the given keyword.\n\n :year: Int representing the season.\n :keyword: A keyword to filter to a single p tag in the meta div.\n :returns: A PyQuery object for the selected p element."}
{"_id": "q_15815", "text": "Returns a DataFrame with schedule information for the given year.\n\n :year: The year for the season in question.\n :returns: Pandas DataFrame with schedule information."}
{"_id": "q_15816", "text": "Returns the coach ID for the team's DC in a given year.\n\n :year: An int representing the year.\n :returns: A string containing the coach ID of the DC."}
{"_id": "q_15817", "text": "Returns the ID for the stadium in which the team played in a given\n year.\n\n :year: The year in question.\n :returns: A string representing the stadium ID."}
{"_id": "q_15818", "text": "Returns the name of the defensive alignment the team ran in the\n given year.\n\n :year: Int representing the season year.\n :returns: A string representing the defensive alignment."}
{"_id": "q_15819", "text": "Gets the HTML for the given URL using a GET request.\n\n :url: the absolute URL of the desired page.\n :returns: a string of HTML."}
{"_id": "q_15820", "text": "Flattens relative URLs within text of a table cell to IDs and returns\n the result.\n\n :td: the PyQuery object for the HTML to convert\n :returns: the string with the links flattened to IDs"}
{"_id": "q_15821", "text": "Converts a relative URL to a unique ID.\n\n Here, 'ID' refers generally to the unique ID for a given 'type' that a\n given datum has. For example, 'BradTo00' is Tom Brady's player ID - this\n corresponds to his relative URL, '/players/B/BradTo00.htm'. Similarly,\n '201409070dal' refers to the boxscore of the SF @ DAL game on 09/07/14.\n\n Supported types:\n * player/...\n * boxscores/...\n * teams/...\n * years/...\n * leagues/...\n * awards/...\n * coaches/...\n * officials/...\n * schools/...\n * schools/high_schools.cgi?id=...\n\n :returns: ID associated with the given relative URL."}
{"_id": "q_15822", "text": "Converts kwargs given to PSF to a querystring.\n\n :returns: the querystring."}
{"_id": "q_15823", "text": "Returns the result of incrementing `version`.\n\n If `which` is not specified, the \"patch\" part of the version number will be\n incremented. If `which` is specified, it must be ``'major'``, ``'minor'``,\n or ``'patch'``. If it is one of these three strings, the corresponding part\n of the version number will be incremented instead of the patch number.\n\n Returns a string representing the next version number.\n\n Example::\n\n >>> bump_version('2.7.1')\n '2.7.2'\n >>> bump_version('2.7.1', 'minor')\n '2.8.0'\n >>> bump_version('2.7.1', 'major')\n '3.0.0'"}
{"_id": "q_15824", "text": "Gets the current version from the specified file.\n\n This function assumes the file includes a string of the form::\n\n <pattern> = <version>"}
{"_id": "q_15825", "text": "Prints the specified message and exits the program with the specified\n exit status."}
{"_id": "q_15826", "text": "Main function for the processes that read from the HDF5 file.\n\n :param self: A reference to the streamer object that created these processes.\n :param path: The HDF5 path to the node to be read from.\n :param read_size: The length of the block along the outer dimension to read.\n :param cbuf: The circular buffer to place read elements into.\n :param stop: The Event that signals the process to stop reading.\n :param barrier: The Barrier that synchronises read cycles.\n :param cyclic: True if the process should read cyclically.\n :param offset: Offset into the dataset that this process should start reading at.\n :param read_skip: How many elements to skip on each iteration.\n :param sync: GuardSynchonizer to order writes to the buffer.\n :return: Nothing"}
{"_id": "q_15827", "text": "Allows direct access to the buffer element.\n Blocks until there is room to write into the buffer.\n\n :return: A guard object that returns the buffer element."}
{"_id": "q_15828", "text": "Allows direct access to the buffer element.\n Blocks until there is data that can be read.\n\n :return: A guard object that returns the buffer element."}
{"_id": "q_15829", "text": "Close the queue, signalling that no more data can be put into the queue."}
{"_id": "q_15830", "text": "Get a block of data from the node at path.\n\n :param path: The path to the node to read from.\n :param length: The length along the outer dimension to read.\n :param last: True if the remainder elements should be read.\n :return: A copy of the requested block of data as a numpy array."}
{"_id": "q_15831", "text": "Get a generator that allows convenient access to the streamed data.\n Elements from the dataset are returned from the generator one row at a time.\n Unlike the direct access queue, this generator also returns the remainder elements.\n Additional arguments are forwarded to get_queue.\n See the get_queue method for documentation of these parameters.\n\n :param path:\n :return: A generator that iterates over the rows in the dataset."}
{"_id": "q_15832", "text": "initialize with templates' path\n parameters\n templates_path str the position of templates directory\n global_data dict globa data can be got in any templates"}
{"_id": "q_15833", "text": "shortcut to render data with `template`. Just add exception\n catch to `renderer.render`"}
{"_id": "q_15834", "text": "Get the DataFrame for this view.\n Defaults to using `self.dataframe`.\n\n This method should always be used rather than accessing `self.dataframe`\n directly, as `self.dataframe` gets evaluated only once, and those results\n are cached for all subsequent requests.\n\n You may want to override this if you need to provide different\n dataframes depending on the incoming request."}
{"_id": "q_15835", "text": "Indexes the row based on the request parameters."}
{"_id": "q_15836", "text": "Returns the row the view is displaying.\n\n You may want to override this if you need to provide non-standard\n queryset lookups. Eg if objects are referenced using multiple\n keyword arguments in the url conf."}
{"_id": "q_15837", "text": "The paginator instance associated with the view, or `None`."}
{"_id": "q_15838", "text": "Return a single page of results, or `None` if pagination is disabled."}
{"_id": "q_15839", "text": "parse post source files name to datetime object"}
{"_id": "q_15840", "text": "watch files for changes, if changed, rebuild blog. this thread\n will quit if the main process ends"}
{"_id": "q_15841", "text": "Deploy new blog to current directory"}
{"_id": "q_15842", "text": "Parse a stream.\n\n Args:\n ifp (string or file-like object): input stream.\n pb_cls (protobuf.message.Message.__class__): The class object of\n the protobuf message type encoded in the stream."}
{"_id": "q_15843", "text": "Write to a stream.\n\n Args:\n ofp (string or file-like object): output stream.\n pb_objs (*protobuf.message.Message): list of protobuf message objects\n to be written."}
{"_id": "q_15844", "text": "Close the stream."}
{"_id": "q_15845", "text": "Write a group of one or more protobuf objects to the file. Multiple\n object groups can be written by calling this method several times\n before closing stream or exiting the runtime context.\n\n The input protobuf objects get buffered and will be written down when\n the number of buffered objects exceed the `self._buffer_size`.\n\n Args:\n pb2_obj (*protobuf.message.Message): list of protobuf messages."}
{"_id": "q_15846", "text": "Returns joined game directory path relative to Steamapps"}
{"_id": "q_15847", "text": "Temporarily update the context to use the BlockContext for the given alias."}
{"_id": "q_15848", "text": "Find the first matching block in the current block_context"}
{"_id": "q_15849", "text": "Return a list of widget names for the provided field."}
{"_id": "q_15850", "text": "Allow reuse of a block within a template.\n\n {% reuse '_myblock' foo=bar %}\n\n If passed a list of block names, will use the first that matches:\n\n {% reuse list_of_block_names .... %}"}
{"_id": "q_15851", "text": "When dealing with optgroups, ensure that the value is properly force_text'd."}
{"_id": "q_15852", "text": "Waits until conditions is True or returns a non-None value.\n If any of the trait is still not present after timeout, raises a TimeoutException."}
{"_id": "q_15853", "text": "Waits until all traits are present.\n If any of the traits is still not present after timeout, raises a TimeoutException."}
{"_id": "q_15854", "text": "Set a list of exceptions that should be ignored inside the wait loop."}
{"_id": "q_15855", "text": "Message instances are namedtuples of type `Message`.\n The date field is already serialized in datetime.isoformat ECMA-262 format"}
{"_id": "q_15856", "text": "Send a message to a list of users without passing through `django.contrib.messages`\n\n :param users: an iterable containing the recipients of the messages\n :param level: message level\n :param message_text: the string containing the message\n :param extra_tags: like the Django api, a string containing extra tags for the message\n :param date: a date, different than the default timezone.now\n :param url: an optional url\n :param fail_silently: not used at the moment"}
{"_id": "q_15857", "text": "Send a message to all users aka broadcast.\n\n :param level: message level\n :param message_text: the string containing the message\n :param extra_tags: like the Django api, a string containing extra tags for the message\n :param date: a date, different than the default timezone.now\n :param url: an optional url\n :param fail_silently: not used at the moment"}
{"_id": "q_15858", "text": "Mark message instance as read for user.\n Returns True if the message was `unread` and thus actually marked as `read` or False in case\n it is already `read` or it does not exist at all.\n\n :param user: user instance for the recipient\n :param message: a Message instance to mark as read"}
{"_id": "q_15859", "text": "If the message level was configured for being stored and request.user\n is not anonymous, save it to the database. Otherwise, let some other\n class handle the message.\n\n Notice: controls like checking the message is not empty and the level\n is above the filter need to be performed here, but it could happen\n they'll be performed again later if the message does not need to be\n stored."}
{"_id": "q_15860", "text": "persistent messages are already in the database inside the 'archive',\n so we can say they're already \"stored\".\n Here we put them in the inbox, or remove from the inbox in case the\n messages were iterated.\n\n messages contains only new msgs if self.used==True\n else contains both new and unread messages"}
{"_id": "q_15861", "text": "Like the base class method, prepares a list of messages for storage\n but avoid to do this for `models.Message` instances."}
{"_id": "q_15862", "text": "Execute Main.Volume.\n\n Returns int"}
{"_id": "q_15863", "text": "Send a command string to the amplifier."}
{"_id": "q_15864", "text": "Power the device off."}
{"_id": "q_15865", "text": "Power the device on."}
{"_id": "q_15866", "text": "Set volume level of the device. Accepts integer values 0-200."}
{"_id": "q_15867", "text": "Select a source from the list of sources."}
{"_id": "q_15868", "text": "Main entry point for script."}
{"_id": "q_15869", "text": "Generates crc32. Modulo keep the value within int range."}
{"_id": "q_15870", "text": "initializes a base logger\n\n you can use this to init a logger in any of your files.\n this will use config.py's LOGGER param and logging.dictConfig to configure\n the logger for you.\n\n :param int|logging.LEVEL base_level: desired base logging level\n :param int|logging.LEVEL verbose_level: desired verbose logging level\n :param dict logging_dict: dictConfig based configuration.\n used to override the default configuration from config.py\n :rtype: `python logger`"}
{"_id": "q_15871", "text": "Configure an object with a user-supplied factory."}
{"_id": "q_15872", "text": "sets the global verbosity level for console and the jocker_lgr logger.\n\n :param bool is_verbose_output: should be output be verbose"}
{"_id": "q_15873", "text": "generates a Dockerfile, builds an image and pushes it to DockerHub\n\n A `Dockerfile` will be generated by Jinja2 according to the `varsfile`\n imported. If build is true, an image will be generated from the\n `outputfile` which is the generated Dockerfile and committed to the\n image:tag string supplied to `build`.\n If push is true, a build will be triggered and the produced image\n will be pushed to DockerHub upon completion.\n\n :param string varsfile: path to file with variables.\n :param string templatefile: path to template file to use.\n :param string outputfile: path to output Dockerfile.\n :param string configfile: path to yaml file with docker-py config.\n :param bool dryrun: mock run.\n :param build: False or the image:tag to build to.\n :param push: False or the image:tag to build to. (triggers build)\n :param bool verbose: verbose output."}
{"_id": "q_15874", "text": "since the push process outputs a single unicode string consisting of\n multiple JSON formatted \"status\" lines, we need to parse it so that it\n can be read as multiple strings.\n\n This will receive the string as an input, count curly braces and ignore\n any newlines. When the curly braces stack is 0, it will append the\n entire string it has read up until then to a list and so forth.\n\n :param string: the string to parse\n :rtype: list of JSON's"}
{"_id": "q_15875", "text": "Template filter that obfuscates whatever text it is applied to. The text is\n supposed to be a URL, but it will obfuscate anything.\n\n Usage:\n Extremely unfriendly URL:\n {{ \"/my-secret-path/\"|obfuscate }}\n\n Include some SEO juice:\n {{ \"/my-secret-path/\"|obfuscate:\"some SEO friendly text\" }}"}
{"_id": "q_15876", "text": "It will return all hyper links found in the mr-jatt page for download"}
{"_id": "q_15877", "text": "It will the resource URL if song is found,\n\t\tOtherwise it will return the list of songs that can be downloaded"}
{"_id": "q_15878", "text": "It will parse google html response\n\t\t\tand return the first url"}
{"_id": "q_15879", "text": "song_name is a list of strings\n\t\twebsite is a string\n\t\tIt will return the url from where music file needs to be downloaded"}
{"_id": "q_15880", "text": "Uploads an image file to Imgur"}
{"_id": "q_15881", "text": "Return true if the IP address is in dotted decimal notation."}
{"_id": "q_15882", "text": "Return true if the IP address is in binary notation."}
{"_id": "q_15883", "text": "Return true if the netmask is in bits notatation."}
{"_id": "q_15884", "text": "Return true if the netmask is in wildcard bits notatation."}
{"_id": "q_15885", "text": "Decimal to dotted decimal notation conversion."}
{"_id": "q_15886", "text": "Hexadecimal to decimal conversion."}
{"_id": "q_15887", "text": "Generate a table to convert a whole byte to binary.\n This code was taken from the Python Cookbook, 2nd edition - O'Reilly."}
{"_id": "q_15888", "text": "Bits to decimal conversion."}
{"_id": "q_15889", "text": "Internally used to convert IPs and netmasks to other notations."}
{"_id": "q_15890", "text": "Sum two IP addresses."}
{"_id": "q_15891", "text": "Subtract two IP addresses."}
{"_id": "q_15892", "text": "Return the bits notation of the netmask."}
{"_id": "q_15893", "text": "Return the wildcard bits notation of the netmask."}
{"_id": "q_15894", "text": "Set the IP address and the netmask."}
{"_id": "q_15895", "text": "Change the current netmask."}
{"_id": "q_15896", "text": "Return true if the given address in amongst the usable addresses,\n or if the given CIDR is contained in this one."}
{"_id": "q_15897", "text": "Copy a file from one bucket into another"}
{"_id": "q_15898", "text": "Recursively upload a ``folder`` into a backet.\n\n :param bucket: bucket where to upload the folder to\n :param folder: the folder location in the local file system\n :param key: Optional key where the folder is uploaded\n :param skip: Optional list of files to skip\n :param content_types: Optional dictionary mapping suffixes to\n content types\n :return: a coroutine"}
{"_id": "q_15899", "text": "Decode AQICN observation response JSON into python object."}
{"_id": "q_15900", "text": "The list of logical paths which are used to search for an asset.\n This property makes sense only if the attributes was created with\n logical path.\n\n It is assumed that the logical path can be a directory containing a\n file named ``index`` with the same suffix.\n\n Example::\n\n >>> attrs = AssetAttributes(environment, 'js/app.js')\n >>> attrs.search_paths\n ['js/app.js', 'js/app/index.js']\n\n >>> attrs = AssetAttributes(environment, 'js/app/index.js')\n >>> attrs.search_paths\n ['js/models/index.js']"}
{"_id": "q_15901", "text": "MIME type of the asset."}
{"_id": "q_15902", "text": "Implicit MIME type of the asset by its compilers."}
{"_id": "q_15903", "text": "Implicit format extension on the asset by its compilers."}
{"_id": "q_15904", "text": "Remove passed `processor` for passed `mimetype`. If processor for\n this MIME type does not found in the registry, nothing happens."}
{"_id": "q_15905", "text": "Register default compilers, preprocessors and MIME types."}
{"_id": "q_15906", "text": "Trigger an ``event`` on this channel"}
{"_id": "q_15907", "text": "Connect to a Pusher websocket"}
{"_id": "q_15908", "text": "This nasty piece of code is here to force the loading of IDA's\n Qt bindings.\n Without it, Python attempts to load PySide from the site-packages\n directory, and failing, as it does not play nicely with IDA.\n\n via: github.com/tmr232/Cute"}
{"_id": "q_15909", "text": "Add the given plugin name to the list of plugin names registered in\n the current IDB.\n Note that this implicitly uses the open IDB via the idc iterface."}
{"_id": "q_15910", "text": "Export the given settings instance to the given file system path.\n\n type settings: IDASettingsInterface\n type config_path: str"}
{"_id": "q_15911", "text": "Constant time string comparison"}
{"_id": "q_15912", "text": "Decodes a limited set of HTML entities."}
{"_id": "q_15913", "text": "Set signature passphrases"}
{"_id": "q_15914", "text": "Set encryption passphrases"}
{"_id": "q_15915", "text": "Set algorithms used for sealing. Defaults can not be overridden."}
{"_id": "q_15916", "text": "Get algorithms used for sealing"}
{"_id": "q_15917", "text": "Private function for setting options used for sealing"}
{"_id": "q_15918", "text": "Verify sealed data signature"}
{"_id": "q_15919", "text": "Encode data with specific algorithm"}
{"_id": "q_15920", "text": "Decode data with specific algorithm"}
{"_id": "q_15921", "text": "Add signature to data"}
{"_id": "q_15922", "text": "Verify and remove signature"}
{"_id": "q_15923", "text": "Read header from data"}
{"_id": "q_15924", "text": "Remove header from data"}
{"_id": "q_15925", "text": "Get algorithm info"}
{"_id": "q_15926", "text": "Generate and return PBKDF2 key"}
{"_id": "q_15927", "text": "Returns the response that should be used for any given exception.\n\n By default we handle the REST framework `APIException`, and also\n Django's builtin `Http404` and `PermissionDenied` exceptions.\n\n Any unhandled exceptions may return `None`, which will cause a 500 error\n to be raised."}
{"_id": "q_15928", "text": "Returns a list of tables for the given user."}
{"_id": "q_15929", "text": "Fetch packages and summary from Crates.io\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"}
{"_id": "q_15930", "text": "Extracts the identifier from an item depending on its type."}
{"_id": "q_15931", "text": "Extracts the update time from an item.\n\n Depending on the item, the timestamp is extracted from the\n 'updated_at' or 'fetched_on' fields.\n This date is converted to UNIX timestamp format.\n\n :param item: item generated by the backend\n\n :returns: a UNIX timestamp"}
{"_id": "q_15932", "text": "Get crate team owner"}
{"_id": "q_15933", "text": "Get crate user owners"}
{"_id": "q_15934", "text": "Get crate version downloads"}
{"_id": "q_15935", "text": "Get crate data"}
{"_id": "q_15936", "text": "Get Crates.io summary"}
{"_id": "q_15937", "text": "Get crates in alphabetical order"}
{"_id": "q_15938", "text": "Get a crate by its ID"}
{"_id": "q_15939", "text": "Get crate attribute"}
{"_id": "q_15940", "text": "Return the items from Crates.io API using pagination"}
{"_id": "q_15941", "text": "Fetch questions from the Kitsune url.\n\n :param category: the category of items to fetch\n :offset: obtain questions after offset\n :returns: a generator of questions"}
{"_id": "q_15942", "text": "Retrieve questions from older to newer updated starting offset"}
{"_id": "q_15943", "text": "Fetch items from the ReMo url.\n\n The method retrieves, from a ReMo URL, the set of items\n of the given `category`.\n\n :param category: the category of items to fetch\n :param offset: obtain items after offset\n :returns: a generator of items"}
{"_id": "q_15944", "text": "Extracts the category from a ReMo item.\n\n This backend generates items types 'event', 'activity'\n or 'user'. To guess the type of item, the code will look\n for unique fields."}
{"_id": "q_15945", "text": "Retrieve all items for category using pagination"}
{"_id": "q_15946", "text": "The buffer list this instance operates on.\n\n Only available in mode != AIOBLOCK_MODE_POLL.\n\n Changes on a submitted transfer are not fully applied until its\n next submission: kernel will still be using original buffer list."}
{"_id": "q_15947", "text": "IO priority for this instance."}
{"_id": "q_15948", "text": "Cancels all pending IO blocks.\n Waits until all non-cancellable IO blocks finish.\n De-initialises AIO context."}
{"_id": "q_15949", "text": "Submits transfers.\n\n block_list (list of AIOBlock)\n The IO blocks to hand off to kernel.\n\n Returns the number of successfully submitted blocks."}
{"_id": "q_15950", "text": "Returns a list of event data from submitted IO blocks.\n\n min_nr (int, None)\n When timeout is None, minimum number of events to collect before\n returning.\n If None, waits for all submitted events.\n nr (int, None)\n Maximum number of events to return.\n If None, set to maxevents given at construction or to the number of\n currently submitted events, whichever is larger.\n timeout (float, None):\n Time to wait for events.\n If None, become blocking.\n\n Returns a list of 3-tuples, containing:\n - completed AIOBlock instance\n - res, file-object-type-dependent value\n - res2, another file-object-type-dependent value"}
{"_id": "q_15951", "text": "Fetch events from the MozillaClub URL.\n\n The method retrieves, from a MozillaClub URL, the\n events. The data is a Google spreadsheet retrieved using\n the feed API REST.\n\n :param category: the category of items to fetch\n\n :returns: a generator of events"}
{"_id": "q_15952", "text": "Retrieve all cells from the spreadsheet."}
{"_id": "q_15953", "text": "Parse the MozillaClub spreadsheet feed cells json."}
{"_id": "q_15954", "text": "This function will extract a single file from the remote zip without downloading\n the entire zip file. The filename argument should match whatever is in the 'filename'\n key of the tableOfContents."}
{"_id": "q_15955", "text": "Does photometry and estimates uncertainties by calculating the scatter around a linear fit to the data\n in each orientation. This function is called by other functions and generally the user will not need\n to interact with it directly."}
{"_id": "q_15956", "text": "Creates the figure shown in ``adjust_aperture`` for visualization purposes. Called by other functions\n and generally not called by the user directly.\n\n Args: \n img: The data frame to be passed through to be plotted. A cutout of the ``integrated_postcard``"}
{"_id": "q_15957", "text": "Identify the centroid positions for the target star at all epochs. Useful for verifying that there is\n no correlation between flux and position, as might be expected for high proper motion stars."}
{"_id": "q_15958", "text": "Identify the \"expected\" flux value at the time of each observation based on the \n Kepler long-cadence data, to ensure variations observed are not the effects of a single\n large starspot. Only works if the target star was targeted for long or short cadence\n observations during the primary mission."}
{"_id": "q_15959", "text": "Load default permission factory."}
{"_id": "q_15960", "text": "Create Invenio-Records-UI blueprint.\n\n The factory installs one URL route per endpoint defined, and adds an\n error handler for rendering tombstones.\n\n :param endpoints: Dictionary of endpoints to be installed. See usage\n documentation for further details.\n :returns: The initialized blueprint."}
{"_id": "q_15961", "text": "r\"\"\"Display default view.\n\n Sends record_viewed signal and renders template.\n\n :param pid: PID object.\n :param record: Record object.\n :param template: Template to render.\n :param \\*\\*kwargs: Additional view arguments based on URL rule.\n :returns: The rendered template."}
{"_id": "q_15962", "text": "r\"\"\"Record serialization view.\n\n Serializes record with given format and renders record export template.\n\n :param pid: PID object.\n :param record: Record object.\n :param template: Template to render.\n :param \\*\\*kwargs: Additional view arguments based on URL rule.\n :return: The rendered template."}
{"_id": "q_15963", "text": "Send a Timer metric calculating duration of execution of the provided callable"}
{"_id": "q_15964", "text": "Send a Timer metric with the specified duration in milliseconds"}
{"_id": "q_15965", "text": "Send a GaugeDelta metric to change a Gauge by the specified value"}
{"_id": "q_15966", "text": "Override parent by buffering the metric instead of sending now"}
{"_id": "q_15967", "text": "Return a batch client with same settings of the client"}
{"_id": "q_15968", "text": "My permission factory."}
{"_id": "q_15969", "text": "Return a TCP batch client with same settings of the TCP client"}
{"_id": "q_15970", "text": "Return a TCPClient with same settings of the batch TCP client"}
{"_id": "q_15971", "text": "tries to convert a Python object into an OpenMath object\n this is not a replacement for using a Converter for exporting Python objects\n instead, it is used conveniently building OM objects in DSL embedded in Python\n inparticular, it converts Python functions into OMBinding objects using lambdaOM as the binder"}
{"_id": "q_15972", "text": "Converts a term into OpenMath, using either a converter or the interpretAsOpenMath method"}
{"_id": "q_15973", "text": "Convert OpenMath object to Python"}
{"_id": "q_15974", "text": "Convert Python object to OpenMath"}
{"_id": "q_15975", "text": "Register a conversion from Python to OpenMath\n\n :param py_class: A Python class the conversion is attached to, or None\n :type py_class: None, type\n\n :param converter: A conversion function or an OpenMath object\n :type converter: Callable, OMAny\n\n :rtype: None\n\n ``converter`` will used to convert any object of type ``py_class``,\n or any object if ``py_class`` is ``None``. If ``converter`` is an\n OpenMath object, it is returned immediately. If it is a callable, it\n is called with the Python object as paramter; in this case, it must\n either return an OpenMath object, or raise an exception. The\n special exception ``CannotConvertError`` can be used to signify that\n ``converter`` does not know how to convert the current object, and that\n ``to_openmath`` shall continue with the other converters. Any other\n exception stops conversion immediately.\n\n Converters registered by this function are called in order from the\n most recent to the oldest."}
{"_id": "q_15976", "text": "Register a conversion from OpenMath to Python\n\n This function has two forms. A three-arguments one:\n\n :param cd: A content dictionary name\n :type cd: str\n\n :param name: A symbol name\n :type name: str\n\n :param converter: A conversion function, or a Python object\n :type: Callable, Any\n\n Any object of type ``openmath.OMSymbol``, with content\n dictionary equal to ``cd`` and name equal to ``name`` will be converted\n using ``converter``. Also, any object of type ``openmath.OMApplication``\n whose first child is an ``openmath.OMSymbol`` as above will be converted\n using ``converter``. If ``converter`` is a callable, it will be called with the\n OpenMath object as parameter; otherwise ``converter`` will be returned.\n\n In the two-argument form\n\n :param cd: A subclass of ``OMAny``\n :type cd: type\n\n :param name: A conversion function\n :type name: Callable\n\n Any object of type ``cd`` will be passed to ``name()``, and the\n result will be returned. This forms is mainly to override default\n conversions for basic OpenMath tags (OMInteger, OMString, etc.). It\n is discouraged to use it for ``OMSymbol`` and ``OMApplication``."}
{"_id": "q_15977", "text": "Used to initialize redis with app object"}
{"_id": "q_15978", "text": "django_any birds language parser"}
{"_id": "q_15979", "text": "Register form field data function.\n \n Could be used as decorator"}
{"_id": "q_15980", "text": "Lowest value generator.\n\n Separated from __call__, because it seems that python\n cache __call__ reference on module import"}
{"_id": "q_15981", "text": "Dump single field."}
{"_id": "q_15982", "text": "Disassemble serialized protocol buffers file."}
{"_id": "q_15983", "text": "Find all missing imports in list of Pbd instances."}
{"_id": "q_15984", "text": "Sometimes return None if field is not required\n\n >>> result = any_form_field(forms.BooleanField(required=False))\n >>> result in ['', 'True', 'False']\n True"}
{"_id": "q_15985", "text": "Selection from field.choices"}
{"_id": "q_15986", "text": "Return random value for DecimalField\n\n >>> result = any_form_field(forms.DecimalField(max_value=100, min_value=11, max_digits=4, decimal_places = 2))\n >>> type(result)\n <type 'str'>\n >>> from decimal import Decimal\n >>> Decimal(result) >= 11, Decimal(result) <= Decimal('99.99')\n (True, True)"}
{"_id": "q_15987", "text": "Return random value for EmailField\n\n >>> result = any_form_field(forms.EmailField(min_length=10, max_length=30))\n >>> type(result)\n <type 'str'>\n >>> len(result) <= 30, len(result) >= 10\n (True, True)"}
{"_id": "q_15988", "text": "Return random value for DateTimeField\n\n >>> result = any_form_field(forms.DateTimeField())\n >>> type(result)\n <type 'str'>"}
{"_id": "q_15989", "text": "Return random value for FloatField\n\n >>> result = any_form_field(forms.FloatField(max_value=200, min_value=100))\n >>> type(result)\n <type 'str'>\n >>> float(result) >=100, float(result) <=200\n (True, True)"}
{"_id": "q_15990", "text": "Return random value for IntegerField\n\n >>> result = any_form_field(forms.IntegerField(max_value=200, min_value=100))\n >>> type(result)\n <type 'str'>\n >>> int(result) >=100, int(result) <=200\n (True, True)"}
{"_id": "q_15991", "text": "Return random value for TimeField\n\n >>> result = any_form_field(forms.TimeField())\n >>> type(result)\n <type 'str'>"}
{"_id": "q_15992", "text": "Return random value for ChoiceField\n\n >>> CHOICES = [('YNG', 'Child'), ('OLD', 'Parent')]\n >>> result = any_form_field(forms.ChoiceField(choices=CHOICES))\n >>> type(result)\n <type 'str'>\n >>> result in ['YNG', 'OLD']\n True\n >>> typed_result = any_form_field(forms.TypedChoiceField(choices=CHOICES))\n >>> typed_result in ['YNG', 'OLD']\n True"}
{"_id": "q_15993", "text": "Return random value for MultipleChoiceField\n\n >>> CHOICES = [('YNG', 'Child'), ('MIDDLE', 'Parent') ,('OLD', 'GrandParent')]\n >>> result = any_form_field(forms.MultipleChoiceField(choices=CHOICES))\n >>> type(result)\n <type 'str'>"}
{"_id": "q_15994", "text": "Return one of first ten items for field queryset"}
{"_id": "q_15995", "text": "Deploy the app to PYPI.\n\n Args:\n msg (str, optional): Description"}
{"_id": "q_15996", "text": "Deploy a version tag."}
{"_id": "q_15997", "text": "Sometimes return None if field could be blank"}
{"_id": "q_15998", "text": "Evaluate an OpenMath symbol describing a global Python object\n\n EXAMPLES::\n\n >>> from openmath.convert_pickle import to_python\n >>> from openmath.convert_pickle import load_python_global\n >>> load_python_global('math', 'sin')\n <built-in function sin>\n\n >>> from openmath import openmath as om\n >>> o = om.OMSymbol(cdbase=\"http://python.org/\", cd='math', name='sin')\n >>> to_python(o)\n <built-in function sin>"}
{"_id": "q_15999", "text": "Apply the setstate protocol to initialize `inst` from `state`.\n\n INPUT:\n\n - ``inst`` -- a raw instance of a class\n - ``state`` -- the state to restore; typically a dictionary mapping attribute names to their values\n\n EXAMPLES::\n\n >>> from openmath.convert_pickle import cls_build\n >>> class A(object): pass\n >>> inst = A.__new__(A)\n >>> state = {\"foo\": 1, \"bar\": 4}\n >>> inst2 = cls_build(inst,state)\n >>> inst is inst2\n True\n >>> inst.foo\n 1\n >>> inst.bar\n 4"}
{"_id": "q_16000", "text": "Convert a list of OM objects into an OM object\n\n EXAMPLES::\n\n >>> from openmath import openmath as om\n >>> from openmath.convert_pickle import PickleConverter\n >>> converter = PickleConverter()\n >>> o = converter.OMList([om.OMInteger(2), om.OMInteger(2)]); o\n OMApplication(elem=OMSymbol(name='list', cd='Python', id=None, cdbase='http://python.org/'),\n arguments=[OMInteger(integer=2, id=None),\n OMInteger(integer=2, id=None)],\n id=None, cdbase=None)\n >>> converter.to_python(o)\n [2, 2]"}
{"_id": "q_16001", "text": "Decodes a PackBit encoded data."}
{"_id": "q_16002", "text": "Encodes data using PackBits encoding."}
{"_id": "q_16003", "text": "Helper function to record and log an error message\n\n :param line_data: dict\n :param error_info: dict\n :param logger:\n :param log_level: int\n :return:"}
{"_id": "q_16004", "text": "1. get a list of CDS with the same parent\n 2. sort according to strand\n 3. calculate and validate phase"}
{"_id": "q_16005", "text": "Format a given number.\n\n Format a number, with comma-separated thousands and\n custom precision/decimal places\n\n Localise by overriding the precision and thousand / decimal separators\n 2nd parameter `precision` can be an object matching `settings.number`\n\n Args:\n number (TYPE): Description\n precision (TYPE): Description\n thousand (TYPE): Description\n decimal (TYPE): Description\n\n Returns:\n name (TYPE): Description"}
{"_id": "q_16006", "text": "Format a number into currency.\n\n Usage: accounting.formatMoney(number, symbol, precision, thousandsSep,\n decimalSep, format)\n defaults: (0, \"$\", 2, \",\", \".\", \"%s%v\")\n Localise by overriding the symbol, precision,\n thousand / decimal separators and format\n Second param can be an object matching `settings.currency`\n which is the easiest way.\n\n Args:\n number (TYPE): Description\n precision (TYPE): Description\n thousand (TYPE): Description\n decimal (TYPE): Description\n\n Returns:\n name (TYPE): Description"}
{"_id": "q_16007", "text": "given a filename, return the ABFs ID string."}
{"_id": "q_16008", "text": "given the bytestring ABF header, make and launch HTML."}
{"_id": "q_16009", "text": "iterate over every sweep"}
{"_id": "q_16010", "text": "read the header and populate self with information about comments"}
{"_id": "q_16011", "text": "given a sweep, return the protocol as condensed sequence.\n This is better for comparing similarities and determining steps.\n There should be no duplicate numbers."}
{"_id": "q_16012", "text": "return the average of part of the current sweep."}
{"_id": "q_16013", "text": "Return a sweep which is the average of multiple sweeps.\n For now, standard deviation is lost."}
{"_id": "q_16014", "text": "create kernel based on this ABF info."}
{"_id": "q_16015", "text": "Get the filtered sweepY of the current sweep.\n Only works if self.kernel has been generated."}
{"_id": "q_16016", "text": "given a key, return a list of values from the matrix with that key."}
{"_id": "q_16017", "text": "given a recarray, return it as a list of dicts."}
{"_id": "q_16018", "text": "show everything we can about an object's projects and methods."}
{"_id": "q_16019", "text": "Put 2d numpy data into a temporary HTML file."}
{"_id": "q_16020", "text": "mono-exponential curve."}
{"_id": "q_16021", "text": "Try to format anything as a 2D matrix with column names."}
{"_id": "q_16022", "text": "save something to a pickle file"}
{"_id": "q_16023", "text": "determine the comment cooked in the protocol."}
{"_id": "q_16024", "text": "given an ABF file name, return the ABF of its parent."}
{"_id": "q_16025", "text": "given an ABF and the groups dict, return the ID of its parent."}
{"_id": "q_16026", "text": "given an ABF, find the parent, return that line of experiments.txt"}
{"_id": "q_16027", "text": "May be given an ABF object or filename."}
{"_id": "q_16028", "text": "return an \"FTP\" object after logging in."}
{"_id": "q_16029", "text": "upload everything from localFolder into the current FTP folder."}
{"_id": "q_16030", "text": "use the GUI to ask for a string."}
{"_id": "q_16031", "text": "Import a blosc array into a numpy array.\n\n Arguments:\n data: A blosc packed numpy array\n\n Returns:\n A numpy array with data from a blosc compressed array"}
{"_id": "q_16032", "text": "Export a numpy array to a blosc array.\n\n Arguments:\n array: The numpy array to compress to blosc array\n\n Returns:\n Bytes/String. A blosc compressed array"}
{"_id": "q_16033", "text": "Add a workspace entry in user config file."}
{"_id": "q_16034", "text": "List all available workspaces."}
{"_id": "q_16035", "text": "Return True if workspace contains repository name."}
{"_id": "q_16036", "text": "Synchronise workspace's repositories."}
{"_id": "q_16037", "text": "check out the arguments and figure out what to do."}
{"_id": "q_16038", "text": "Clone a repository."}
{"_id": "q_16039", "text": "Converts an array to its voxel list.\n\n Arguments:\n array (numpy.ndarray): A numpy nd array. This must be boolean!\n\n Returns:\n A list of n-tuples"}
{"_id": "q_16040", "text": "Execute update subcommand."}
{"_id": "q_16041", "text": "Print repository update."}
{"_id": "q_16042", "text": "Execute command with os.popen and return output."}
{"_id": "q_16043", "text": "Import a png file into a numpy array.\n\n Arguments:\n png_filename (str): A string filename of a png datafile\n\n Returns:\n A numpy array with data from the png file"}
{"_id": "q_16044", "text": "Export a numpy array to a png file.\n\n Arguments:\n filename (str): A filename to which to save the png data\n numpy_data (numpy.ndarray OR str): The numpy array to save to png.\n OR a string: If a string is provded, it should be a binary png str\n\n Returns:\n str. The expanded filename that now holds the png data\n\n Raises:\n ValueError: If the save fails; for instance if the binary string data\n cannot be coerced into a png, or perhaps your numpy.ndarray is\n ill-formed?"}
{"_id": "q_16045", "text": "provide all stats on the first AP."}
{"_id": "q_16046", "text": "Print workspace status."}
{"_id": "q_16047", "text": "Print repository status."}
{"_id": "q_16048", "text": "easy way to plot a gain function."}
{"_id": "q_16049", "text": "draw vertical lines at comment points. Defaults to seconds."}
{"_id": "q_16050", "text": "stamp the bottom with file info."}
{"_id": "q_16051", "text": "makes a new matplotlib figure with default dims and DPI.\n Also labels it with pA or mV depending on ABF."}
{"_id": "q_16052", "text": "Import a TIFF file into a numpy array.\n\n Arguments:\n tiff_filename: A string filename of a TIFF datafile\n\n Returns:\n A numpy array with data from the TIFF file"}
{"_id": "q_16053", "text": "When dynamic, not all argument values may be available."}
{"_id": "q_16054", "text": "Summarizes the trace of values used to update the DynamicArgs\n and the arguments subsequently returned. May be used to\n implement the summary method."}
{"_id": "q_16055", "text": "Takes as input a list or tuple of two elements. First the\n value returned by incrementing by 'stepsize' followed by the\n value returned after a 'stepsize' decrement."}
{"_id": "q_16056", "text": "given a filename or ABF object, try to analyze it."}
{"_id": "q_16057", "text": "Get version from package resources."}
{"_id": "q_16058", "text": "frame the current matplotlib plot with ABF info, and optionally save it.\n Note that this is entirely independent of the ABFplot class object.\n if saveImage is False, show it instead.\n\n Datatype should be:\n * plot\n * experiment"}
{"_id": "q_16059", "text": "make sure a figure is ready."}
{"_id": "q_16060", "text": "save the existing figure. does not close it."}
{"_id": "q_16061", "text": "plot every sweep of an ABF file."}
{"_id": "q_16062", "text": "plot the current sweep protocol."}
{"_id": "q_16063", "text": "plot the protocol of all sweeps."}
{"_id": "q_16064", "text": "Same as mix_and_match, but using the @option decorator."}
{"_id": "q_16065", "text": "Given ABFs and TIFs formatted long style, rename each of them to prefix their number with a different number.\n\n Example: 2017_10_11_0011.abf\n Becomes: 2017_10_11_?011.abf\n where ? can be any character."}
{"_id": "q_16066", "text": "Returns info regarding a particular dataset.\n\n Arugments:\n name (str): Dataset name\n\n Returns:\n dict: Dataset information"}
{"_id": "q_16067", "text": "Lists datasets in resources. Setting 'get_global_public' to 'True'\n will retrieve all public datasets in cloud. 'False' will get user's\n public datasets.\n\n Arguments:\n get_global_public (bool): True if user wants all public datasets in\n cloud. False if user wants only their\n public datasets.\n\n Returns:\n dict: Returns datasets in JSON format"}
{"_id": "q_16068", "text": "Show specific workspace."}
{"_id": "q_16069", "text": "Show details for all workspaces."}
{"_id": "q_16070", "text": "Get the base URL of the Remote.\n\n Arguments:\n None\n Returns:\n `str` base URL"}
{"_id": "q_16071", "text": "given files and cells, return a dict of files grouped by cell."}
{"_id": "q_16072", "text": "populate class properties relating to files in the folder."}
{"_id": "q_16073", "text": "generate list of cells with links. keep this simple.\n automatically generates splash page and regnerates frames."}
{"_id": "q_16074", "text": "generate a data view for every ABF in the project folder."}
{"_id": "q_16075", "text": "IC steps. See how hyperpol. step affects things."}
{"_id": "q_16076", "text": "repeated membrane tests."}
{"_id": "q_16077", "text": "fast sweeps, 1 step per sweep, for clean IV without fast currents."}
{"_id": "q_16078", "text": "combination of membrane test and IV steps."}
{"_id": "q_16079", "text": "OBSOLETE WAY TO INDEX A FOLDER."}
{"_id": "q_16080", "text": "A custom save method that handles figuring out when something is activated or deactivated."}
{"_id": "q_16081", "text": "It is impossible to delete an activatable model unless force is True. This function instead sets it to inactive."}
{"_id": "q_16082", "text": "Write to file_handle if supplied, othewise print output"}
{"_id": "q_16083", "text": "A helper method that supplies the root directory name given a\n timestamp."}
{"_id": "q_16084", "text": "The log contains the tids and corresponding specifications\n used during launch with the specifications in JSON format."}
{"_id": "q_16085", "text": "All launchers should call this method to write the info file\n at the end of the launch. The .info file is saved given\n setup_info supplied by _setup_launch into the\n root_directory. When called without setup_info, the existing\n info file is updated with the end-time."}
{"_id": "q_16086", "text": "Launches processes defined by process_commands, but only\n executes max_concurrency processes at a time; if a process\n completes and there are still outstanding processes to be\n executed, the next processes are run until max_concurrency is\n reached again."}
{"_id": "q_16087", "text": "A succinct summary of the Launcher configuration. Unlike the\n repr, a summary does not have to be complete but must supply\n key information relevant to the user."}
{"_id": "q_16088", "text": "The method that actually runs qsub to invoke the python\n process with the necessary commands to trigger the next\n collation step and next block of jobs."}
{"_id": "q_16089", "text": "This method handles static argument specifiers and cases where\n the dynamic specifiers cannot be queued before the arguments\n are known."}
{"_id": "q_16090", "text": "Aggregates all process_commands and the designated output files into a\n list, and outputs it as JSON, after which the wrapper script is called."}
{"_id": "q_16091", "text": "Performs consistency checks across all the launchers."}
{"_id": "q_16092", "text": "Launches all available launchers."}
{"_id": "q_16093", "text": "Helper to prompt the user for input on the commandline."}
{"_id": "q_16094", "text": "The implementation in the base class simply checks there is no\n clash between the metadata and data keys."}
{"_id": "q_16095", "text": "Returns the full path for saving the file, adding an extension\n and making the filename unique as necessary."}
{"_id": "q_16096", "text": "Returns a boolean indicating whether the filename has an\n appropriate extension for this class."}
{"_id": "q_16097", "text": "Data may be either a PIL Image object or a Numpy array."}
{"_id": "q_16098", "text": "return \"YYYY-MM-DD\" when the file was modified."}
{"_id": "q_16099", "text": "returns a dict of active folders with days as keys."}
{"_id": "q_16100", "text": "given some data and a list of X posistions, return the normal\n distribution curve as a Y point at each of those Xs."}
{"_id": "q_16101", "text": "return self.dataY around a time point. All units are seconds.\n if thisSweep==False, the time point is considered to be experiment time\n and an appropriate sweep may be selected. i.e., with 10 second\n sweeps and timePint=35, will select the 5s mark of the third sweep"}
{"_id": "q_16102", "text": "RETURNS filtered trace. Desn't filter it in place."}
{"_id": "q_16103", "text": "Reads in a file from disk.\n\n Arguments:\n in_file: The name of the file to read in\n in_fmt: The format of in_file, if you want to be explicit\n\n Returns:\n numpy.ndarray"}
{"_id": "q_16104", "text": "Compute invariants from an existing GraphML file using the remote\n grute graph services.\n\n Arguments:\n graph_file (str): The filename of the graphml file\n input_format (str): One of grute.GraphFormats\n invariants (str[]: Invariants.ALL)*: An array of grute.Invariants\n to compute on the graph\n email (str: self.email)*: The email to notify upon completion\n use_threads (bool: False)*: Whether to use Python threads to run\n computation in the background when waiting for the server to\n return the invariants\n callback (function: None)*: The function to run upon completion of\n the call, if using threads. (Will not be called if use_threads\n is set to False.)\n\n Returns:\n HTTP Response if use_threads is False. Otherwise, None\n\n Raises:\n ValueError: If the graph file does not exist, or if there are\n issues with the passed arguments\n RemoteDataUploadError: If there is an issue packing the file\n RemoteError: If the server experiences difficulty computing invs"}
{"_id": "q_16105", "text": "Convert a graph from one GraphFormat to another.\n\n Arguments:\n graph_file (str): Filename of the file to convert\n input_format (str): A grute.GraphFormats\n output_formats (str[]): A grute.GraphFormats\n email (str: self.email)*: The email to notify\n use_threads (bool: False)*: Whether to use Python threads to run\n computation in the background when waiting for the server\n callback (function: None)*: The function to run upon completion of\n the call, if using threads. (Will not be called if use_threads\n is set to False.)\n\n Returns:\n HTTP Response if use_threads=False. Else, no return value.\n\n Raises:\n RemoteDataUploadError: If there's an issue uploading the data\n RemoteError: If there's a server-side issue\n ValueError: If there's a problem with the supplied arguments"}
{"_id": "q_16106", "text": "Method to define the positional arguments and keyword order\n for pretty printing."}
{"_id": "q_16107", "text": "A succinct summary of the argument specifier. Unlike the repr,\n a summary does not have to be complete but must supply the\n most relevant information about the object to the user."}
{"_id": "q_16108", "text": "Returns the specs, the remaining kwargs and whether or not the\n constructor was called with kwarg or explicit specs."}
{"_id": "q_16109", "text": "Convenience method to inspect the available argument values in\n human-readable format. The ordering of keys is determined by\n how quickly they vary.\n\n The exclude list allows specific keys to be excluded for\n readability (e.g. to hide long, absolute filenames)."}
{"_id": "q_16110", "text": "The lexical sort order is specified by a list of string\n arguments. Each string is a key name prefixed by '+' or '-'\n for ascending and descending sort respectively. If the key is\n not found in the operand's set of varying keys, it is ignored."}
{"_id": "q_16111", "text": "Parses the log file generated by a launcher and returns\n dictionary with tid keys and specification values.\n\n Ordering can be maintained by setting dict_type to the\n appropriate constructor (i.e. OrderedDict). Keys are converted\n from unicode to strings for kwarg use."}
{"_id": "q_16112", "text": "Writes the supplied specifications to the log path. The data\n may be supplied as either as a an Args or as a list of\n dictionaries.\n\n By default, specifications will be appropriately appended to\n an existing log file. This can be disabled by setting\n allow_append to False."}
{"_id": "q_16113", "text": "Load all the files in a given directory selecting only files\n with the given extension if specified. The given kwargs are\n passed through to the normal constructor."}
{"_id": "q_16114", "text": "Loads the files that match the given pattern."}
{"_id": "q_16115", "text": "Convenience method to directly chain a pattern processed by\n FilePattern into a FileInfo instance.\n\n Note that if a default filetype has been set on FileInfo, the\n filetype argument may be omitted."}
{"_id": "q_16116", "text": "Load the file contents into the supplied Table using the\n specified key and filetype. The input table should have the\n filenames as values which will be replaced by the loaded\n data. If data_key is specified, this key will be used to index\n the loaded data to retrive the specified item."}
{"_id": "q_16117", "text": "Load the file contents into the supplied dataframe using the\n specified key and filetype."}
{"_id": "q_16118", "text": "Generates the union of the source.specs and the metadata\n dictionary loaded by the filetype object."}
{"_id": "q_16119", "text": "Push new data into the buffer. Resume looping if paused."}
{"_id": "q_16120", "text": "Inelegant for now, but lets you manually analyze every ABF in a folder."}
{"_id": "q_16121", "text": "create ID_plot.html of just intrinsic properties."}
{"_id": "q_16122", "text": "This applies a kernel to a signal through convolution and returns the result.\n\n Some magic is done at the edges so the result doesn't apprach zero:\n 1. extend the signal's edges with len(kernel)/2 duplicated values\n 2. perform the convolution ('same' mode)\n 3. slice-off the ends we added\n 4. return the same number of points as the original"}
{"_id": "q_16123", "text": "if the value is in the list, move it to the back and return it."}
{"_id": "q_16124", "text": "given a list and a list of items to be first, return the list in the\n same order except that it begins with each of the first items."}
{"_id": "q_16125", "text": "given a list of goofy ABF names, return it sorted intelligently.\n This places things like 16o01001 after 16901001."}
{"_id": "q_16126", "text": "return the semi-temporary user folder"}
{"_id": "q_16127", "text": "Check if the listener limit is hit and warn if needed."}
{"_id": "q_16128", "text": "Bind a listener to a particular event.\n\n Args:\n event (str): The name of the event to listen for. This may be any\n string value.\n listener (def or async def): The callback to execute when the event\n fires. This may be a sync or async function."}
{"_id": "q_16129", "text": "Schedule a coroutine for execution.\n\n Args:\n event (str): The name of the event that triggered this call.\n listener (async def): The async def that needs to be executed.\n *args: Any number of positional arguments.\n **kwargs: Any number of keyword arguments.\n\n The values of *args and **kwargs are passed, unaltered, to the async\n def when generating the coro. If there is an exception generating the\n coro, such as the wrong number of arguments, the emitter's error event\n is triggered. If the triggering event _is_ the emitter's error event\n then the exception is reraised. The reraised exception may show in\n debug mode for the event loop but is otherwise silently dropped."}
{"_id": "q_16130", "text": "Execute a sync function.\n\n Args:\n event (str): The name of the event that triggered this call.\n listener (def): The def that needs to be executed.\n *args: Any number of positional arguments.\n **kwargs: Any number of keyword arguments.\n\n The values of *args and **kwargs are passed, unaltered, to the def\n when exceuting. If there is an exception executing the def, such as the\n wrong number of arguments, the emitter's error event is triggered. If\n the triggering event _is_ the emitter's error event then the exception\n is reraised. The reraised exception may show in debug mode for the\n event loop but is otherwise silently dropped."}
{"_id": "q_16131", "text": "Dispatch an event to a listener.\n\n Args:\n event (str): The name of the event that triggered this call.\n listener (def or async def): The listener to trigger.\n *args: Any number of positional arguments.\n **kwargs: Any number of keyword arguments.\n\n This method inspects the listener. If it is a def it dispatches the\n listener to a method that will execute that def. If it is an async def\n it dispatches it to a method that will schedule the resulting coro with\n the event loop."}
{"_id": "q_16132", "text": "Get the number of listeners for the event.\n\n Args:\n event (str): The event for which to count all listeners.\n\n The resulting count is a combination of listeners added using\n 'on'/'add_listener' and 'once'."}
{"_id": "q_16133", "text": "Converts a RAMON object list to a JSON-style dictionary. Useful for going\n from an array of RAMONs to a dictionary, indexed by ID.\n\n Arguments:\n ramons (RAMON[]): A list of RAMON objects\n flatten (boolean: False): Not implemented\n\n Returns:\n dict: A python dictionary of RAMON objects."}
{"_id": "q_16134", "text": "Takes str or int, returns class type"}
{"_id": "q_16135", "text": "Deletes a channel given its name, name of its project\n , and name of its dataset.\n\n Arguments:\n channel_name (str): Channel name\n project_name (str): Project name\n dataset_name (str): Dataset name\n\n Returns:\n bool: True if channel deleted, False if not"}
{"_id": "q_16136", "text": "Convert each TIF to PNG. Return filenames of new PNGs."}
{"_id": "q_16137", "text": "expects a folder of ABFs."}
{"_id": "q_16138", "text": "Add a new dataset to the ingest.\n\n Arguments:\n dataset_name (str): Dataset Name is the overarching name of the\n research effort. Standard naming convention is to do\n LabNamePublicationYear or LeadResearcherCurrentYear.\n imagesize (int, int, int): Image size is the pixel count\n dimensions of the data. For example is the data is stored\n as a series of 100 slices each 2100x2000 pixel TIFF images,\n the X,Y,Z dimensions are (2100, 2000, 100).\n voxelres (float, float, float): Voxel Resolution is the number\n of voxels per unit pixel. We store X,Y,Z voxel resolution\n separately.\n offset (int, int, int): If your data is not well aligned and\n there is \"excess\" image data you do not wish to examine, but\n are present in your images, offset is how you specify where\n your actual image starts. Offset is provided a pixel\n coordinate offset from origin which specifies the \"actual\"\n origin of the image. The offset is for X,Y,Z dimensions.\n timerange (int, int): Time Range is a parameter to support\n storage of Time Series data, so the value of the tuple is a\n 0 to X range of how many images over time were taken. It\n takes 2 inputs timeStepStart and timeStepStop.\n scalinglevels (int): Scaling levels is the number of levels the\n data is scalable to (how many zoom levels are present in the\n data). The highest resolution of the data is at scaling level\n 0, and for each level up the data is down sampled by 2x2\n (per slice). To learn more about the sampling service used,\n visit the the propagation service page.\n scaling (int): Scaling is the scaling method of the data being\n stored. 0 corresponds to a Z-slice orientation (as in a\n collection of tiff images in which each tiff is a slice on\n the z plane) where data will be scaled only on the xy plane,\n not the z plane. 1 corresponds to an isotropic orientation\n (in which each tiff is a slice on the y plane) where data\n is scaled along all axis.\n\n Returns:\n None"}
{"_id": "q_16139", "text": "Genarate ND json object."}
{"_id": "q_16140", "text": "Generate the project dictionary."}
{"_id": "q_16141", "text": "Genarate the project dictionary."}
{"_id": "q_16142", "text": "Identify the image size using the data location and other parameters"}
{"_id": "q_16143", "text": "Try to post data to the server."}
{"_id": "q_16144", "text": "Find path for given workspace and|or repository."}
{"_id": "q_16145", "text": "Return the project info for a given token.\n\n Arguments:\n token (str): Token to return information for\n\n Returns:\n JSON: representation of proj_info"}
{"_id": "q_16146", "text": "Insert new metadata into the OCP metadata database.\n\n Arguments:\n token (str): Token of the datum to set\n data (str): A dictionary to insert as metadata. Include `secret`.\n\n Returns:\n json: Info of the inserted ID (convenience) or an error message.\n\n Throws:\n RemoteDataUploadError: If the token is already populated, or if\n there is an issue with your specified `secret` key."}
{"_id": "q_16147", "text": "Get a response object for a given url.\n\n Arguments:\n url (str): The url make a get to\n token (str): The authentication token\n\n Returns:\n obj: The response object"}
{"_id": "q_16148", "text": "Returns a post resquest object taking in a url, user token, and\n possible json information.\n\n Arguments:\n url (str): The url to make post to\n token (str): The authentication token\n json (dict): json info to send\n\n Returns:\n obj: Post request object"}
{"_id": "q_16149", "text": "plot X and Y data, then shade its background by variance."}
{"_id": "q_16150", "text": "create some fancy graphs to show color-coded variances."}
{"_id": "q_16151", "text": "runs AP detection on every sweep."}
{"_id": "q_16152", "text": "Return package author and version as listed in `init.py`."}
{"_id": "q_16153", "text": "Create an API subclass with fewer methods than its base class.\n\n Arguments:\n name (:py:class:`str`): The name of the new class.\n docstring (:py:class:`str`): The docstring for the new class.\n remove_methods (:py:class:`dict`): The methods to remove from\n the base class's :py:attr:`API_METHODS` for the subclass. The\n key is the name of the root method (e.g. ``'auth'`` for\n ``'auth.test'``, the value is either a tuple of child method\n names (e.g. ``('test',)``) or, if all children should be\n removed, the special value :py:const:`ALL`.\n base (:py:class:`type`, optional): The base class (defaults to\n :py:class:`SlackApi`).\n\n Returns:\n :py:class:`type`: The new subclass.\n\n Raises:\n :py:class:`KeyError`: If the method wasn't in the superclass."}
{"_id": "q_16154", "text": "Execute a specified Slack Web API method.\n\n Arguments:\n method (:py:class:`str`): The name of the method.\n **params (:py:class:`dict`): Any additional parameters\n required.\n\n Returns:\n :py:class:`dict`: The JSON data from the response.\n\n Raises:\n :py:class:`aiohttp.web_exceptions.HTTPException`: If the HTTP\n request returns a code other than 200 (OK).\n SlackApiError: If the Slack API is reached but the response\n contains an error message."}
{"_id": "q_16155", "text": "Join the real-time messaging service.\n\n Arguments:\n filters (:py:class:`dict`, optional): Dictionary mapping\n message filters to the functions they should dispatch to.\n Use a :py:class:`collections.OrderedDict` if precedence is\n important; only one filter, the first match, will be\n applied to each message."}
{"_id": "q_16156", "text": "Handle an incoming message appropriately.\n\n Arguments:\n message (:py:class:`aiohttp.websocket.Message`): The incoming\n message to handle.\n filters (:py:class:`list`): The filters to apply to incoming\n messages."}
{"_id": "q_16157", "text": "If you send a message directly to me"}
{"_id": "q_16158", "text": "Create a new instance from the API token.\n\n Arguments:\n token (:py:class:`str`, optional): The bot's API token\n (defaults to ``None``, which means looking in the\n environment).\n api_cls (:py:class:`type`, optional): The class to create\n as the ``api`` argument for API access (defaults to\n :py:class:`aslack.slack_api.SlackBotApi`).\n\n Returns:\n :py:class:`SlackBot`: The new instance."}
{"_id": "q_16159", "text": "Format an outgoing message for transmission.\n\n Note:\n Adds the message type (``'message'``) and incremental ID.\n\n Arguments:\n channel (:py:class:`str`): The channel to send to.\n text (:py:class:`str`): The message text to send.\n\n Returns:\n :py:class:`str`: The JSON string of the message."}
{"_id": "q_16160", "text": "Returns list of paths to tested apps"}
{"_id": "q_16161", "text": "Get the options for each task that will be run"}
{"_id": "q_16162", "text": "Adds a character matrix to DendroPy tree and infers gaps using\n Fitch's algorithm.\n\n Infer gaps in sequences at ancestral nodes."}
{"_id": "q_16163", "text": "Write the data from the db to a CLDF dataset according to the metadata in `self.dataset`.\n\n :param dest:\n :param mdname:\n :return: path of the metadata file"}
{"_id": "q_16164", "text": "A user-friendly description of the handler.\n\n Returns:\n :py:class:`str`: The handler's description."}
{"_id": "q_16165", "text": "Use CLDF reference properties to implicitely create foreign key constraints.\n\n :param component: A Table object or `None`."}
{"_id": "q_16166", "text": "Create a URL for the specified endpoint.\n\n Arguments:\n endpoint (:py:class:`str`): The API endpoint to access.\n root: (:py:class:`str`, optional): The root URL for the\n service API.\n params: (:py:class:`dict`, optional): The values for format\n into the created URL (defaults to ``None``).\n url_params: (:py:class:`dict`, optional): Parameters to add\n to the end of the URL (defaults to ``None``).\n\n Returns:\n :py:class:`str`: The resulting URL."}
{"_id": "q_16167", "text": "Install our gettext and ngettext functions into Jinja2's environment."}
{"_id": "q_16168", "text": "Truncate the supplied text for display.\n\n Arguments:\n text (:py:class:`str`): The text to truncate.\n max_len (:py:class:`int`, optional): The maximum length of the\n text before truncation (defaults to 350 characters).\n end (:py:class:`str`, optional): The ending to use to show that\n the text was truncated (defaults to ``'...'``).\n\n Returns:\n :py:class:`str`: The truncated text."}
{"_id": "q_16169", "text": "this is the central unsafe function, using a lock and updating the state in `guard` in-place."}
{"_id": "q_16170", "text": "Add a source, either specified by glottolog reference id, or as bibtex record."}
{"_id": "q_16171", "text": "Returns a cache key consisten of a username and image size."}
{"_id": "q_16172", "text": "Decorator to cache the result of functions that take a ``user`` and a\n ``size`` value."}
{"_id": "q_16173", "text": "Function to be called when saving or changing an user's avatars."}
{"_id": "q_16174", "text": "Returns preferences model class dynamically crated for a given app or None on conflict."}
{"_id": "q_16175", "text": "Generator to walk through variables considered as preferences\n in locals dict of a given frame.\n\n :param int stepback:\n\n :rtype: tuple"}
{"_id": "q_16176", "text": "Prints file details in the current directory"}
{"_id": "q_16177", "text": "Calculate a percentage."}
{"_id": "q_16178", "text": "Get slabs info."}
{"_id": "q_16179", "text": "Add admin global context, for compatibility with Django 1.7"}
{"_id": "q_16180", "text": "Return the status of all servers."}
{"_id": "q_16181", "text": "Show the dashboard."}
{"_id": "q_16182", "text": "Show server statistics."}
{"_id": "q_16183", "text": "For every parameter, create a matcher if the parameter has an\n annotation."}
{"_id": "q_16184", "text": "Makes a wrapper function that executes a dispatch call for func. The\n wrapper has the dispatch and dispatch_first attributes, so that\n additional overloads can be added to the group."}
{"_id": "q_16185", "text": "Adds the decorated function to this dispatch."}
{"_id": "q_16186", "text": "Adds the decorated function to this dispatch, at the FRONT of the order.\n Useful for allowing third parties to add overloaded functionality\n to be executed before default functionality."}
{"_id": "q_16187", "text": "Convert a byte value into a human-readable format."}
{"_id": "q_16188", "text": "reprojette en WGS84 et recupere l'extend"}
{"_id": "q_16189", "text": "Convert GRIB to Tif"}
{"_id": "q_16190", "text": "Triggered on dynamic preferences model save.\n Issues DB save and reread."}
{"_id": "q_16191", "text": "Binds PrefProxy objects to module variables used by apps as preferences.\n\n :param list|tuple values: Preference values.\n\n :param str|unicode category: Category name the preference belongs to.\n\n :param Field field: Django model field to represent this preference.\n\n :param str|unicode verbose_name: Field verbose name.\n\n :param str|unicode help_text: Field help text.\n\n :param bool static: Leave this preference static (do not store in DB).\n\n :param bool readonly: Make this field read only.\n\n :rtype: list"}
{"_id": "q_16192", "text": "Registers dynamically created preferences models for Admin interface.\n\n :param admin.AdminSite admin_site: AdminSite object."}
{"_id": "q_16193", "text": "Automatically discovers and registers all preferences available in all apps.\n\n :param admin.AdminSite admin_site: Custom AdminSite object."}
{"_id": "q_16194", "text": "Restores the original values of module variables\n considered preferences if they are still PatchedLocal\n and not PrefProxy."}
{"_id": "q_16195", "text": "Replaces a settings module with a Module proxy to intercept\n an access to settings.\n\n :param int depth: Frame count to go backward."}
{"_id": "q_16196", "text": "Registers preferences that should be handled by siteprefs.\n\n Expects preferences as *args.\n\n Use keyword arguments to batch apply params supported by\n ``PrefProxy`` to all preferences not constructed by ``pref`` and ``pref_group``.\n\n Batch kwargs:\n\n :param str|unicode help_text: Field help text.\n\n :param bool static: Leave this preference static (do not store in DB).\n\n :param bool readonly: Make this field read only.\n\n :param bool swap_settings_module: Whether to automatically replace settings module\n with a special ``ProxyModule`` object to access dynamic values of settings\n transparently (so not to bother with calling ``.value`` of ``PrefProxy`` object)."}
{"_id": "q_16197", "text": "Marks preferences group.\n\n :param str|unicode title: Group title\n\n :param list|tuple prefs: Preferences to group.\n\n :param str|unicode help_text: Field help text.\n\n :param bool static: Leave this preference static (do not store in DB).\n\n :param bool readonly: Make this field read only."}
{"_id": "q_16198", "text": "Marks a preference.\n\n :param preference: Preference variable.\n\n :param Field field: Django model field to represent this preference.\n\n :param str|unicode verbose_name: Field verbose name.\n\n :param str|unicode help_text: Field help text.\n\n :param bool static: Leave this preference static (do not store in DB).\n\n :param bool readonly: Make this field read only.\n\n :rtype: PrefProxy|None"}
{"_id": "q_16199", "text": "Add objects to the environment."}
{"_id": "q_16200", "text": "Replace any config tokens in the file's path with values from the config."}
{"_id": "q_16201", "text": "Get the path to the file relative to its parent."}
{"_id": "q_16202", "text": "Write data to the file.\n\n `data` is the data to write\n `mode` is the mode argument to pass to `open()`"}
{"_id": "q_16203", "text": "Create the file.\n\n If the file already exists an exception will be raised"}
{"_id": "q_16204", "text": "Replace any config tokens with values from the config."}
{"_id": "q_16205", "text": "Prepare the Directory for use in an Environment.\n\n This will create the directory if the create flag is set."}
{"_id": "q_16206", "text": "Find the path to something inside this directory."}
{"_id": "q_16207", "text": "Read a file from the directory."}
{"_id": "q_16208", "text": "Add objects to the directory."}
{"_id": "q_16209", "text": "Save the state to a file."}
{"_id": "q_16210", "text": "Load a saved state file."}
{"_id": "q_16211", "text": "Clean up the saved state."}
{"_id": "q_16212", "text": "Generate the ``versionwarning-data.json`` file.\n\n This file is included in the output and read by the AJAX request when\n accessing the documentation, and used to compare the live versions with\n the current one.\n\n Besides, this file contains metadata about the project, the API to use and\n the banner itself."}
{"_id": "q_16213", "text": "Recursively merge values from a nested dictionary into another nested\n dictionary.\n\n For example:\n\n >>> target = {\n ... 'thing': 123,\n ... 'thang': {\n ... 'a': 1,\n ... 'b': 2\n ... }\n ... }\n >>> source = {\n ... 'thang': {\n ... 'a': 666,\n ... 'c': 777\n ... }\n ... }\n >>> update_dict(target, source)\n >>> target\n {\n 'thing': 123,\n 'thang': {\n 'a': 666,\n 'b': 2,\n 'c': 777\n }\n }"}
{"_id": "q_16214", "text": "Returns a tuple of a reference to the last container in the path, and\n the last component in the key path.\n\n For example, with a self._value like this:\n\n {\n 'thing': {\n 'another': {\n 'some_leaf': 5,\n 'one_more': {\n 'other_leaf': 'x'\n }\n }\n }\n }\n\n And a self._path of: 'thing.another.some_leaf'\n\n This will return a tuple of a reference to the 'another' dict, and\n 'some_leaf', allowing the setter and casting methods to directly access\n the item referred to by the key path."}
{"_id": "q_16215", "text": "Get the value represented by this node."}
{"_id": "q_16216", "text": "Update the configuration with new data.\n\n This can be passed either or both `data` and `options`.\n\n `options` is a dict of keypath/value pairs like this (similar to\n CherryPy's config mechanism:\n\n >>> c.update(options={\n ... 'server.port': 8080,\n ... 'server.host': 'localhost',\n ... 'admin.email': 'admin@lol'\n ... })\n\n `data` is a dict of actual config data, like this:\n\n >>> c.update(data={\n ... 'server': {\n ... 'port': 8080,\n ... 'host': 'localhost'\n ... },\n ... 'admin': {\n ... 'email': 'admin@lol'\n ... }\n ... })"}
{"_id": "q_16217", "text": "Gives objective functions a number of dimensions and parameter range\n\n Parameters\n ----------\n param_scales : (int, int)\n Scale (std. dev.) for choosing each parameter\n\n xstar : array_like\n Optimal parameters"}
{"_id": "q_16218", "text": "Pointwise minimum of two quadratic bowls"}
{"_id": "q_16219", "text": "Beale's function"}
{"_id": "q_16220", "text": "Booth's function"}
{"_id": "q_16221", "text": "Three-hump camel function"}
{"_id": "q_16222", "text": "Return a list of buckets in MimicDB.\n\n :param boolean force: If true, API call is forced to S3"}
{"_id": "q_16223", "text": "Sync either a list of buckets or the entire connection.\n\n Force all API calls to S3 and populate the database with the current\n state of S3.\n\n :param \\*string \\*buckets: Buckets to sync"}
{"_id": "q_16224", "text": "Return an iterable of keys from MimicDB.\n\n :param boolean force: If true, API call is forced to S3"}
{"_id": "q_16225", "text": "Sync a bucket.\n\n Force all API calls to S3 and populate the database with the current state of S3."}
{"_id": "q_16226", "text": "Minimize the proximal operator of a given objective using L-BFGS\n\n Parameters\n ----------\n f_df : function\n Returns the objective and gradient of the function to minimize\n\n maxiter : int\n Maximum number of L-BFGS iterations"}
{"_id": "q_16227", "text": "Applies a smoothing operator along one dimension\n\n currently only accepts a matrix as input\n\n Parameters\n ----------\n penalty : float\n\n axis : int, optional\n Axis along which to apply the smoothing (Default: 0)\n\n newshape : tuple, optional\n Desired shape of the parameters to apply the nuclear norm to. The given\n parameters are reshaped to an array with this shape, or not reshaped if\n the value of newshape is None. (Default: None)"}
{"_id": "q_16228", "text": "Projection onto the semidefinite cone"}
{"_id": "q_16229", "text": "Projection onto the probability simplex\n\n http://arxiv.org/pdf/1309.1541v1.pdf"}
{"_id": "q_16230", "text": "Applies a proximal operator to the columns of a matrix"}
{"_id": "q_16231", "text": "Adds a proximal operator to the list of operators"}
{"_id": "q_16232", "text": "Set key attributes to retrieved metadata. Might be extended in the\n future to support more attributes."}
{"_id": "q_16233", "text": "Called internally for any type of upload. After upload finishes,\n make sure the key is in the bucket set and save the metadata."}
{"_id": "q_16234", "text": "Memoizes an objective + gradient function, and splits it into\n two functions that return just the objective and gradient, respectively.\n\n Parameters\n ----------\n f_df : function\n Must be unary (takes a single argument)\n\n xref : list, dict, or array_like\n The form of the parameters\n\n size : int, optional\n Size of the cache (Default=1)"}
{"_id": "q_16235", "text": "Compares the numerical gradient to the analytic gradient\n\n Parameters\n ----------\n f_df : function\n The analytic objective and gradient function to check\n\n x0 : array_like\n Parameter values to check the gradient at\n\n stepsize : float, optional\n Stepsize for the numerical gradient. Too big and this will poorly estimate the gradient.\n Too small and you will run into precision issues (default: 1e-6)\n\n tol : float, optional\n Tolerance to use when coloring correct/incorrect gradients (default: 1e-5)\n\n width : int, optional\n Width of the table columns (default: 15)\n\n style : string, optional\n Style of the printed table, see tableprint for a list of styles (default: 'round')"}
{"_id": "q_16236", "text": "Build Twilio callback url for confirming message delivery status\n\n :type message: OutgoingSMS"}
{"_id": "q_16237", "text": "Evaluate the files identified for checksum."}
{"_id": "q_16238", "text": "Check the integrity of the datapackage.json"}
{"_id": "q_16239", "text": "Guess the filetype and read the file into row sets"}
{"_id": "q_16240", "text": "Guess schema using messytables"}
{"_id": "q_16241", "text": "Calculates a checksum for a Finnish national reference number"}
{"_id": "q_16242", "text": "Helper to make sure the given character is valid for a reference number"}
{"_id": "q_16243", "text": "Creates the huge number from an alphanumeric ISO reference"}
{"_id": "q_16244", "text": "Calculates virtual barcode for IBAN account number and ISO reference\n\n Arguments:\n iban {string} -- IBAN formed account number\n reference {string} -- ISO 11649 creditor reference\n amount {decimal.Decimal} -- Amount in euros, 0.01 - 999999.99\n due {datetime.date} -- due date"}
{"_id": "q_16245", "text": "For various actions we need files that match patterns"}
{"_id": "q_16246", "text": "Run a specific command using the manager"}
{"_id": "q_16247", "text": "Lookup all available repos"}
{"_id": "q_16248", "text": "Add repo to the internal lookup table..."}
{"_id": "q_16249", "text": "Lookup a repo based on username reponame"}
{"_id": "q_16250", "text": "Run a shell command within the repo's context\n\n Parameters\n ----------\n\n repo: Repository object\n args: Shell command"}
{"_id": "q_16251", "text": "Check if the datapackage exists..."}
{"_id": "q_16252", "text": "Create the datapackage file.."}
{"_id": "q_16253", "text": "Update metadata with the content of the files"}
{"_id": "q_16254", "text": "Update metadata with the commit information"}
{"_id": "q_16255", "text": "Collect information from the dependent repo's"}
{"_id": "q_16256", "text": "Post to metadata server\n\n Parameters\n ----------\n\n repo: Repository object (result of lookup)"}
{"_id": "q_16257", "text": "Show details of available plugins\n\n Parameters\n ----------\n what: Class of plugins e.g., backend\n name: Name of the plugin e.g., s3\n version: Version of the plugin\n details: Should details be shown?"}
{"_id": "q_16258", "text": "Registering a plugin\n\n Params\n ------\n what: Nature of the plugin (backend, instrumentation, repo)\n obj: Instance of the plugin"}
{"_id": "q_16259", "text": "Send a message containing the RPC method call"}
{"_id": "q_16260", "text": "Read from the network layer and processes all data read. Can\n support both blocking and non-blocking sockets.\n Returns the number of input bytes processed, or EOS if input processing\n is done. Any exceptions raised by the socket are re-raised."}
{"_id": "q_16261", "text": "Write data to the network layer. Can support both blocking and\n non-blocking sockets.\n Returns the number of output bytes sent, or EOS if output processing\n is done. Any exceptions raised by the socket are re-raised."}
{"_id": "q_16262", "text": "Instantiate the validation specification"}
{"_id": "q_16263", "text": "Return a map containing the settle modes as provided by the remote.\n Skip any default value."}
{"_id": "q_16264", "text": "Assign addresses, properties, etc."}
{"_id": "q_16265", "text": "Return the authoritative target of the link."}
{"_id": "q_16266", "text": "Create link from request for a sender."}
{"_id": "q_16267", "text": "Create a new receiver link."}
{"_id": "q_16268", "text": "Peer has closed its end of the session."}
{"_id": "q_16269", "text": "Called when the Proton Engine generates an endpoint state change\n event."}
{"_id": "q_16270", "text": "Check if a URL exists"}
{"_id": "q_16271", "text": "Modifies inline patterns."}
{"_id": "q_16272", "text": "Post to the metadata server\n\n Parameters\n ----------\n\n repo"}
{"_id": "q_16273", "text": "imports and returns module class from ``path.to.module.Class``\n argument"}
{"_id": "q_16274", "text": "Find at most 5 executables that are responsible for this repo."}
{"_id": "q_16275", "text": "Automatically get repo\n\n Parameters\n ----------\n\n autooptions: dgit.json content"}
{"_id": "q_16276", "text": "Look through the local directory to pick up files to check"}
{"_id": "q_16277", "text": "Records any inbound stream. The record command allows users to record\n a stream that may not yet exist. When a new stream is brought into\n the server, it is checked against a list of streams to be recorded.\n\n Streams can be recorded as FLV files, MPEG-TS files or as MP4 files.\n\n :param localStreamName: The name of the stream to be used as input\n for recording.\n :type localStreamName: str\n\n :param pathToFile: Specify path and file name to write to.\n :type pathToFile: str\n\n :param type: `ts`, `mp4` or `flv`\n :type type: str\n\n :param overwrite: If false, when a file already exists for the stream\n name, a new file will be created with the next appropriate number\n appended. If 1 (true), files with the same name will be\n overwritten.\n :type overwrite: int\n\n :param keepAlive: If 1 (true), the server will restart recording every\n time the stream becomes available again.\n :type keepAlive: int\n\n :param chunkLength: If non-zero the record command will start a new\n recording file after ChunkLength seconds have elapsed.\n :type chunkLength: int\n\n :param waitForIDR: This is used if the recording is being chunked.\n When true, new files will only be created on IDR boundaries.\n :type waitForIDR: int\n\n :param winQtCompat: Mandates 32bit header fields to ensure\n compatibility with Windows QuickTime.\n :type winQtCompat: int\n\n :param dateFolderStructure: If set to 1 (true), folders will be\n created with names in `YYYYMMDD` format. Recorded files will be\n placed inside these folders based on the date they were created.\n :type dateFolderStructure: int\n\n :link: http://docs.evostream.com/ems_api_definition/record"}
{"_id": "q_16278", "text": "Instantiate the generator and filename specification"}
{"_id": "q_16279", "text": "Peer has closed its end of the link."}
{"_id": "q_16280", "text": "Protocol error occurred."}
{"_id": "q_16281", "text": "Helper function to run commands\n\n Parameters\n ----------\n cmd : list\n Arguments to git command"}
{"_id": "q_16282", "text": "Run a generic command within the repo. Assumes that you are\n in the repo's root directory"}
{"_id": "q_16283", "text": "Cleanup the repo"}
{"_id": "q_16284", "text": "Get the permalink to command that generated the dataset"}
{"_id": "q_16285", "text": "Add files to the repo"}
{"_id": "q_16286", "text": "Marks the invoice as sent in Holvi\n\n If send_email is False then the invoice is *not* automatically emailed to the recipient\n and you must take care of sending the invoice yourself."}
{"_id": "q_16287", "text": "Convert our Python object to JSON acceptable to Holvi API"}
{"_id": "q_16288", "text": "API wrapper documentation"}
{"_id": "q_16289", "text": "Parse the hostname and port out of the server_address."}
{"_id": "q_16290", "text": "Create a TCP connection to the server."}
{"_id": "q_16291", "text": "Create a TCP listening socket for a server."}
{"_id": "q_16292", "text": "Saves this order to Holvi, returns a tuple with the order itself and checkout_uri"}
{"_id": "q_16293", "text": "A utility to help determine which connections need\n processing. Returns a triple of lists containing those connections that\n 0) need to read from the network, 1) need to write to the network, 2)\n waiting for pending timers to expire. The timer list is sorted with\n the connection next expiring at index 0."}
{"_id": "q_16294", "text": "Perform connection state processing."}
{"_id": "q_16295", "text": "Get a buffer of data that needs to be written to the network."}
{"_id": "q_16296", "text": "Factory method for creating Receive links."}
{"_id": "q_16297", "text": "Clean up after connection failure detected."}
{"_id": "q_16298", "text": "Both ends of the Endpoint have become active."}
{"_id": "q_16299", "text": "The remote has closed its end of the endpoint."}
{"_id": "q_16300", "text": "Load profile INI"}
{"_id": "q_16301", "text": "Update the profile"}
{"_id": "q_16302", "text": "Insert hook into the repo"}
{"_id": "q_16303", "text": "Get the commit history for a given dataset"}
{"_id": "q_16304", "text": "Look at files and compute the diffs intelligently"}
{"_id": "q_16305", "text": "This decorator provides several helpful shortcuts for writing Twilio\n views.\n\n - It ensures that only requests from Twilio are passed through. This\n helps protect you from forged requests.\n\n - It ensures your view is exempt from CSRF checks via Django's\n @csrf_exempt decorator. This is necessary for any view that accepts\n POST requests from outside the local domain (eg: Twilio's servers).\n\n - It allows your view to (optionally) return TwiML to pass back to\n Twilio's servers instead of building a ``HttpResponse`` object\n manually.\n\n - It allows your view to (optionally) return any ``twilio.Verb`` object\n instead of building a ``HttpResponse`` object manually.\n\n Usage::\n\n from twilio.twiml import Response\n\n @twilio_view\n def my_view(request):\n r = Response()\n r.sms(\"Thanks for the SMS message!\")\n return r"}
{"_id": "q_16306", "text": "Enter sudo mode"}
{"_id": "q_16307", "text": "Install specified packages using apt-get. -y options are\n automatically used. Waits for command to finish.\n\n Parameters\n ----------\n package_names: list-like of str\n raise_on_error: bool, default False\n If True then raise ValueError if stderr is not empty\n debconf often gives tty error"}
{"_id": "q_16308", "text": "Install all requirements contained in the given file path\n Waits for command to finish.\n\n Parameters\n ----------\n requirements: str\n Path to requirements.txt\n raise_on_error: bool, default True\n If True then raise ValueError if stderr is not empty"}
{"_id": "q_16309", "text": "Create fiji-macros for stitching all channels and z-stacks for a well.\n\n Parameters\n ----------\n path : string\n Well path.\n output_folder : string\n Folder to store images. If not given well path is used.\n\n Returns\n -------\n output_files, macros : tuple\n Tuple with filenames and macros for stitched well."}
{"_id": "q_16310", "text": "Lossless compression. Save image as PNG and TIFF tags to json. Process\n can be reversed with `decompress`.\n\n Parameters\n ----------\n image : string\n TIF-image which should be compressed lossless.\n delete_tif : bool\n Whether to delete original images.\n force : bool\n Whether to compress even if .png already exists.\n\n Returns\n -------\n string\n Filename of compressed image, or empty string if compress failed."}
{"_id": "q_16311", "text": "Set self.path, self.dirname and self.basename."}
{"_id": "q_16312", "text": "List of paths to images."}
{"_id": "q_16313", "text": "Get path of specified image.\n\n Parameters\n ----------\n well_row : int\n Starts at 0. Same as --U in files.\n well_column : int\n Starts at 0. Same as --V in files.\n field_row : int\n Starts at 0. Same as --Y in files.\n field_column : int\n Starts at 0. Same as --X in files.\n\n Returns\n -------\n string\n Path to image or empty string if image is not found."}
{"_id": "q_16314", "text": "Get list of paths to images in specified well.\n\n\n Parameters\n ----------\n well_row : int\n Starts at 0. Same as --V in files.\n well_column : int\n Starts at 0. Same as --U in files.\n\n Returns\n -------\n list of strings\n Paths to images or empty list if no images are found."}
{"_id": "q_16315", "text": "Stitches all wells in experiment with ImageJ. Stitched images are\n saved in experiment root.\n\n Images which already exist are omitted from stitching.\n\n Parameters\n ----------\n folder : string\n Where to store stitched images. Defaults to experiment path.\n\n Returns\n -------\n list\n Filenames of stitched images. Files which already exist before\n stitching are also returned."}
{"_id": "q_16316", "text": "Create a new droplet\n\n Parameters\n ----------\n name: str\n Name of new droplet\n region: str\n slug for region (e.g., sfo1, nyc1)\n size: str\n slug for droplet size (e.g., 512mb, 1024mb)\n image: int or str\n image id (e.g., 12352) or slug (e.g., 'ubuntu-14-04-x64')\n ssh_keys: list, optional\n default SSH keys to be added on creation\n this is highly recommended for ssh access\n backups: bool, optional\n whether automated backups should be enabled for the Droplet.\n Automated backups can only be enabled when the Droplet is created.\n ipv6: bool, optional\n whether IPv6 is enabled on the Droplet\n private_networking: bool, optional\n whether private networking is enabled for the Droplet. Private\n networking is currently only available in certain regions\n wait: bool, default True\n if True then block until creation is complete"}
{"_id": "q_16317", "text": "Retrieve a droplet by id\n\n Parameters\n ----------\n id: int\n droplet id\n\n Returns\n -------\n droplet: DropletActions"}
{"_id": "q_16318", "text": "Restore this droplet with given image id\n\n A Droplet restoration will rebuild an image using a backup image.\n The image ID that is passed in must be a backup of the current Droplet\n instance. The operation will leave any embedded SSH keys intact.\n\n Parameters\n ----------\n image: int or str\n int for image id and str for image slug\n wait: bool, default True\n Whether to block until the pending action is completed"}
{"_id": "q_16319", "text": "Change the kernel of this droplet\n\n Parameters\n ----------\n kernel_id: int\n Can be retrieved from output of self.kernels()\n wait: bool, default True\n Whether to block until the pending action is completed\n\n Raises\n ------\n APIError if region does not support private networking"}
{"_id": "q_16320", "text": "wait for all actions to complete on a droplet"}
{"_id": "q_16321", "text": "Open SSH connection to droplet\n\n Parameters\n ----------\n interactive: bool, default False\n If True then SSH client will prompt for password when necessary\n and also print output to console"}
{"_id": "q_16322", "text": "Given a search path, find file with requested extension"}
{"_id": "q_16323", "text": "create request url for resource"}
{"_id": "q_16324", "text": "Send a request for this resource to the API\n\n Parameters\n ----------\n kind: str, {'get', 'delete', 'put', 'post', 'head'}"}
{"_id": "q_16325", "text": "Send list request for all members of a collection"}
{"_id": "q_16326", "text": "Get single unit of collection"}
{"_id": "q_16327", "text": "Transfer this image to given region\n\n Parameters\n ----------\n region: str\n region slug to transfer to (e.g., sfo1, nyc1)"}
{"_id": "q_16328", "text": "id or slug"}
{"_id": "q_16329", "text": "id or fingerprint"}
{"_id": "q_16330", "text": "Creates a new domain\n\n Parameters\n ----------\n name: str\n new domain name\n ip_address: str\n IP address for the new domain"}
{"_id": "q_16331", "text": "Get a list of all domain records for the given domain name\n\n Parameters\n ----------\n name: str\n domain name"}
{"_id": "q_16332", "text": "Change the name of this domain record\n\n Parameters\n ----------\n id: int\n domain record id\n name: str\n new name of record"}
{"_id": "q_16333", "text": "May be used to compress PDF files. Code is more readable\r\n for testing and inspection if not compressed. Requires a boolean."}
{"_id": "q_16334", "text": "The flag is a simple integer to force the placement\r\n of the object into position in the object array.\r\n Used for overwriting the placeholder objects."}
{"_id": "q_16335", "text": "Stores the pdf code in a buffer. If it is page related,\r\n provide the page object."}
{"_id": "q_16336", "text": "Creates a PDF text stream sandwich."}
{"_id": "q_16337", "text": "Input text, short or long. Writes in order, within the defined page boundaries. Sequential add_text commands will print without\r\n additional whitespace."}
{"_id": "q_16338", "text": "Data type may be \"raw\" or \"percent\""}
{"_id": "q_16339", "text": "Called by the PDFLite object to prompt creating\r\n the page objects."}
{"_id": "q_16340", "text": "Returns a list of the pages that have\r\n orientation changes."}
{"_id": "q_16341", "text": "Creates reference images, that can be\r\n drawn throughout the document."}
{"_id": "q_16342", "text": "Prompts the creation of image objects."}
{"_id": "q_16343", "text": "Logs the user on to FogBugz.\n\n Returns None for a successful login."}
{"_id": "q_16344", "text": "Adjust the current transformation state of the current graphics state\n matrix. Not recommended for the faint of heart."}
{"_id": "q_16345", "text": "return the absolute position of x,y in user space w.r.t. default user space"}
{"_id": "q_16346", "text": "Rotates a point relative to the mesh origin by the angle specified in the angle property.\n Uses the angle formed between the segment linking the point of interest to the origin and\n the parallel intersecting the origin. This angle is called beta in the code."}
{"_id": "q_16347", "text": "Convenience function to add property info, can set any attribute and leave the others blank, it won't over-write\r\n previously set items."}
{"_id": "q_16348", "text": "Set the default viewing options."}
{"_id": "q_16349", "text": "Standard first line in a PDF."}
{"_id": "q_16350", "text": "Creates PDF reference to resource objects."}
{"_id": "q_16351", "text": "PDF Information object."}
{"_id": "q_16352", "text": "Catalog object."}
{"_id": "q_16353", "text": "Final Trailer calculations, and end-of-file\r\n reference."}
{"_id": "q_16354", "text": "Floyd's Cycle Detector.\n\n See help(cycle_detector) for more context.\n\n Args:\n\n *args: Two iterators issuing the exact same sequence:\n -or-\n f, start: Function and starting state for finite state machine\n\n Yields:\n\n Values yielded by sequence_a if it terminates, undefined if a\n cycle is found.\n\n Raises:\n\n CycleFound if a cycle is found; if called with f and `start`,\n the parameters `first` and `period` will be defined indicating\n the offset of the start of the cycle and the cycle's period."}
{"_id": "q_16355", "text": "Naive cycle detector\n\n See help(cycle_detector) for more context.\n\n Args:\n\n sequence: A sequence to detect cycles in.\n\n f, start: Function and starting state for finite state machine\n\n Yields:\n\n Values yielded by sequence_a if it terminates, undefined if a\n cycle is found.\n\n Raises:\n\n CycleFound if a cycle is found. Will always generate a first\n and period value no matter which of the `seqs` or `f` interface\n is used."}
{"_id": "q_16356", "text": "Gosper's cycle detector\n\n See help(cycle_detector) for more context.\n\n Args:\n\n sequence: A sequence to detect cycles in.\n\n f, start: Function and starting state for finite state machine\n\n Yields:\n\n Values yielded by sequence_a if it terminates, undefined if a\n cycle is found.\n\n Raises:\n\n CycleFound if a cycle is found. Unlike Floyd's and Brent's,\n Gosper's can only detect the period of a cycle. It cannot\n compute the first position"}
{"_id": "q_16357", "text": "Brent's Cycle Detector.\n\n See help(cycle_detector) for more context.\n\n Args:\n\n *args: Two iterators issuing the exact same sequence:\n -or-\n f, start: Function and starting state for finite state machine\n\n Yields:\n\n Values yielded by sequence_a if it terminates, undefined if a\n cycle is found.\n\n Raises:\n\n CycleFound if a cycle is found; if called with f and `start`,\n the parameters `first` and `period` will be defined indicating\n the offset of the start of the cycle and the cycle's period."}
{"_id": "q_16358", "text": "Chop list_ into n chunks. Returns a list."}
{"_id": "q_16359", "text": "Test to see if the line has enough space for the given length."}
{"_id": "q_16360", "text": "Create a copy, and return it."}
{"_id": "q_16361", "text": "Mutable x addition. Defaults to set delta value."}
{"_id": "q_16362", "text": "Mutable y addition. Defaults to set delta value."}
{"_id": "q_16363", "text": "return first droplet"}
{"_id": "q_16364", "text": "Don't use this, use document.draw_table"}
{"_id": "q_16365", "text": "Creates a new label and returns the response\n\n :param name: The label name\n :type name: str\n\n :param description: An optional description for the label. The name is\n used if no description is provided.\n :type description: str\n\n :param color: The hex color for the label (ex: 'ff0000' for red). If no\n color is provided, a random one will be assigned.\n :type color: str\n\n :returns: The response of your post\n :rtype: dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16366", "text": "Get all current labels\n\n :return: The Logentries API response\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16367", "text": "Get labels by name\n\n :param name: The label name, it must be an exact match.\n :type name: str\n\n :return: A list of matching labels. An empty list is returned if there are\n not any matches\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16368", "text": "Delete the specified label\n\n :param id: the label's ID\n :type id: str\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16369", "text": "Get all current tags\n\n :return: All tags\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16370", "text": "Get tags by a label's sn key\n\n :param label_sn: A corresponding label's ``sn`` key.\n :type label_sn: str or int\n\n :return: A list of matching tags. An empty list is returned if there are\n not any matches\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16371", "text": "Get all current hooks\n\n :return: All hooks\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16372", "text": "Get alerts that match the alert type and args.\n\n :param alert_type: The type of the alert. Must be one of 'pagerduty',\n 'mailto', 'webhook', 'slack', or 'hipchat'\n :type alert_type: str\n\n :param alert_args: The args for the alert. The provided args must be a\n subset of the actual alert args. If no args are provided, all\n alerts matching the ``alert_type`` are returned. For example:\n ``.get('mailto', alert_args={'direct': 'me@mydomain.com'})`` or\n ``.get('slack', {'url': 'https://hooks.slack.com/services...'})``\n\n :return: A list of matching alerts. An empty list is returned if there\n are not any matches\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16373", "text": "Initialize this Sphinx extension"}
{"_id": "q_16374", "text": "Retrieve the location of the themes directory from the location of this package\n\n This is taken from Sphinx's theme documentation"}
{"_id": "q_16375", "text": "A wrapper for posting things.\n\n :param request: The request type. Must be one of the\n :class:`ApiActions<logentries_api.base.ApiActions>`\n :type request: str\n\n :param uri: The API endpoint to hit. Must be one of\n :class:`ApiUri<logentries_api.base.ApiUri>`\n :type uri: str\n\n :param params: A dictionary of supplemental kw args\n :type params: dict\n\n :returns: The response of your post\n :rtype: dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16376", "text": "Get all log sets\n\n :return: Returns a dictionary where the key is the hostname or log set,\n and the value is a list of the log keys\n :rtype: dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16377", "text": "Get a specific log or log set\n\n :param log_set: The log set or log to get. Ex: `.get(log_set='app')` or\n `.get(log_set='app/log')`\n :type log_set: str\n\n :returns: The response of your log set or log\n :rtype: dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16378", "text": "The approximate transit duration for the general case of an eccentric orbit"}
{"_id": "q_16379", "text": "Update the transit keyword arguments"}
{"_id": "q_16380", "text": "Bins the light curve model to the provided time array"}
{"_id": "q_16381", "text": "Dispatcher for the info generators.\n\n Determines which __info_*_gen() should be used based on the supplied\n parameters.\n\n Args:\n code: The status code for the command response.\n message: The status message for the command response.\n compressed: Force decompression. Useful for xz* commands.\n\n Returns:\n An info generator."}
{"_id": "q_16382", "text": "Call a command on the server.\n\n If the user has not authenticated then authentication will be done\n as part of calling the command on the server.\n\n For commands that don't return a status message the status message\n will default to an empty string.\n\n Args:\n verb: The verb of the command to call.\n args: The arguments of the command as a string (default None).\n\n Returns:\n A tuple of status code (as an integer) and status message.\n\n Note:\n You can run raw commands by supplying the full command (including\n args) in the verb.\n\n Note: Although it is possible you shouldn't issue more than one command\n at a time by adding newlines to the verb as it will most likely lead\n to undesirable results."}
{"_id": "q_16383", "text": "CAPABILITIES command.\n\n Determines the capabilities of the server.\n\n Although RFC3977 states that this is a required command for servers to\n implement not all servers do, so expect that NNTPPermanentError may be\n raised when this command is issued.\n\n See <http://tools.ietf.org/html/rfc3977#section-5.2>\n\n Args:\n keyword: Passed directly to the server, however, this is unused by\n the server according to RFC3977.\n\n Returns:\n A list of capabilities supported by the server. The VERSION\n capability is the first capability in the list."}
{"_id": "q_16384", "text": "MODE READER command.\n\n Instructs a mode-switching server to switch modes.\n\n See <http://tools.ietf.org/html/rfc3977#section-5.3>\n\n Returns:\n Boolean value indicating whether posting is allowed or not."}
{"_id": "q_16385", "text": "QUIT command.\n\n Tells the server to close the connection. After the server acknowledges\n the request to quit the connection is closed both at the server and\n client. Only useful for graceful shutdown. If you are in a generator\n use close() instead.\n\n Once this method has been called, no other methods of the NNTPClient\n object should be called.\n\n See <http://tools.ietf.org/html/rfc3977#section-5.4>"}
{"_id": "q_16386", "text": "DATE command.\n\n Coordinated Universal time from the perspective of the usenet server.\n It can be used to provide information that might be useful when using\n the NEWNEWS command.\n\n See <http://tools.ietf.org/html/rfc3977#section-7.1>\n\n Returns:\n The UTC time according to the server as a datetime object.\n\n Raises:\n NNTPDataError: If the timestamp can't be parsed."}
{"_id": "q_16387", "text": "HELP command.\n\n Provides a short summary of commands that are understood by the usenet\n server.\n\n See <http://tools.ietf.org/html/rfc3977#section-7.2>\n\n Returns:\n The help text from the server."}
{"_id": "q_16388", "text": "Generator for the NEWGROUPS command.\n\n Generates a list of newsgroups created on the server since the specified\n timestamp.\n\n See <http://tools.ietf.org/html/rfc3977#section-7.3>\n\n Args:\n timestamp: Datetime object giving 'created since' datetime.\n\n Yields:\n A tuple containing the name, low water mark, high water mark,\n and status for the newsgroup.\n\n Note: If the datetime object supplied as the timestamp is naive (tzinfo\n is None) then it is assumed to be given as GMT."}
{"_id": "q_16389", "text": "Generator for the NEWNEWS command.\n\n Generates a list of message-ids for articles created since the specified\n timestamp for newsgroups with names that match the given pattern.\n\n See <http://tools.ietf.org/html/rfc3977#section-7.4>\n\n Args:\n pattern: Glob matching newsgroups of intrest.\n timestamp: Datetime object giving 'created since' datetime.\n\n Yields:\n A message-id as string.\n\n Note: If the datetime object supplied as the timestamp is naive (tzinfo\n is None) then it is assumed to be given as GMT. If tzinfo is set\n then it will be converted to GMT by this function."}
{"_id": "q_16390", "text": "NEWNEWS command.\n\n Retrieves a list of message-ids for articles created since the specified\n timestamp for newsgroups with names that match the given pattern. See\n newnews_gen() for more details.\n\n See <http://tools.ietf.org/html/rfc3977#section-7.4>\n\n Args:\n pattern: Glob matching newsgroups of intrest.\n timestamp: Datetime object giving 'created since' datetime.\n\n Returns:\n A list of message-ids as given by newnews_gen()"}
{"_id": "q_16391", "text": "Generator for the LIST ACTIVE command.\n\n Generates a list of active newsgroups that match the specified pattern.\n If no pattern is specfied then all active groups are generated.\n\n See <http://tools.ietf.org/html/rfc3977#section-7.6.3>\n\n Args:\n pattern: Glob matching newsgroups of intrest.\n\n Yields:\n A tuple containing the name, low water mark, high water mark,\n and status for the newsgroup."}
{"_id": "q_16392", "text": "Generator for the LIST ACTIVE.TIMES command.\n\n Generates a list of newsgroups including the creation time and who\n created them.\n\n See <http://tools.ietf.org/html/rfc3977#section-7.6.4>\n\n Yields:\n A tuple containing the name, creation date as a datetime object and\n creator as a string for the newsgroup."}
{"_id": "q_16393", "text": "Generator for the LIST NEWSGROUPS command.\n\n Generates a list of newsgroups including the name and a short\n description.\n\n See <http://tools.ietf.org/html/rfc3977#section-7.6.6>\n\n Args:\n pattern: Glob matching newsgroups of intrest.\n\n Yields:\n A tuple containing the name, and description for the newsgroup."}
{"_id": "q_16394", "text": "Generator for the LIST OVERVIEW.FMT\n\n See list_overview_fmt() for more information.\n\n Yields:\n An element in the list returned by list_overview_fmt()."}
{"_id": "q_16395", "text": "Generator for LIST command.\n\n See list() for more information.\n\n Yields:\n An element in the list returned by list()."}
{"_id": "q_16396", "text": "LIST command.\n\n A wrapper for all of the other list commands. The output of this command\n depends on the keyword specified. The output format for each keyword can\n be found in the list function that corresponds to the keyword.\n\n Args:\n keyword: Information requested.\n arg: Pattern or keyword specific argument.\n\n Note: Keywords supported by this function are include ACTIVE,\n ACTIVE.TIMES, DISTRIB.PATS, HEADERS, NEWSGROUPS, OVERVIEW.FMT and\n EXTENSIONS.\n\n Raises:\n NotImplementedError: For unsupported keywords."}
{"_id": "q_16397", "text": "GROUP command."}
{"_id": "q_16398", "text": "NEXT command."}
{"_id": "q_16399", "text": "ARTICLE command."}
{"_id": "q_16400", "text": "HEAD command."}
{"_id": "q_16401", "text": "BODY command."}
{"_id": "q_16402", "text": "XGTITLE command."}
{"_id": "q_16403", "text": "XZHDR command.\n\n Args:\n msgid_range: A message-id as a string, or an article number as an\n integer, or a tuple of specifying a range of article numbers in\n the form (first, [last]) - if last is omitted then all articles\n after first are included. A msgid_range of None (the default)\n uses the current article."}
{"_id": "q_16404", "text": "Generator for the XOVER command.\n\n The XOVER command returns information from the overview database for\n the article(s) specified.\n\n <http://tools.ietf.org/html/rfc2980#section-2.8>\n\n Args:\n range: An article number as an integer, or a tuple of specifying a\n range of article numbers in the form (first, [last]). If last is\n omitted then all articles after first are included. A range of\n None (the default) uses the current article.\n\n Returns:\n A list of fields as given by the overview database for each\n available article in the specified range. The fields that are\n returned can be determined using the LIST OVERVIEW.FMT command if\n the server supports it.\n\n Raises:\n NNTPReplyError: If no such article exists or the currently selected\n newsgroup is invalid."}
{"_id": "q_16405", "text": "Generator for the XPAT command."}
{"_id": "q_16406", "text": "XPAT command."}
{"_id": "q_16407", "text": "XFEATURE COMPRESS GZIP command."}
{"_id": "q_16408", "text": "POST command.\n\n Args:\n headers: A dictionary of headers.\n body: A string or file like object containing the post content.\n\n Raises:\n NNTPDataError: If binary characters are detected in the message\n body.\n\n Returns:\n A value that evaluates to true if posting the message succeeded.\n (See note for further details)\n\n Note:\n '\\\\n' line terminators are converted to '\\\\r\\\\n'\n\n Note:\n Though not part of any specification it is common for usenet servers\n to return the message-id for a successfully posted message. If a\n message-id is identified in the response from the server then that\n message-id will be returned by the function, otherwise True will be\n returned.\n\n Note:\n Due to protocol issues if illegal characters are found in the body\n the message will still be posted but will be truncated as soon as\n an illegal character is detected. No illegal characters will be sent\n to the server. For information illegal characters include embedded\n carriage returns '\\\\r' and null characters '\\\\0' (because this\n function converts line feeds to CRLF, embedded line feeds are not an\n issue)"}
{"_id": "q_16409", "text": "Parse timezone to offset in seconds.\n\n Args:\n value: A timezone in the '+0000' format. An integer would also work.\n\n Returns:\n The timezone offset from GMT in seconds as an integer."}
{"_id": "q_16410", "text": "Convenience method for posting"}
{"_id": "q_16411", "text": "Convenience method for deleting"}
{"_id": "q_16412", "text": "Convenience method for getting"}
{"_id": "q_16413", "text": "List all scheduled_queries\n\n :return: A list of all scheduled query dicts\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16414", "text": "List all tags for the account.\n\n The response differs from ``Hooks().list()``, in that tag dicts for\n anomaly alerts include a 'scheduled_query_id' key with the value being\n the UUID for the associated scheduled query\n\n :return: A list of all tag dicts\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16415", "text": "Get alert by name or id\n\n :param name_or_id: The alert's name or id\n :type name_or_id: str\n\n :return: A list of matching tags. An empty list is returned if there are\n not any matches\n :rtype: list of dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16416", "text": "Create an inactivity alert\n\n :param name: A name for the inactivity alert\n :type name: str\n\n :param patterns: A list of regexes to match\n :type patterns: list of str\n\n :param logs: A list of log UUID's. (The 'key' key of a log)\n :type logs: list of str\n\n :param trigger_config: A AlertTriggerConfig describing how far back to\n look for inactivity.\n :type trigger_config: :class:`AlertTriggerConfig<logentries_api.special_alerts.AlertTriggerConfig>`\n\n :param alert_reports: A list of AlertReportConfigs to send alerts to\n :type alert_reports: list of\n :class:`AlertReportConfig<logentries_api.special_alerts.AlertReportConfig>`\n\n :return: The API response\n :rtype: dict\n\n :raises: This will raise a\n :class:`ServerException<logentries_api.exceptions.ServerException>`\n if there is an error from Logentries"}
{"_id": "q_16417", "text": "Create the scheduled query"}
{"_id": "q_16418", "text": "Parse a dictionary of headers to a string.\n\n Args:\n hdrs: A dictionary of headers.\n\n Returns:\n The headers as a string that can be used in an NNTP POST."}
{"_id": "q_16419", "text": "Handles the POST request sent by Boundary Url Action"}
{"_id": "q_16420", "text": "Retrieves the allowed operations for this request."}
{"_id": "q_16421", "text": "Assets if the requested operations are allowed in this context."}
{"_id": "q_16422", "text": "Fills the response object from the passed data."}
{"_id": "q_16423", "text": "Processes a `POST` request."}
{"_id": "q_16424", "text": "Processes a `PUT` request."}
{"_id": "q_16425", "text": "Processes a `DELETE` request."}
{"_id": "q_16426", "text": "Processes a `LINK` request.\n\n A `LINK` request is asking to create a relation from the currently\n represented URI to all of the `Link` request headers."}
{"_id": "q_16427", "text": "Creates a base Django project"}
{"_id": "q_16428", "text": "Run the tests that are loaded by each of the strings provided.\n\n Arguments:\n\n tests (iterable):\n\n the collection of tests (specified as `str` s) to run\n\n reporter (Reporter):\n\n a `Reporter` to use for the run. If unprovided, the default\n is to return a `virtue.reporters.Counter` (which produces no\n output).\n\n stop_after (int):\n\n a number of non-successful tests to allow before stopping the run."}
{"_id": "q_16429", "text": "Return a docstring from a list of defaults."}
{"_id": "q_16430", "text": "Set the value\n\n This invokes hooks for type-checking and bounds-checking that\n may be implemented by sub-classes."}
{"_id": "q_16431", "text": "Return the symmertic error\n\n Similar to above, but zero implies no error estimate,\n and otherwise this will either be the symmetric error,\n or the average of the low,high asymmetric errors."}
{"_id": "q_16432", "text": "Apply the criteria to filter out on the metrics required"}
{"_id": "q_16433", "text": "Helper function that performs an `ilike` query if a string value\n is passed, otherwise the normal default operation."}
{"_id": "q_16434", "text": "Return objects representing segments."}
{"_id": "q_16435", "text": "we expect foo=bar"}
{"_id": "q_16436", "text": "Set the value of this attribute for the passed object."}
{"_id": "q_16437", "text": "Consumes set specifiers as text and forms a generator to retrieve\n the requested ranges.\n\n @param[in] specifiers\n Expected syntax is from the byte-range-specifier ABNF found in the\n [RFC 2616]; eg. 15-17,151,-16,26-278,15\n\n @returns\n Consecutive tuples that describe the requested range; eg. (1, 72) or\n (1, 1) [read as 1 to 72 or 1 to 1]."}
{"_id": "q_16438", "text": "Paginate an iterable during a request.\n\n Magically splicling an iterable in our supported ORMs allows LIMIT and\n OFFSET queries. We should probably delegate this to the ORM or something\n in the future."}
{"_id": "q_16439", "text": "Decorate test methods with this if you don't require strict index checking"}
{"_id": "q_16440", "text": "operator = \"|\" | \".\" | \",\" | \"-\";"}
{"_id": "q_16441", "text": "op_add = \"+\" ;"}
{"_id": "q_16442", "text": "Loop through the list of Properties,\n extract the derived and required properties and do the\n appropriate book-keeping"}
{"_id": "q_16443", "text": "Return an array with the parameter values\n\n Parameters\n ----------\n\n pname : list or None\n If a list, get the values of the `Parameter` objects with those names\n\n If none, get all values of all the `Parameter` objects\n\n Returns\n -------\n\n values : `np.array`\n Parameter values"}
{"_id": "q_16444", "text": "Return an array with the parameter errors\n\n Parameters\n ----------\n pname : list of string or none\n If a list of strings, get the Parameter objects with those names\n\n If none, get all the Parameter objects\n\n Returns\n -------\n ~numpy.array of parameter errors\n\n Note that this is a N x 2 array."}
{"_id": "q_16445", "text": "Reset the value of all Derived properties to None\n\n This is called by setp (and by extension __setattr__)"}
{"_id": "q_16446", "text": "Read and return the request data.\n\n @param[in] deserialize\n True to deserialize the resultant text using a determiend format\n or the passed format.\n\n @param[in] format\n A specific format to deserialize in; if provided, no detection is\n done. If not provided, the content-type header is looked at to\n determine an appropriate deserializer."}
{"_id": "q_16447", "text": "Updates the active resource configuration to the passed\n keyword arguments.\n\n Invoking this method without passing arguments will just return the\n active resource configuration.\n\n @returns\n The previous configuration."}
{"_id": "q_16448", "text": "Before assigning the value validate that is in one of the\n HTTP methods we implement"}
{"_id": "q_16449", "text": "Encode URL parameters"}
{"_id": "q_16450", "text": "HTTP Post Request"}
{"_id": "q_16451", "text": "Check scene name and whether remote file exists. Raises\n WrongSceneNameError if the scene name is wrong."}
{"_id": "q_16452", "text": "Gets satellite id"}
{"_id": "q_16453", "text": "Gets the filesize of a remote file"}
{"_id": "q_16454", "text": "Download remote .tar.bz file."}
{"_id": "q_16455", "text": "Make a callable returning True for names starting with the given prefix.\n\n The returned callable takes two arguments, the attribute or name of\n the object, and possibly its corresponding value (which is ignored),\n as suitable for use with :meth:`ObjectLocator.is_test_module` and\n :meth:`ObjectLocator.is_test_method`\\ ."}
{"_id": "q_16456", "text": "Correct the timezone information on the given datetime"}
{"_id": "q_16457", "text": "Returns a list of the positions in the text where all new lines occur. This is used by\n get_line_and_char to efficiently find coordinates represented by offset positions."}
{"_id": "q_16458", "text": "Point to a position in source code.\n\n source is the text we're pointing in.\n position is a 2-tuple of (line_number, character_number) to point to.\n fmt is a 4-tuple of formatting parameters, they are:\n name default description\n ---- ------- -----------\n surrounding_lines 2 the number of lines above and below the target line to print\n show_line_numbers True if true line numbers will be generated for the output_lines\n tail_body \"~~~~~\" the body of the tail\n pointer_char \"^\" the character that will point to the position"}
{"_id": "q_16459", "text": "Given a single decorated handler function,\n prepare, append desired data to self.registry."}
{"_id": "q_16460", "text": "Given a node, return the string to use in computing the\n matching visitor methodname. Can also be a generator of strings."}
{"_id": "q_16461", "text": "Send output in textual format"}
{"_id": "q_16462", "text": "Parse string to create an instance\n\n :param str s: String with requirement to parse\n :param bool required: Is this requirement required to be fulfilled? If not, then it is a filter."}
{"_id": "q_16463", "text": "Add requirements to be managed\n\n :param list/Requirement requirements: List of :class:`BumpRequirement` or :class:`pkg_resources.Requirement`\n :param bool required: Set required flag for each requirement if provided."}
{"_id": "q_16464", "text": "Check if requirement is already satisfied by what was previously checked\n\n :param Requirement req: Requirement to check"}
{"_id": "q_16465", "text": "Add new requirements that must be fulfilled for this bump to occur"}
{"_id": "q_16466", "text": "Parse changes for requirements\n\n :param list changes:"}
{"_id": "q_16467", "text": "Bump dependencies using given requirements.\n\n :param RequirementsManager bump_reqs: Bump requirements manager\n :param dict kwargs: Additional args from argparse. Some bumpers accept user options, and some not.\n :return: List of :class:`Bump` changes made."}
{"_id": "q_16468", "text": "Transforms the object into an acceptable format for transmission.\n\n @throws ValueError\n To indicate this serializer does not support the encoding of the\n specified object."}
{"_id": "q_16469", "text": "Initialize based on a list of fortune files with set chances"}
{"_id": "q_16470", "text": "Extends a collection with a value."}
{"_id": "q_16471", "text": "special_handling = \"?\" , identifier , \"?\" ;"}
{"_id": "q_16472", "text": "All package info for given package"}
{"_id": "q_16473", "text": "All versions for package"}
{"_id": "q_16474", "text": "Flush and close the stream.\n\n This is called automatically by the base resource on resources\n unless the resource is operating asynchronously; in that case,\n this method MUST be called in order to signal the end of the request.\n If not the request will simply hang as it is waiting for some\n thread to tell it to return to the client."}
{"_id": "q_16475", "text": "Writes the given chunk to the output buffer.\n\n @param[in] chunk\n Either a byte array, a unicode string, or a generator. If `chunk`\n is a generator then calling `self.write(<generator>)` is\n equivalent to:\n\n @code\n for x in <generator>:\n self.write(x)\n self.flush()\n @endcode\n\n @param[in] serialize\n True to serialize the lines in a determined serializer.\n\n @param[in] format\n A specific format to serialize in; if provided, no detection is\n done. If not provided, the accept header (as well as the URL\n extension) is looked at to determine an appropriate serializer."}
{"_id": "q_16476", "text": "Serializes the data into this response using a serializer.\n\n @param[in] data\n The data to be serialized.\n\n @param[in] format\n A specific format to serialize in; if provided, no detection is\n done. If not provided, the accept header (as well as the URL\n extension) is looked at to determine an appropriate serializer.\n\n @returns\n A tuple of the serialized text and an instance of the\n serializer used."}
{"_id": "q_16477", "text": "Writes the passed chunk, flushes it to the client,\n and terminates the connection."}
{"_id": "q_16478", "text": "The parse tree generated by the source."}
{"_id": "q_16479", "text": "The AST rules."}
{"_id": "q_16480", "text": "The AST comments."}
{"_id": "q_16481", "text": "The diretives parsed from the comments."}
{"_id": "q_16482", "text": "The python source of the parser generated from the input source."}
{"_id": "q_16483", "text": "Returns the python source code for the generated parser."}
{"_id": "q_16484", "text": "Reads the directives and generates source code for custom imports."}
{"_id": "q_16485", "text": "Builds the class definition of the parser."}
{"_id": "q_16486", "text": "Gets the entry_point value for the parser."}
{"_id": "q_16487", "text": "Generates the source code for a rule."}
{"_id": "q_16488", "text": "The return value for each rule can be either retyped, compressed or left alone. This method\n determines that and returns the source code text for accomplishing it."}
{"_id": "q_16489", "text": "Convert an expression to an Abstract Syntax Tree Node."}
{"_id": "q_16490", "text": "Convert a parse tree node into an absract syntax tree node."}
{"_id": "q_16491", "text": "Flattens a list of optree operands based on a pred.\n\n This is used to convert concatenation([x, concatenation[y, ...]]) (or alternation) to\n concatenation([x, y, ...])."}
{"_id": "q_16492", "text": "Grouping groups are implied by optrees, this function hoists grouping group expressions up\n to their parent node."}
{"_id": "q_16493", "text": "Convert an abstract syntax tree to python source code."}
{"_id": "q_16494", "text": "Convert an AST option group to python source code."}
{"_id": "q_16495", "text": "Convert an AST repetition group to python source code."}
{"_id": "q_16496", "text": "Convert an AST concatenate op to python source code."}
{"_id": "q_16497", "text": "Convert an AST exclude op to python source code."}
{"_id": "q_16498", "text": "Convert an AST multiply op to python source code."}
{"_id": "q_16499", "text": "Convert an AST repeat op to python source code."}
{"_id": "q_16500", "text": "Finds all directives with a certain name, or that passes a predicate."}
{"_id": "q_16501", "text": "A directive is a line in a comment that begins with '!'."}
{"_id": "q_16502", "text": "Handle the results of the API call"}
{"_id": "q_16503", "text": "This ``Context Manager`` is used to move the contents of a directory\n elsewhere temporarily and put them back upon exit. This allows testing\n code to use the same file directories as normal code without fear of\n damage.\n\n The name of the temporary directory which contains your files is yielded.\n\n :param dirname:\n Path name of the directory to be replaced.\n\n\n Example:\n\n .. code-block:: python\n\n with replaced_directory('/foo/bar/') as rd:\n # \"/foo/bar/\" has been moved & renamed\n with open('/foo/bar/thing.txt', 'w') as f:\n f.write('stuff')\n f.close()\n\n\n # got here? => \"/foo/bar/ is now restored and temp has been wiped, \n # \"thing.txt\" is gone"}
{"_id": "q_16504", "text": "This ``Context Manager`` redirects STDOUT to a ``StringIO`` objects\n which is returned from the ``Context``. On exit STDOUT is restored.\n\n Example:\n\n .. code-block:: python\n\n with capture_stdout() as capture:\n print('foo')\n\n # got here? => capture.getvalue() will now have \"foo\\\\n\""}
{"_id": "q_16505", "text": "Remove a global hotkey.\n \n control - The control to affect\n key - The key to remove."}
{"_id": "q_16506", "text": "Configure handling of command line arguments."}
{"_id": "q_16507", "text": "Configure logging based on command line options"}
{"_id": "q_16508", "text": "Validates the command line arguments passed to the CLI\n Derived classes that override need to call this method before\n validating their arguments"}
{"_id": "q_16509", "text": "Convert a list of nodes in infix order to a list of nodes in postfix order.\n\n E.G. with normal algebraic precedence, 3 + 4 * 5 -> 3 4 5 * +"}
{"_id": "q_16510", "text": "Builds the URL configuration for this resource."}
{"_id": "q_16511", "text": "Dump an object in req format to the fp given.\n\n :param Mapping obj: The object to serialize. Must have a keys method.\n :param fp: A writable that can accept all the types given.\n :param separator: The separator between key and value. Defaults to u'|' or b'|', depending on the types.\n :param index_separator: The separator between key and index. Defaults to u'_' or b'_', depending on the types."}
{"_id": "q_16512", "text": "Dump an object in req format to a string.\n\n :param Mapping obj: The object to serialize. Must have a keys method.\n :param separator: The separator between key and value. Defaults to u'|' or b'|', depending on the types.\n :param index_separator: The separator between key and index. Defaults to u'_' or b'_', depending on the types."}
{"_id": "q_16513", "text": "Loads an object from a string.\n\n :param s: An object to parse\n :type s: bytes or str\n :param separator: The separator between key and value. Defaults to u'|' or b'|', depending on the types.\n :param index_separator: The separator between key and index. Defaults to u'_' or b'_', depending on the types.\n :param cls: A callable that returns a Mapping that is filled with pairs. The most common alternate option would be OrderedDict.\n :param list_cls: A callable that takes an iterable and returns a sequence."}
{"_id": "q_16514", "text": "Read the file and parse JSON into dictionary"}
{"_id": "q_16515", "text": "Looks up the metric definition from the definitions from the API call"}
{"_id": "q_16516", "text": "Gets the maximum length of each column in the field table"}
{"_id": "q_16517", "text": "Gets the maximum length of each column"}
{"_id": "q_16518", "text": "Escape underscores so that the markdown is correct"}
{"_id": "q_16519", "text": "Sends the field definitions ot standard out"}
{"_id": "q_16520", "text": "Sends the markdown of the metric definitions to standard out"}
{"_id": "q_16521", "text": "Look up each of the metrics and then output in Markdown"}
{"_id": "q_16522", "text": "Expand targets by looking for '-r' in targets."}
{"_id": "q_16523", "text": "Gets the Nginx config for the project"}
{"_id": "q_16524", "text": "Creates the virtualenv for the project"}
{"_id": "q_16525", "text": "Creates scripts to start and stop the application"}
{"_id": "q_16526", "text": "Creates the full project"}
{"_id": "q_16527", "text": "Dasherizes the passed value."}
{"_id": "q_16528", "text": "Attempt to parse source code."}
{"_id": "q_16529", "text": "Redirect to the canonical URI for this resource."}
{"_id": "q_16530", "text": "Parses out parameters and separates them out of the path.\n\n This uses one of the many defined patterns on the options class. But,\n it defaults to a no-op if there are no defined patterns."}
{"_id": "q_16531", "text": "Helper method used in conjunction with the view handler to\n stream responses to the client."}
{"_id": "q_16532", "text": "Deserializes the text using a determined deserializer.\n\n @param[in] request\n The request object to pull information from; normally used to\n determine the deserialization format (when `format` is\n not provided).\n\n @param[in] text\n The text to be deserialized. Can be left blank and the\n request will be read.\n\n @param[in] format\n A specific format to deserialize in; if provided, no detection is\n done. If not provided, the content-type header is looked at to\n determine an appropriate deserializer.\n\n @returns\n A tuple of the deserialized data and an instance of the\n deserializer used."}
{"_id": "q_16533", "text": "Entry-point of the dispatch cycle for this resource.\n\n Performs common work such as authentication, decoding, etc. before\n handing complete control of the result to a function with the\n same name as the request method."}
{"_id": "q_16534", "text": "Ensure we are allowed to access this resource."}
{"_id": "q_16535", "text": "Ensure that we're allowed to use this HTTP method."}
{"_id": "q_16536", "text": "Processes every request.\n\n Directs control flow to the appropriate HTTP/1.1 method."}
{"_id": "q_16537", "text": "Process an `OPTIONS` request.\n\n Used to initiate a cross-origin request. All handling specific to\n CORS requests is done on every request however this method also\n returns a list of available methods."}
{"_id": "q_16538", "text": "Add specific command line arguments for this command"}
{"_id": "q_16539", "text": "Attempt to parse the passed in string into a valid datetime.\n If we get a parse error then assume the string is an epoch time\n and convert to a datetime."}
{"_id": "q_16540", "text": "Output results in structured JSON format"}
{"_id": "q_16541", "text": "Output results in raw JSON format"}
{"_id": "q_16542", "text": "The default predicate used in Node.trimmed."}
{"_id": "q_16543", "text": "Returns a partial of _get_repetition that accepts only a text argument."}
{"_id": "q_16544", "text": "Checks the beginning of text for a value. If it is found, a terminal ParseNode is returned\n filled out appropriately for the value it found. DeadEnd is raised if the value does not match."}
{"_id": "q_16545", "text": "Tries to pull text with extractor repeatedly.\n\n Bounds is a 2-tuple of (lbound, ubound) where lbound is a number and ubound is a number or None.\n If the ubound is None, this method will execute extractor on text until extrator raises DeadEnd.\n Otherwise, extractor will be called until it raises DeadEnd, or it has extracted ubound times.\n\n If the number of children extracted is >= lbound, then a ParseNode with type repetition is\n returned. Otherwise, DeadEnd is raised.\n\n Bounds are interpreted as (lbound, ubound]\n\n This method is used to implement:\n - option (0, 1)\n - zero_or_more (0, None)\n - one_or_more (1, None)\n - exact_repeat (n, n)"}
{"_id": "q_16546", "text": "Returns extractor's result if exclusion does not match.\n\n If exclusion raises DeadEnd (meaning it did not match) then the result of extractor(text) is\n returned. Otherwise, if exclusion does not raise DeadEnd it means it did match, and we then\n raise DeadEnd."}
{"_id": "q_16547", "text": "Returns the number of characters at the beginning of text that are whitespace."}
{"_id": "q_16548", "text": "This method calls an extractor on some text.\n\n If extractor is just a string, it is passed as the first value to _get_terminal. Otherwise it is\n treated as a callable and text is passed directly to it.\n\n This makes it so you can have a shorthand of terminal(val) <-> val."}
{"_id": "q_16549", "text": "Returns True if this node has no children, or if all of its children are ParseNode instances\n and are empty."}
{"_id": "q_16550", "text": "Add ignored text to the node. This will add the length of the ignored text to the node's\n consumed property."}
{"_id": "q_16551", "text": "Flattens nodes by hoisting children up to ancestor nodes.\n\n A node is hoisted if pred(node) returns True."}
{"_id": "q_16552", "text": "Trim a ParseTree.\n\n A node is trimmed if pred(node) returns True."}
{"_id": "q_16553", "text": "Returns a new ParseNode whose type is this node's type, and whose children are all the\n children from this node and the other whose length is not 0."}
{"_id": "q_16554", "text": "Returns a new node with the same contents as self, but with a new node_type."}
{"_id": "q_16555", "text": "Turns the node into a value node, whose single string child is the concatenation of all its\n children."}
{"_id": "q_16556", "text": "Wraps the decorated function in a lightweight resource."}
{"_id": "q_16557", "text": "Puts the cursor on the next character."}
{"_id": "q_16558", "text": "Sets cursor as end of previous line."}
{"_id": "q_16559", "text": "Increment the cursor to the next character."}
{"_id": "q_16560", "text": "Save current position."}
{"_id": "q_16561", "text": "You could set the name after construction"}
{"_id": "q_16562", "text": "Count functions defined by this scope"}
{"_id": "q_16563", "text": "Update the Set with values of another Set"}
{"_id": "q_16564", "text": "Update Set with common values of another Set"}
{"_id": "q_16565", "text": "Create a new Set produced by subtracting another Set from this Set"}
{"_id": "q_16566", "text": "Create a new Set with values present in only one Set"}
{"_id": "q_16567", "text": "Add it to the Set"}
{"_id": "q_16568", "text": "Remove it only if present"}
{"_id": "q_16569", "text": "Retrieve all values"}
{"_id": "q_16570", "text": "Retrieve the first Signature, ordered by mangling, descending"}
{"_id": "q_16571", "text": "Retrieve the last Signature, ordered by mangling, descending"}
{"_id": "q_16572", "text": "Get a signature instance by its internal_name"}
{"_id": "q_16573", "text": "Retrieve a Set of all signatures by symbol name"}
{"_id": "q_16574", "text": "Retrieve the unique Signature of a symbol.\n Fail if the Signature is not unique"}
{"_id": "q_16575", "text": "For now, polymorphic return types are handled by symbol artefacts.\n\n --> possibly multi-polymorphic, but with different constraints attached!"}
{"_id": "q_16576", "text": "If we don't have an injector call from the parent"}
{"_id": "q_16577", "text": "Normalize AST nodes.\n\n All builtin containers are replaced by referenceable subclasses"}
{"_id": "q_16578", "text": "Allow the node to be completely mutated into any subclass of Node"}
{"_id": "q_16579", "text": "Check if the given hit is within the limits."}
{"_id": "q_16580", "text": "Compute a signature using resolution.\n\n TODO: discuss the relevance of a final generation for a signature"}
{"_id": "q_16581", "text": "Use self.resolution to substitute type_name.\n Allows instantiating polymorphic types ?1, ?toto"}
{"_id": "q_16582", "text": "Warning! This needs rethinking for global polymorphic types"}
{"_id": "q_16583", "text": "Adds a method to the internal lists of allowed or denied methods.\n Each object in the internal list contains a resource ARN and a\n condition statement. The condition statement can be null."}
{"_id": "q_16584", "text": "This function loops over an array of objects containing\n a resourceArn and conditions statement and generates\n the array of statements for the policy."}
{"_id": "q_16585", "text": "Deletes the specified file from the local filesystem."}
{"_id": "q_16586", "text": "Deletes the specified file, either locally or from S3, depending on the file's storage type."}
{"_id": "q_16587", "text": "Saves the specified file to the configured S3 bucket."}
{"_id": "q_16588", "text": "Finds files by listing an S3 bucket's contents by prefix."}
{"_id": "q_16589", "text": "Add a mapping with key thing_name for callobject in chainmap with\n namespace handling."}
{"_id": "q_16590", "text": "Attach a method to a parsing class and register it as a parser rule.\n\n The method is registered with its name unless rulename is provided."}
{"_id": "q_16591", "text": "Attach a class to a parsing decorator and register it to the global\n decorator list.\n The class is registered with its name unless directname is provided"}
{"_id": "q_16592", "text": "Allows aliasing a node to another name.\n\n Useful to bind a node to _ as the return of a Rule::\n\n R = [\n __scope__:L [item:I #add_item(L, I)]* #bind('_', L)\n ]\n\n It's also the default behaviour of ':>'"}
{"_id": "q_16593", "text": "Return True if the parser can consume an EOL byte sequence."}
{"_id": "q_16594", "text": "Push context variable to store rule nodes."}
{"_id": "q_16595", "text": "Pop the context variable that stores rule nodes"}
{"_id": "q_16596", "text": "Return the text value of the node"}
{"_id": "q_16597", "text": "Save the current index under the given name."}
{"_id": "q_16598", "text": "Extract the string between saved and current index."}
{"_id": "q_16599", "text": "Merge internal rules set with the given rules"}
{"_id": "q_16600", "text": "Merge internal hooks set with the given hooks"}
{"_id": "q_16601", "text": "Evaluate a rule by name."}
{"_id": "q_16602", "text": "Same as readText but doesn't consume the stream."}
{"_id": "q_16603", "text": "Consume the c head byte, increment the current index and return True,\n else return False. It uses peekchar and it's the same as '' in BNF."}
{"_id": "q_16604", "text": "Consume all the stream. Same as EOF in BNF."}
{"_id": "q_16605", "text": "Consume whitespace characters."}
{"_id": "q_16606", "text": "Set the data type of the hits.\n\n Fields that are not mentioned here are NOT copied into the clustered hits array.\n Clusterizer has to know the hit data type to produce the clustered hit result with the same data types.\n\n Parameters:\n -----------\n hit_dtype : numpy.dtype or equivalent\n Defines the dtype of the hit array.\n\n Example:\n --------\n hit_dtype = [(\"column\", np.uint16), (\"row\", np.uint16)], where\n \"column\", \"row\" is the field name of the input hit array."}
{"_id": "q_16607", "text": "Takes the hit array and checks that the important data fields have the same data type as the clustered hit array and that the field names are correct."}
{"_id": "q_16608", "text": "Create a tree.Rule"}
{"_id": "q_16609", "text": "Attach a parser tree to the dict of rules"}
{"_id": "q_16610", "text": "Add the rule name"}
{"_id": "q_16611", "text": "Create a tree.Seq"}
{"_id": "q_16612", "text": "Create a tree.Alt"}
{"_id": "q_16613", "text": "Add a repeater to the previous sequence"}
{"_id": "q_16614", "text": "Create a tree.Bind"}
{"_id": "q_16615", "text": "Create a tree.Hook"}
{"_id": "q_16616", "text": "Parse an int in a parameter list"}
{"_id": "q_16617", "text": "Parse a str in a parameter list"}
{"_id": "q_16618", "text": "Parse a char in a parameter list"}
{"_id": "q_16619", "text": "Parse a hook name"}
{"_id": "q_16620", "text": "Parse a hook parameter"}
{"_id": "q_16621", "text": "Parse the DSL and provide a dictionary of all resulting rules.\n Called by the MetaGrammar class.\n\n TODO: could be done in the rules property of parsing.BasicParser???"}
{"_id": "q_16622", "text": "Consume comments and whitespace characters."}
{"_id": "q_16623", "text": "write a '.dot' file."}
{"_id": "q_16624", "text": "Manage transition of state."}
{"_id": "q_16625", "text": "Inferring the type of a block means typing each of its sub-elements"}
{"_id": "q_16626", "text": "Infer type on the subexpr"}
{"_id": "q_16627", "text": "Infer type from an ID!\n - check if the ID is declared in the scope\n - if not, the ID is of polymorphic type"}
{"_id": "q_16628", "text": "Infer type from a LITERAL!\n The type of a literal depends on the language.\n We adopt a basic convention"}
{"_id": "q_16629", "text": "Dump tag,rule,id and value cache. For debug.\n\n example::\n\n R = [\n #dump_nodes\n ]"}
{"_id": "q_16630", "text": "Generates code for a rule.\n\n def rulename(self):\n <code for the rule>\n return True"}
{"_id": "q_16631", "text": "Create the appropriate scope exiting statement.\n\n The documentation only shows one level and always uses\n 'return False' in examples.\n\n 'raise AltFalse()' within a try.\n 'break' within a loop.\n 'return False' otherwise."}
{"_id": "q_16632", "text": "Normalize a test expression into a statements list.\n\n Statements list are returned as-is.\n Expression is packaged as:\n if not expr:\n return False"}
{"_id": "q_16633", "text": "Generates python code calling the function.\n\n fn(*args)"}
{"_id": "q_16634", "text": "Generates python code calling a hook.\n\n self.evalHook('hookname', self.ruleNodes[-1])"}
{"_id": "q_16635", "text": "Generates python code calling a rule.\n\n self.evalRule('rulename')"}
{"_id": "q_16636", "text": "Generates python code for a scope.\n\n if not self.begin():\n return False\n res = self.pt()\n if not self.end():\n return False\n return res"}
{"_id": "q_16637", "text": "Generates python code for clauses.\n\n #Continuous clauses which can be inlined are combined with and\n clause and clause\n\n if not clause:\n return False\n if not clause:\n return False"}
{"_id": "q_16638", "text": "Generates python code for an optional clause.\n\n <code for the clause>"}
{"_id": "q_16639", "text": "Generates python code for a clause repeated 0 or more times.\n\n #If all clauses can be inlined\n while clause:\n pass\n\n while True:\n <code for the clause>"}
{"_id": "q_16640", "text": "Concatenate two strings, handling \\n for tabulation"}
{"_id": "q_16641", "text": "Recurse into lists for string computation"}
{"_id": "q_16642", "text": "Print nodes.\n\n example::\n\n R = [\n In : node #echo(\"coucou\", 12, node)\n ]"}
{"_id": "q_16643", "text": "Function that connects a sequence of MatchExpr to each other."}
{"_id": "q_16644", "text": "Function that creates a state for each instance\n of MatchExpr in the given list and connects them to each other."}
{"_id": "q_16645", "text": "Test if a node set with setint or setstr equals a certain value\n\n example::\n\n R = [\n __scope__:n\n ['a' #setint(n, 12) | 'b' #setint(n, 14)]\n C\n [#eq(n, 12) D]\n ]"}
{"_id": "q_16646", "text": "Create a Grammar from a string"}
{"_id": "q_16647", "text": "Create a Grammar from a file"}
{"_id": "q_16648", "text": "Parse source using the grammar"}
{"_id": "q_16649", "text": "Parse filename using the grammar"}
{"_id": "q_16650", "text": "Basically copy one node to another.\n Useful to transmit a node from a terminal\n rule as the result of the current rule.\n\n example::\n\n R = [\n In : node #set(_, node)\n ]\n\n here the node returned by the rule In is\n also the node returned by the rule R"}
{"_id": "q_16651", "text": "Get the value of a subnode\n\n example::\n\n R = [\n __scope__:big getsomethingbig:>big\n #get(_, big, '.val') // copy big.val into _\n ]"}
{"_id": "q_16652", "text": "Check all necessary system requirements to exist.\n\n :param pre_requirements:\n Sequence of pre-requirements to check by running\n ``where <pre_requirement>`` on Windows and ``which ...`` elsewhere."}
{"_id": "q_16653", "text": "Convert config dict to arguments list.\n\n :param config: Configuration dict."}
{"_id": "q_16654", "text": "Decorator for error handling."}
{"_id": "q_16655", "text": "Install library or project into virtual environment.\n\n :param env: Use given virtual environment name.\n :param requirements: Use given requirements file for pip.\n :param args: Pass given arguments to pip script.\n :param ignore_activated:\n Do not run pip inside already activated virtual environment. By\n default: False\n :param install_dev_requirements:\n When enabled install prefixed or suffixed dev requirements after\n original installation process completed. By default: False\n :param quiet: Do not output message to terminal. By default: False"}
{"_id": "q_16656", "text": "Iterate over dict items."}
{"_id": "q_16657", "text": "Bootstrap Python projects and libraries with virtualenv and pip.\n\n Also check system requirements before bootstrap and run post bootstrap\n hook if any.\n\n :param \\*args: Command line arguments list."}
{"_id": "q_16658", "text": "Run pip command in given or activated virtual environment.\n\n :param env: Virtual environment name.\n :param cmd: Pip subcommand to run.\n :param ignore_activated:\n Ignore activated virtual environment and use given venv instead. By\n default: False\n :param \\*\\*kwargs:\n Additional keyword arguments to be passed to :func:`~run_cmd`"}
{"_id": "q_16659", "text": "Convert config dict to command line args line.\n\n :param config: Configuration dict.\n :param bootstrap: Bootstrapper configuration dict."}
{"_id": "q_16660", "text": "Print error message to stderr, using ANSI-colors.\n\n :param message: Message to print\n :param wrap:\n Wrap message into ``ERROR: <message>. Exit...`` template. By default:\n True"}
{"_id": "q_16661", "text": "Read and parse configuration file. By default, ``filename`` is relative\n path to current work directory.\n\n If no config file found, default ``CONFIG`` would be used.\n\n :param filename: Read config from given filename.\n :param args: Parsed command line arguments."}
{"_id": "q_16662", "text": "Save error traceback to bootstrapper log file.\n\n :param err: Caught exception."}
{"_id": "q_16663", "text": "Get deposits."}
{"_id": "q_16664", "text": "Dump the deposition object as dictionary."}
{"_id": "q_16665", "text": "Returns the absolute path of folderpath.\n If the path does not exist, will raise IOError."}
{"_id": "q_16666", "text": "Return the extension of fpath.\n\n Parameters\n ----------\n fpath: string\n File name or path\n\n check_if_exists: bool\n\n allowed_exts: dict\n Dictionary of strings, where the key is the last part of a complex ('.' separated) extension\n and the value is the previous part.\n For example: for the '.nii.gz' extension I would have a dict as {'.gz': ['.nii',]}\n\n Returns\n -------\n str\n The extension of the file name or path"}
{"_id": "q_16667", "text": "Add the extension ext to fpath if it doesn't have it.\n\n Parameters\n ----------\n filepath: str\n File name or path\n\n ext: str\n File extension\n\n check_if_exists: bool\n\n Returns\n -------\n File name or path with extension added, if needed."}
{"_id": "q_16668", "text": "Joins path to each line in filelist\n\n Parameters\n ----------\n path: str\n\n filelist: list of str\n\n Returns\n -------\n list of filepaths"}
{"_id": "q_16669", "text": "Return a folder path if it exists.\n\n First will check if it is an existing system path, if it is, will return it\n expanded and absolute.\n\n If this fails will look for the rcpath variable in the app_name rcfiles or\n exclusively within the given section_name, if given.\n\n Parameters\n ----------\n rcpath: str\n Existing folder path or variable name in app_name rcfile with an\n existing one.\n\n section_name: str\n Name of a section in the app_name rcfile to look exclusively there for\n variable names.\n\n app_name: str\n Name of the application to look for rcfile configuration files.\n\n Returns\n -------\n sys_path: str\n An expanded absolute file or folder path if the path exists.\n\n Raises\n ------\n IOError if the proposed sys_path does not exist."}
{"_id": "q_16670", "text": "Read environment variables and config files and return them merged with\n predefined list of arguments.\n\n Parameters\n ----------\n appname: str\n Application name, used for config files and environment variable\n names.\n\n section: str\n Name of the section to be read. If this is not set: appname.\n\n args:\n arguments from command line (optparse, docopt, etc).\n\n strip_dashes: bool\n Strip dashes prefixing key names from args dict.\n\n Returns\n --------\n dict\n containing the merged variables of environment variables, config\n files and args.\n\n Raises\n ------\n IOError\n In case the return value is empty.\n\n Notes\n -----\n Environment variables are read if they start with appname in uppercase\n with underscore, for example:\n\n TEST_VAR=1\n\n Config files compatible with ConfigParser are read and the section name\n appname is read, example:\n\n [appname]\n var=1\n\n We can also have host-dependent configuration values, which have\n priority over the default appname values.\n\n [appname]\n var=1\n\n [appname:mylinux]\n var=3\n\n\n For boolean flags do not try to use: 'True' or 'False',\n 'on' or 'off',\n '1' or '0'.\n Unless you are willing to parse this values by yourself.\n We recommend commenting the variables out with '#' if you want to set a\n flag to False and check if it is in the rcfile cfg dict, i.e.:\n\n flag_value = 'flag_variable' in cfg\n\n\n Files are read from: /etc/appname/config,\n /etc/appfilerc,\n ~/.config/appname/config,\n ~/.config/appname,\n ~/.appname/config,\n ~/.appnamerc,\n appnamerc,\n .appnamerc,\n appnamerc file found in 'path' folder variable in args,\n .appnamerc file found in 'path' folder variable in args,\n file provided by 'config' variable in args.\n\n Example\n -------\n args = rcfile(__name__, docopt(__doc__, version=__version__))"}
{"_id": "q_16671", "text": "Return the dictionary containing the rcfile section configuration\n variables.\n\n Parameters\n ----------\n section_name: str\n Name of the section in the rcfiles.\n\n app_name: str\n Name of the application to look for its rcfiles.\n\n Returns\n -------\n settings: dict\n Dict with variable values"}
{"_id": "q_16672", "text": "Return the value of the variable in the section_name section of the\n app_name rc file.\n\n Parameters\n ----------\n var_name: str\n Name of the variable to be searched for.\n\n section_name: str\n Name of the section in the rcfiles.\n\n app_name: str\n Name of the application to look for its rcfiles.\n\n Returns\n -------\n var_value: str\n The value of the variable with given var_name."}
{"_id": "q_16673", "text": "Filters the lst using pattern.\n If pattern starts with '(' it will be considered a re regular expression,\n otherwise it will use fnmatch filter.\n\n :param lst: list of strings\n\n :param pattern: string\n\n :return: list of strings\n Filtered list of strings"}
{"_id": "q_16674", "text": "Given a nested dictionary adict.\n This returns its children just below the path.\n The path is a string composed of adict keys separated by sep.\n\n :param adict: nested dict\n\n :param path: str\n\n :param sep: str\n\n :return: dict or list or leaf of treemap"}
{"_id": "q_16675", "text": "Given a nested dictionary, this returns all its leave elements in a list.\n\n :param adict:\n\n :return: list"}
{"_id": "q_16676", "text": "Looks for path_regex within base_path. Each match is appended\n to the returned list.\n path_regex may contain subfolder structure.\n If any part of the folder structure is a\n\n :param base_path: str\n\n :param path_regex: str\n\n :return list of strings"}
{"_id": "q_16677", "text": "Will create dirpath folder. If dirpath already exists and overwrite is False,\n will append a '+' suffix to dirpath until dirpath does not exist."}
{"_id": "q_16678", "text": "Converts an array-like to an array of floats\n\n The new dtype will be np.float32 or np.float64, depending on the original\n type. The function can create a copy or modify the argument depending\n on the argument copy.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}\n\n copy : bool, optional\n If True, a copy of X will be created. If False, a copy may still be\n returned if X's dtype is not a floating point type.\n\n Returns\n -------\n XT : {array, sparse matrix}\n An array of type np.float"}
{"_id": "q_16679", "text": "Warning utility function to check that data type is floating point.\n\n Returns True if a warning was raised (i.e. the input is not float) and\n False otherwise, for easier input validation."}
{"_id": "q_16680", "text": "Convert a FWHM value to sigma in a Gaussian kernel.\n\n Parameters\n ----------\n fwhm: float or numpy.array\n fwhm value or values\n\n Returns\n -------\n fwhm: float or numpy.array\n sigma values"}
{"_id": "q_16681", "text": "Smooth images with a a Gaussian filter.\n\n Apply a Gaussian filter along the three first dimensions of arr.\n\n Parameters\n ----------\n arr: numpy.ndarray\n 3D or 4D array, with image number as last dimension.\n\n affine: numpy.ndarray\n Image affine transformation matrix for image.\n\n fwhm: scalar, numpy.ndarray\n Smoothing kernel size, as Full-Width at Half Maximum (FWHM) in millimeters.\n If a scalar is given, kernel width is identical on all three directions.\n A numpy.ndarray must have 3 elements, giving the FWHM along each axis.\n\n copy: bool\n if True, will make a copy of the input array. Otherwise will directly smooth the input array.\n\n Returns\n -------\n smooth_arr: numpy.ndarray"}
{"_id": "q_16682", "text": "Smooth images by applying a Gaussian filter.\n Apply a Gaussian filter along the three first dimensions of arr.\n In all cases, non-finite values in input image are replaced by zeros.\n\n This is copied and slightly modified from nilearn:\n https://github.com/nilearn/nilearn/blob/master/nilearn/image/image.py\n Added the **kwargs argument.\n\n Parameters\n ==========\n imgs: Niimg-like object or iterable of Niimg-like objects\n See http://nilearn.github.io/manipulating_images/manipulating_images.html#niimg.\n Image(s) to smooth.\n fwhm: scalar, numpy.ndarray, 'fast' or None\n Smoothing strength, as a Full-Width at Half Maximum, in millimeters.\n If a scalar is given, width is identical on all three directions.\n A numpy.ndarray must have 3 elements, giving the FWHM along each axis.\n If fwhm == 'fast', a fast smoothing will be performed with\n a filter [0.2, 1, 0.2] in each direction and a normalisation\n to preserve the scale.\n If fwhm is None, no filtering is performed (useful when just removal\n of non-finite values is needed)\n Returns\n =======\n filtered_img: nibabel.Nifti1Image or list of.\n Input image, filtered. If imgs is an iterable, then filtered_img is a\n list."}
{"_id": "q_16683", "text": "Create requests session with any required auth headers\n applied.\n\n :rtype: requests.Session."}
{"_id": "q_16684", "text": "Create requests session with AAD auth headers\n\n :rtype: requests.Session."}
{"_id": "q_16685", "text": "Return a grid with coordinates in 3D physical space for `img`."}
{"_id": "q_16686", "text": "Return the header and affine matrix from a Nifti file.\n\n Parameters\n ----------\n image: img-like object or str\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n Returns\n -------\n hdr, aff"}
{"_id": "q_16687", "text": "Return the voxel matrix of the Nifti file.\n If copy is True, will make a copy of the img before returning the data, so the input image is not modified.\n\n Parameters\n ----------\n image: img-like object or str\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n copy: bool\n If True, will make a copy of the img before returning the data, so the input image is not modified.\n\n Returns\n -------\n array_like"}
{"_id": "q_16688", "text": "From the list of absolute paths to nifti files, creates a Numpy array\n with the data.\n\n Parameters\n ----------\n img_filelist: list of str\n List of absolute file paths to nifti files. All nifti files must have\n the same shape.\n\n outdtype: dtype\n Type of the elements of the array, if not set will obtain the dtype from\n the first nifti file.\n\n Returns\n -------\n outmat: Numpy array with shape N x prod(vol.shape)\n containing the N files as flat vectors.\n\n vol_shape: Tuple with shape of the volumes, for reshaping."}
{"_id": "q_16689", "text": "Crops img as much as possible\n\n Will crop img, removing as many zero entries as possible\n without touching non-zero entries. Will leave one voxel of\n zero padding around the obtained non-zero area in order to\n avoid sampling issues later on.\n\n Parameters\n ----------\n image: img-like object or str\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n Image to be cropped.\n\n rtol: float\n relative tolerance (with respect to maximal absolute\n value of the image), under which values are considered\n negligible and thus croppable.\n\n copy: boolean\n Specifies whether cropped data is copied or not.\n\n Returns\n -------\n cropped_img: image\n Cropped version of the input image"}
{"_id": "q_16690", "text": "Create a new image of the same class as the reference image\n\n Parameters\n ----------\n ref_niimg: image\n Reference image. The new image will be of the same type.\n\n data: numpy array\n Data to be stored in the image\n\n affine: 4x4 numpy array, optional\n Transformation matrix\n\n copy_header: boolean, optional\n Indicates whether the header of the reference image should be used to\n create the new image\n\n Returns\n -------\n new_img: image\n A loaded image with the same type (and header) as the reference image."}
{"_id": "q_16691", "text": "Get BibDocs for Invenio 1."}
{"_id": "q_16692", "text": "Get bibdocs to check."}
{"_id": "q_16693", "text": "Check bibdocs."}
{"_id": "q_16694", "text": "Return the h5py.File given its file path.\n\n Parameters\n ----------\n file_path: string\n HDF5 file path\n\n mode: string\n r Readonly, file must exist\n r+ Read/write, file must exist\n w Create file, truncate if exists\n w- Create file, fail if exists\n a Read/write if exists, create otherwise (default)\n\n Returns\n -------\n h5file: h5py.File"}
{"_id": "q_16695", "text": "Return all dataset contents from h5path group in h5file in an OrderedDict.\n\n Parameters\n ----------\n h5file: h5py.File\n HDF5 file object\n\n h5path: str\n HDF5 group path to read datasets from\n\n Returns\n -------\n datasets: OrderedDict\n Dict with variables contained in file_path/h5path"}
{"_id": "q_16696", "text": "Return the node of type node_type names within h5path of h5file.\n\n Parameters\n ----------\n h5file: h5py.File\n HDF5 file object\n\n h5path: str\n HDF5 group path to get the group names from\n\n node_type: h5py object type\n HDF5 object type\n\n Returns\n -------\n names: list of str\n List of names"}
{"_id": "q_16697", "text": "Submits cmd to HTCondor queue\n\n Parameters\n ----------\n cmd: string\n Command to be submitted\n\n Returns\n -------\n int\n returncode value from calling the submission command."}
{"_id": "q_16698", "text": "Dump the oauth2server tokens."}
{"_id": "q_16699", "text": "Clean previously built package artifacts."}
{"_id": "q_16700", "text": "Upload the package to an index server.\n\n This implies cleaning and re-building the package.\n\n :param repo: Required. Name of the index server to upload to, as specifies\n in your .pypirc configuration file."}
{"_id": "q_16701", "text": "Get UserEXT objects."}
{"_id": "q_16702", "text": "Dump the UserEXt objects as a list of dictionaries.\n\n :param u: UserEXT to be dumped.\n :type u: `invenio_accounts.models.UserEXT [Invenio2.x]`\n :returns: User serialized to dictionary.\n :rtype: dict"}
{"_id": "q_16703", "text": "Get communities."}
{"_id": "q_16704", "text": "Get record ids for Invenio 1."}
{"_id": "q_16705", "text": "Get record ids for Invenio 2."}
{"_id": "q_16706", "text": "Get all restrictions for a given collection, users and fireroles."}
{"_id": "q_16707", "text": "Get all collections the record belong to."}
{"_id": "q_16708", "text": "Dump JSON of record."}
{"_id": "q_16709", "text": "Get recids matching query and with changes."}
{"_id": "q_16710", "text": "Dump MARCXML and JSON representation of a record.\n\n :param recid: Record identifier\n :param from_date: Dump only revisions from this date onwards.\n :param with_json: If ``True`` use old ``Record.create`` to generate the\n JSON representation of the record.\n :param latest_only: Dump only the last revision of the record metadata.\n :param with_collections: If ``True`` dump the list of collections that the\n record belongs to.\n :returns: List of versions of the record."}
{"_id": "q_16711", "text": "Will rename all files in file_lst to a padded serial\n number plus its extension\n\n :param file_lst: list of path.py paths"}
{"_id": "q_16712", "text": "Search for dicoms in folders and save file paths into\n self.dicom_paths set.\n\n :param folders: str or list of str"}
{"_id": "q_16713", "text": "Overwrites self.items with the given set of files.\n Will filter the fileset and keep only Dicom files.\n\n Parameters\n ----------\n fileset: iterable of str\n Paths to files\n\n check_if_dicoms: bool\n Whether to check if the items in fileset are dicom file paths"}
{"_id": "q_16714", "text": "Update this set with the union of itself and dicomset.\n\n Parameters\n ----------\n dicomset: DicomFileSet"}
{"_id": "q_16715", "text": "Copies all files within this set to the output_folder\n\n Parameters\n ----------\n output_folder: str\n Path of the destination folder of the files\n\n rename_files: bool\n Whether or not to rename the files to a sequential format\n\n mkdir: bool\n Whether to make the folder if it does not exist\n\n verbose: bool\n Whether to print to stdout the files that are being copied"}
{"_id": "q_16716", "text": "Creates a lambda function to read DICOM files.\n If store_metadata is False, will only return the file path.\n Else, if you give header_fields, will return only the set of\n header_fields within a DicomFile object, or the whole DICOM file if\n None.\n\n :return: function\n This function has only one parameter: file_path"}
{"_id": "q_16717", "text": "Tries to read the file using dicom.read_file;\n if the file exists and dicom.read_file does not raise\n an Exception, returns True. False otherwise.\n\n :param filepath: str\n Path to DICOM file\n\n :return: bool"}
{"_id": "q_16718", "text": "Return the attributes values from this DicomFile\n\n Parameters\n ----------\n attributes: str or list of str\n DICOM field names\n\n default: str\n Default value if the attribute does not exist.\n\n Returns\n -------\n Value of the field or list of values."}
{"_id": "q_16719", "text": "Picks a function whose first argument is an `img`, processes its\n data and returns a numpy array. This decorator wraps this numpy array\n into a nibabel.Nifti1Image."}
{"_id": "q_16720", "text": "Pixelwise division, or division by a number"}
{"_id": "q_16721", "text": "Return the image with the given `mask` applied."}
{"_id": "q_16722", "text": "Return an image with the binarised version of the data of `img`."}
{"_id": "q_16723", "text": "Return a z-scored version of `icc`.\n This function is based on GIFT `icatb_convertImageToZScores` function."}
{"_id": "q_16724", "text": "Write the `data` and `meta_dict` in two files with names\n that use `filename` as a prefix.\n\n Parameters\n ----------\n filename: str\n Path to the output file.\n This is going to be used as a prefix.\n Two files will be created, one with a '.mhd' extension\n and another with '.raw'. If `filename` has any of these already\n they will be taken into account to build the filenames.\n\n data: numpy.ndarray\n n-dimensional image data array.\n\n shape: tuple\n Tuple describing the shape of `data`\n Default: data.shape\n\n meta_dict: dict\n Dictionary with the fields of the metadata .mhd file\n Default: {}\n\n Returns\n -------\n mhd_filename: str\n Path to the .mhd file\n\n raw_filename: str\n Path to the .raw file"}
{"_id": "q_16725", "text": "Copy .mhd and .raw files to dst.\n\n If dst is a folder, won't change the file, but if dst is another filepath,\n will modify the ElementDataFile field in the .mhd to point to the\n new renamed .raw file.\n\n Parameters\n ----------\n src: str\n Path to the .mhd file to be copied\n\n dst: str\n Path to the destination of the .mhd and .raw files.\n If a new file name is given, the extension will be ignored.\n\n Returns\n -------\n dst: str"}
{"_id": "q_16726", "text": "Initialize app context for Invenio 2.x."}
{"_id": "q_16727", "text": "Get roles connected to an action."}
{"_id": "q_16728", "text": "Get action definitions to dump."}
{"_id": "q_16729", "text": "SPSS .sav files to Pandas DataFrame through Rpy2\n\n :param input_file: string\n\n :return:"}
{"_id": "q_16730", "text": "Valid extensions '.pyshelf', '.mat', '.hdf5' or '.h5'\n\n @param filename: string\n\n @param varnames: list of strings\n Names of the variables\n\n @param varlist: list of objects\n The objects to be saved"}
{"_id": "q_16731", "text": "Create CLI environment"}
{"_id": "q_16732", "text": "Load the oauth2server token from data dump."}
{"_id": "q_16733", "text": "Import config var import path or use default value."}
{"_id": "q_16734", "text": "Find all the ROIs in img and returns a similar volume with the ROIs\n emptied, keeping only their border voxels.\n\n This is useful for DTI tractography.\n\n Parameters\n ----------\n img: img-like object or str\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n Returns\n -------\n np.ndarray\n an array of same shape as img_data"}
{"_id": "q_16735", "text": "Return the largest connected component of a 3D array.\n\n Parameters\n -----------\n volume: numpy.array\n 3D boolean array.\n\n Returns\n --------\n volume: numpy.array\n 3D boolean array with only one connected component."}
{"_id": "q_16736", "text": "Return as mask for `volume` that includes only areas where\n the connected components have a size bigger than `min_cluster_size`\n in number of voxels.\n\n Parameters\n -----------\n volume: numpy.array\n 3D boolean array.\n\n min_cluster_size: int\n Minimum size in voxels that the connected component must have.\n\n Returns\n --------\n volume: numpy.array\n 3D int array with a mask excluding small connected components."}
{"_id": "q_16737", "text": "Look for the files in filelist containing the names in roislist, these files will be opened, binarised\n and merged in one mask.\n\n Parameters\n ----------\n roislist: list of strings\n Names of the ROIs, which will have to be in the names of the files in filelist.\n\n filelist: list of strings\n List of paths to the volume files containing the ROIs.\n\n Returns\n -------\n numpy.ndarray\n Mask volume"}
{"_id": "q_16738", "text": "Return a sorted list of the non-zero unique values of arr.\n\n Parameters\n ----------\n arr: numpy.ndarray\n The data array\n\n Returns\n -------\n list of items of arr."}
{"_id": "q_16739", "text": "Get the center of mass for each ROI in the given volume.\n\n Parameters\n ----------\n vol: numpy ndarray\n Volume with different values for each ROI.\n\n Returns\n -------\n OrderedDict\n Each entry in the dict has the ROI value as key and the center_of_mass coordinate as value."}
{"_id": "q_16740", "text": "Pick one 3D volume from a 4D nifti image file\n\n Parameters\n ----------\n image: img-like object or str\n Volume defining different ROIs.\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n vol_idx: int\n Index of the 3D volume to be extracted from the 4D volume.\n\n Returns\n -------\n vol, hdr, aff\n The data array, the image header and the affine transform matrix."}
{"_id": "q_16741", "text": "Saves a Numpy array in a dataset in the HDF file, registers it as\n ds_name and returns the h5py dataset.\n\n :param ds_name: string\n Registration name of the dataset to be registered.\n\n :param data: Numpy ndarray\n\n :param dtype: dtype\n Datatype of the dataset\n\n :return: h5py dataset"}
{"_id": "q_16742", "text": "See create_dataset."}
{"_id": "q_16743", "text": "Will get the names of the index colums of df, obtain their ranges from\n range_values dict and return a reindexed version of df with the given\n range values.\n\n :param df: pandas DataFrame\n\n :param range_values: dict or array-like\n Must contain for each index column of df an entry with all the values\n within the range of the column.\n\n :param fill_value: scalar or 'nearest', default 0\n Value to use for missing values. Defaults to 0, but can be any\n \"compatible\" value, e.g., NaN.\n The 'nearest' mode will fill the missing value with the nearest value in\n the column.\n\n :param fill_method: {'backfill', 'bfill', 'pad', 'ffill', None}, default None\n Method to use for filling holes in reindexed DataFrame\n 'pad' / 'ffill': propagate last valid observation forward to next valid\n 'backfill' / 'bfill': use NEXT valid observation to fill gap\n\n :return: pandas Dataframe and used column ranges\n reindexed DataFrame and dict with index column ranges"}
{"_id": "q_16744", "text": "Store object in HDFStore\n\n Parameters\n ----------\n key : str\n\n value : {Series, DataFrame, Panel, Numpy ndarray}\n\n format : 'fixed(f)|table(t)', default is 'fixed'\n fixed(f) : Fixed format\n Fast writing/reading. Not-appendable, nor searchable\n\n table(t) : Table format\n Write as a PyTables Table structure which may perform worse but allow more flexible operations\n like searching/selecting subsets of the data\n\n append : boolean, default False\n This will force Table format, append the input data to the\n existing.\n\n encoding : default None, provide an encoding for strings"}
{"_id": "q_16745", "text": "Returns a PyTables HDF Array from df in the shape given by its index columns range values.\n\n :param key: string object\n\n :param df: pandas DataFrame\n\n :param range_values: dict or array-like\n Must contain for each index column of df an entry with all the values\n within the range of the column.\n\n :param loop_multiindex: bool\n Will loop through the first index in a multiindex dataframe, extract a\n dataframe only for one value, complete and fill the missing values and\n store in the HDF.\n If this is True, it will not use unstack.\n This is as fast as unstacking.\n\n :param unstack: bool\n Unstack means that this will use the first index name to\n unfold the DataFrame, and will create a group with as many datasets\n as valus has this first index.\n Use this if you think the filled dataframe won't fit in your RAM memory.\n If set to False, this will transform the dataframe in memory first\n and only then save it.\n\n :param fill_value: scalar or 'nearest', default 0\n Value to use for missing values. Defaults to 0, but can be any\n \"compatible\" value, e.g., NaN.\n The 'nearest' mode will fill the missing value with the nearest value in\n the column.\n\n :param fill_method: {'backfill', 'bfill', 'pad', 'ffill', None}, default None\n Method to use for filling holes in reindexed DataFrame\n 'pad' / 'ffill': propagate last valid observation forward to next valid\n 'backfill' / 'bfill': use NEXT valid observation to fill gap\n\n :return: PyTables data node"}
{"_id": "q_16746", "text": "First set_mask and the get_masked_data.\n\n Parameters\n ----------\n mask_img: nifti-like image, NeuroImage or str\n 3D mask array: True where a voxel should be used.\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n Returns\n -------\n The masked data deepcopied"}
{"_id": "q_16747", "text": "Return the data masked with self.mask\n\n Parameters\n ----------\n data: np.ndarray\n\n Returns\n -------\n masked np.ndarray\n\n Raises\n ------\n ValueError if the data and mask dimensions are not compatible.\n Other exceptions related to numpy computations."}
{"_id": "q_16748", "text": "Set self._smooth_fwhm and then smooths the data.\n See boyle.nifti.smooth.smooth_imgs.\n\n Returns\n -------\n the smoothed data deepcopied."}
{"_id": "q_16749", "text": "Save this object instance in outpath.\n\n Parameters\n ----------\n outpath: str\n Output file path"}
{"_id": "q_16750", "text": "Setup logging configuration."}
{"_id": "q_16751", "text": "Return a 3D volume from a 4D nifti image file\n\n Parameters\n ----------\n filename: str\n Path to the 4D .mhd file\n\n vol_idx: int\n Index of the 3D volume to be extracted from the 4D volume.\n\n Returns\n -------\n vol, hdr\n The data array and the new 3D image header."}
{"_id": "q_16752", "text": "Get user accounts Invenio 1."}
{"_id": "q_16753", "text": "Get user accounts from Invenio 2."}
{"_id": "q_16754", "text": "Dump the users as a list of dictionaries.\n\n :param u: User to be dumped.\n :type u: `invenio.modules.accounts.models.User [Invenio2.x]` or namedtuple.\n :returns: User serialized to dictionary.\n :rtype: dict"}
{"_id": "q_16755", "text": "Load the raw JSON dump of the Deposition.\n\n Uses Record API in order to bypass all Deposit-specific initialization,\n which are to be done after the final stage of deposit migration.\n\n :param data: Dictionary containing deposition data.\n :type data: dict"}
{"_id": "q_16756", "text": "Create the deposit record metadata and persistent identifier.\n\n :param data: Raw JSON dump of the deposit.\n :type data: dict\n :returns: A deposit object and its pid\n :rtype: (`invenio_records.api.Record`,\n `invenio_pidstore.models.PersistentIdentifier`)"}
{"_id": "q_16757", "text": "Saves a Nifti1Image into an HDF5 file.\n\n Parameters\n ----------\n file_path: string\n Output HDF5 file path\n\n spatial_img: nibabel SpatialImage\n Image to be saved\n\n h5path: string\n HDF5 group path where the image data will be saved.\n Datasets will be created inside the given group path:\n 'data', 'extra', 'affine', the header information will\n be set as attributes of the 'data' dataset.\n Default: '/img'\n\n append: bool\n True if you don't want to erase the content of the file\n if it already exists, False otherwise.\n\n Note\n ----\n HDF5 open modes\n >>> 'r' Readonly, file must exist\n >>> 'r+' Read/write, file must exist\n >>> 'w' Create file, truncate if exists\n >>> 'w-' Create file, fail if exists\n >>> 'a' Read/write if exists, create otherwise (default)"}
{"_id": "q_16758", "text": "Transforms an H5py Attributes set to a dict.\n Converts unicode string keys into standard strings\n and each value into a numpy array.\n\n Parameters\n ----------\n h5attrs: H5py Attributes\n\n Returns\n --------\n dict"}
{"_id": "q_16759", "text": "Returns in a list all images found under h5group.\n\n Parameters\n ----------\n h5group: h5py.Group\n HDF group\n\n Returns\n -------\n list of nifti1Image"}
{"_id": "q_16760", "text": "Query Schema information for existing reliable dictionaries.\n\n Query Schema information existing reliable dictionaries for given application and service.\n\n :param application_name: Name of the application.\n :type application_name: str\n :param service_name: Name of the service.\n :type service_name: str\n :param dictionary: Name of the reliable dictionary.\n :type dictionary: str\n :param output_file: Optional file to save the schema."}
{"_id": "q_16761", "text": "Verify arguments for select command"}
{"_id": "q_16762", "text": "Use openpyxl to read an Excel file."}
{"_id": "q_16763", "text": "Return the expanded absolute path of `xl_path` if\n if exists and 'xlrd' or 'openpyxl' depending on\n which module should be used for the Excel file in `xl_path`.\n\n Parameters\n ----------\n xl_path: str\n Path to an Excel file\n\n Returns\n -------\n xl_path: str\n User expanded and absolute path to `xl_path`\n\n module: str\n The name of the module you should use to process the\n Excel file.\n Choices: 'xlrd', 'pyopenxl'\n\n Raises\n ------\n IOError\n If the file does not exist\n\n RuntimError\n If a suitable reader for xl_path is not found"}
{"_id": "q_16764", "text": "Return the workbook from the Excel file in `xl_path`."}
{"_id": "q_16765", "text": "Return a list with the name of the sheets in\n the Excel file in `xl_path`."}
{"_id": "q_16766", "text": "Load a single record into the database.\n\n :param record_dump: Record dump.\n :type record_dump: dict\n :param source_type: 'json' or 'marcxml'\n :param eager: If ``True`` execute the task synchronously."}
{"_id": "q_16767", "text": "Load records migration dump."}
{"_id": "q_16768", "text": "Load deposit.\n\n Usage:\n invenio dumps loaddeposit ~/data/deposit_dump_*.json\n invenio dumps loaddeposit -d 12345 ~/data/deposit_dump_*.json"}
{"_id": "q_16769", "text": "Convert to string all values in `data`.\n\n Parameters\n ----------\n data: dict[str]->object\n\n Returns\n -------\n string_data: dict[str]->str"}
{"_id": "q_16770", "text": "Search for items in `table` that have the same field sub-set values as in `sample`.\n Expecting it to be unique, otherwise will raise an exception.\n\n Parameters\n ----------\n table: tinydb.table\n sample: dict\n Sample data\n\n Returns\n -------\n search_result: tinydb.database.Element\n Unique item result of the search.\n\n Raises\n ------\n KeyError:\n If the search returns for more than one entry."}
{"_id": "q_16771", "text": "Search in `table` an item with the value of the `unique_fields` in the `sample` sample.\n Check if the the obtained result is unique. If nothing is found will return an empty list,\n if there is more than one item found, will raise an IndexError.\n\n Parameters\n ----------\n table: tinydb.table\n\n sample: dict\n Sample data\n\n unique_fields: list of str\n Name of fields (keys) from `data` which are going to be used to build\n a sample to look for exactly the same values in the database.\n If None, will use every key in `data`.\n\n Returns\n -------\n eid: int\n Id of the object found with same `unique_fields`.\n None if none is found.\n\n Raises\n ------\n MoreThanOneItemError\n If more than one example is found."}
{"_id": "q_16772", "text": "Create a TinyDB query that looks for items that have each field in `sample` with a value\n compared with the correspondent operation in `operators`.\n\n Parameters\n ----------\n sample: dict\n The sample data\n\n operators: str or list of str\n A list of comparison operations for each field value in `sample`.\n If this is a str, will use the same operator for all `sample` fields.\n If you want different operators for each field, remember to use an OrderedDict for `sample`.\n Check TinyDB.Query class for possible choices.\n\n Returns\n -------\n query: tinydb.database.Query"}
{"_id": "q_16773", "text": "Create a tinyDB Query object that looks for items that confirms the correspondent operator\n from `operators` for each `field_names` field values from `data`.\n\n Parameters\n ----------\n data: dict\n The data sample\n\n field_names: str or list of str\n The name of the fields in `data` that will be used for the query.\n\n operators: str or list of str\n A list of comparison operations for each field value in `field_names`.\n If this is a str, will use the same operator for all `field_names`.\n If you want different operators for each field, remember to use an OrderedDict for `data`.\n Check TinyDB.Query class for possible choices.\n\n Returns\n -------\n query: tinydb.database.Query"}
{"_id": "q_16774", "text": "Search in `table` an item with the value of the `unique_fields` in the `data` sample.\n Check if the the obtained result is unique. If nothing is found will return an empty list,\n if there is more than one item found, will raise an IndexError.\n\n Parameters\n ----------\n table_name: str\n\n sample: dict\n Sample data\n\n unique_fields: list of str\n Name of fields (keys) from `data` which are going to be used to build\n a sample to look for exactly the same values in the database.\n If None, will use every key in `data`.\n\n Returns\n -------\n eid: int\n Id of the object found with same `unique_fields`.\n None if none is found.\n\n Raises\n ------\n MoreThanOneItemError\n If more than one example is found."}
{"_id": "q_16775", "text": "Update the unique matching element to have a given set of fields.\n\n Parameters\n ----------\n table_name: str\n\n fields: dict or function[dict -> None]\n new data/values to insert into the unique element\n or a method that will update the elements.\n\n data: dict\n Sample data for query\n\n cond: tinydb.Query\n which elements to update\n\n unique_fields: list of str\n\n raise_if_not_found: bool\n Will raise an exception if the element is not found for update.\n\n Returns\n -------\n eid: int\n The eid of the updated element if found, None otherwise."}
{"_id": "q_16776", "text": "Return the number of items that match the `sample` field values\n in table `table_name`.\n Check function search_sample for more details."}
{"_id": "q_16777", "text": "Check for get_data and get_affine method in an object\n\n Parameters\n ----------\n obj: any object\n Tested object\n\n Returns\n -------\n is_img: boolean\n True if get_data and get_affine methods are present and callable,\n False otherwise."}
{"_id": "q_16778", "text": "Return true if one_img and another_img have the same shape.\n False otherwise.\n If both are nibabel.Nifti1Image will also check for affine matrices.\n\n Parameters\n ----------\n one_img: nibabel.Nifti1Image or np.ndarray\n\n another_img: nibabel.Nifti1Image or np.ndarray\n\n only_check_3d: bool\n If True will check only the 3D part of the affine matrices when they have more dimensions.\n\n Raises\n ------\n NiftiFilesNotCompatible"}
{"_id": "q_16779", "text": "Returns true if array1 and array2 have the same shapes, false\n otherwise.\n\n Parameters\n ----------\n array1: numpy.ndarray\n\n array2: numpy.ndarray\n\n nd_to_check: int\n Number of the dimensions to check, i.e., if == 3 then will check only the 3 first numbers of array.shape.\n Returns\n -------\n bool"}
{"_id": "q_16780", "text": "Run as sample test server."}
{"_id": "q_16781", "text": "Dump current profiler statistics into a file."}
{"_id": "q_16782", "text": "Clear profiler statistics."}
{"_id": "q_16783", "text": "Create a list of regex matches that result from the match_regex\n of all file names within wd.\n The list of files will have wd as path prefix.\n\n @param regex: string\n @param wd: string\n working directory\n @return:"}
{"_id": "q_16784", "text": "Returns absolute paths of folders that match the regex within folder_path and\n all its children folders.\n\n Note: The regex matching is done using the match function\n of the re module.\n\n Parameters\n ----------\n folder_path: string\n\n regex: string\n\n Returns\n -------\n A list of strings."}
{"_id": "q_16785", "text": "Returns absolute paths of files that match the regex within file_dir and\n all its children folders.\n\n Note: The regex matching is done using the search function\n of the re module.\n\n Parameters\n ----------\n folder_path: string\n\n regex: string\n\n Returns\n -------\n A list of strings."}
{"_id": "q_16786", "text": "Returns absolute paths of files that match the regexs within folder_path and\n all its children folders.\n\n This is an iterator function that will use yield to return each set of\n file_paths in one iteration.\n\n Will only return value if all the strings in regex match a file name.\n\n Note: The regex matching is done using the search function\n of the re module.\n\n Parameters\n ----------\n folder_path: string\n\n regex: strings\n\n Returns\n -------\n A list of strings."}
{"_id": "q_16787", "text": "Generator that loops through all absolute paths of the files within folder\n\n Parameters\n ----------\n folder: str\n Root folder start point for recursive search.\n\n Yields\n ------\n fpath: str\n Absolute path of one file in the folders"}
{"_id": "q_16788", "text": "Load user from data dump.\n\n NOTE: This task takes into account the possible duplication of emails and\n usernames, hence it should be called synchronously.\n In such case of collision it will raise UserEmailExistsError or\n UserUsernameExistsError, if email or username are already existing in\n the database. Caller of this task should take care to to resolve those\n collisions beforehand or after catching an exception.\n\n :param data: Dictionary containing user data.\n :type data: dict"}
{"_id": "q_16789", "text": "Calculate image translations in parallel.\n\n Parameters\n ----------\n images : ImageCollection\n Images as instance of ImageCollection.\n\n Returns\n -------\n 2d array, (ty, tx)\n ty and tx is translation to previous image in respectively\n x or y direction."}
{"_id": "q_16790", "text": "Adds a dimensions with ones to array."}
{"_id": "q_16791", "text": "Append key-value pairs to msg, for display.\n\n Parameters\n ----------\n msg: string\n arbitrary message\n kwargs: dict\n arbitrary dictionary\n\n Returns\n -------\n updated_msg: string\n msg, with \"key: value\" appended. Only string values are appended.\n\n Example\n -------\n >>> compose_err_msg('Error message with arguments...', arg_num=123, \\\n arg_str='filename.nii', arg_bool=True)\n 'Error message with arguments...\\\\narg_str: filename.nii'\n >>>"}
{"_id": "q_16792", "text": "Copy the DICOM file groups to folder_path. Each group will be copied into\n a subfolder with named given by groupby_field.\n\n Parameters\n ----------\n dicom_groups: boyle.dicom.sets.DicomFileSet\n\n folder_path: str\n Path to where copy the DICOM files.\n\n groupby_field_name: str\n DICOM field name. Will get the value of this field to name the group\n folder."}
{"_id": "q_16793", "text": "Calculates the DicomFileDistance between all files in dicom_files, using an\n weighted Levenshtein measure between all field names in field_weights and\n their corresponding weights.\n\n Parameters\n ----------\n dicom_files: iterable of str\n Dicom file paths\n\n field_weights: dict of str to float\n A dict with header field names to float scalar values, that\n indicate a distance measure ratio for the levenshtein distance\n averaging of all the header field names in it. e.g., {'PatientID': 1}\n\n dist_method_cls: DicomFileDistance class\n Distance method object to compare the files.\n If None, the default DicomFileDistance method using Levenshtein\n distance between the field_wieghts will be used.\n\n kwargs: DicomFileDistance instantiation named arguments\n Apart from the field_weitghts argument.\n\n Returns\n -------\n file_dists: np.ndarray or scipy.sparse.lil_matrix of shape NxN\n Levenshtein distances between each of the N items in dicom_files."}
{"_id": "q_16794", "text": "Check the field values in self.dcmf1 and self.dcmf2 and returns True\n if all the field values are the same, False otherwise.\n\n Returns\n -------\n bool"}
{"_id": "q_16795", "text": "Updates the status of the file clusters comparing the cluster\n key files with a levenshtein weighted measure using either the\n header_fields or self.header_fields.\n\n Parameters\n ----------\n field_weights: dict of strings with floats\n A dict with header field names to float scalar values, that indicate a distance measure\n ratio for the levenshtein distance averaging of all the header field names in it.\n e.g., {'PatientID': 1}"}
{"_id": "q_16796", "text": "Returns a list of 2-tuples with pairs of dicom groups that\n are in the same folder within given depth.\n\n Parameters\n ----------\n folder_depth: int\n Path depth to check for folder equality.\n\n Returns\n -------\n list of tuples of str"}
{"_id": "q_16797", "text": "Extend the lists within the DICOM groups dictionary.\n The indices will indicate which list have to be extended by which\n other list.\n\n Parameters\n ----------\n indices: list or tuple of 2 iterables of int, bot having the same len\n The indices of the lists that have to be merged, both iterables\n items will be read pair by pair, the first is the index to the\n list that will be extended with the list of the second index.\n The indices can be constructed with Numpy e.g.,\n indices = np.where(square_matrix)"}
{"_id": "q_16798", "text": "Copy the file groups to folder_path. Each group will be copied into\n a subfolder with named given by groupby_field.\n\n Parameters\n ----------\n folder_path: str\n Path to where copy the DICOM files.\n\n groupby_field_name: str\n DICOM field name. Will get the value of this field to name the group\n folder. If empty or None will use the basename of the group key file."}
{"_id": "q_16799", "text": "Set a config by name to a value."}
{"_id": "q_16800", "text": "Set AAD token cache."}
{"_id": "q_16801", "text": "Set certificate usage paths"}
{"_id": "q_16802", "text": "Returns a list with of the objects in olist that have a fieldname valued as fieldval\n\n Parameters\n ----------\n olist: list of objects\n\n fieldname: string\n\n fieldval: anything\n\n Returns\n -------\n list of objets"}
{"_id": "q_16803", "text": "Return index of the nth match found of pattern in strings\n\n Parameters\n ----------\n strings: list of str\n List of strings\n\n pattern: str\n Pattern to be matched\n\n nth: int\n Number of times the match must happen to return the item index.\n\n lookup_func: callable\n Function to match each item in strings to the pattern, e.g., re.match or re.search.\n\n Returns\n -------\n index: int\n Index of the nth item that matches the pattern.\n If there are no n matches will return -1"}
{"_id": "q_16804", "text": "Generate a dcm2nii configuration file that disable the interactive\n mode."}
{"_id": "q_16805", "text": "Call MRICron's `dcm2nii` to convert the DICOM files inside `input_dir`\n to Nifti and save the Nifti file in `output_dir` with a `filename` prefix.\n\n Parameters\n ----------\n input_dir: str\n Path to the folder that contains the DICOM files\n\n output_dir: str\n Path to the folder where to save the NifTI file\n\n filename: str\n Output file basename\n\n Returns\n -------\n filepaths: list of str\n List of file paths created in `output_dir`."}
{"_id": "q_16806", "text": "Return a subset of `filepaths`. Keep only the files that have a basename longer than the\n others with same suffix.\n This works based on that dcm2nii appends a preffix character for each processing\n step it does automatically in the DICOM to NifTI conversion.\n\n Parameters\n ----------\n filepaths: iterable of str\n\n Returns\n -------\n cleaned_paths: iterable of str"}
{"_id": "q_16807", "text": "Extend the within a dict of lists. The indices will indicate which\n list have to be extended by which other list.\n\n Parameters\n ----------\n adict: OrderedDict\n An ordered dictionary of lists\n\n indices: list or tuple of 2 iterables of int, bot having the same length\n The indices of the lists that have to be merged, both iterables items\n will be read pair by pair, the first is the index to the list that\n will be extended with the list of the second index.\n The indices can be constructed with Numpy e.g.,\n indices = np.where(square_matrix)\n\n pop_later: bool\n If True will oop out the lists that are indicated in the second\n list of indices.\n\n copy: bool\n If True will perform a deep copy of the input adict before\n modifying it, hence not changing the original input.\n\n Returns\n -------\n Dictionary of lists\n\n Raises\n ------\n IndexError\n If the indices are out of range"}
{"_id": "q_16808", "text": "Return a dict of lists from a list of dicts with the same keys.\n For each dict in list_of_dicts with look for the values of the\n given keys and append it to the output dict.\n\n Parameters\n ----------\n list_of_dicts: list of dicts\n\n keys: list of str\n List of keys to create in the output dict\n If None will use all keys in the first element of list_of_dicts\n Returns\n -------\n DefaultOrderedDict of lists"}
{"_id": "q_16809", "text": "Imports the contents of filepath as a Python module.\n\n :param filepath: string\n\n :param mod_name: string\n Name of the module when imported\n\n :return: module\n Imported module"}
{"_id": "q_16810", "text": "Copies the files in the built file tree map\n to despath.\n\n :param configfile: string\n Path to the FileTreeMap config file\n\n :param destpath: string\n Path to the files destination\n\n :param overwrite: bool\n Overwrite files if they already exist.\n\n :param sub_node: string\n Tree map configuration sub path.\n Will copy only the contents within this sub-node"}
{"_id": "q_16811", "text": "Transforms the input .sav SPSS file into other format.\n If you don't specify an outputfile, it will use the\n inputfile and change its extension to .csv"}
{"_id": "q_16812", "text": "Load a Nifti mask volume.\n\n Parameters\n ----------\n image: img-like object or boyle.nifti.NeuroImage or str\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n allow_empty: boolean, optional\n Allow loading an empty mask (full of 0 values)\n\n Returns\n -------\n nibabel.Nifti1Image with boolean data."}
{"_id": "q_16813", "text": "Load a Nifti mask volume and return its data matrix as boolean and affine.\n\n Parameters\n ----------\n image: img-like object or boyle.nifti.NeuroImage or str\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n allow_empty: boolean, optional\n Allow loading an empty mask (full of 0 values)\n\n Returns\n -------\n numpy.ndarray with dtype==bool, numpy.ndarray of affine transformation"}
{"_id": "q_16814", "text": "Creates a binarised mask with the union of the files in filelist.\n\n Parameters\n ----------\n filelist: list of img-like object or boyle.nifti.NeuroImage or str\n List of paths to the volume files containing the ROIs.\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n Returns\n -------\n ndarray of bools\n Mask volume\n\n Raises\n ------\n ValueError"}
{"_id": "q_16815", "text": "Read a Nifti file nii_file and a mask Nifti file.\n Returns the voxels in nii_file that are within the mask, the mask indices\n and the mask shape.\n\n Parameters\n ----------\n image: img-like object or boyle.nifti.NeuroImage or str\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n mask_img: img-like object or boyle.nifti.NeuroImage or str\n 3D mask array: True where a voxel should be used.\n See img description.\n\n Returns\n -------\n vol[mask_indices], mask_indices\n\n Note\n ----\n nii_file and mask_file must have the same shape.\n\n Raises\n ------\n NiftiFilesNotCompatible, ValueError"}
{"_id": "q_16816", "text": "Read a Nifti file nii_file and a mask Nifti file.\n Extract the signals in nii_file that are within the mask, the mask indices\n and the mask shape.\n\n Parameters\n ----------\n image: img-like object or boyle.nifti.NeuroImage or str\n Can either be:\n - a file path to a Nifti image\n - any object with get_data() and get_affine() methods, e.g., nibabel.Nifti1Image.\n If niimg is a string, consider it as a path to Nifti image and\n call nibabel.load on it. If it is an object, check if get_data()\n and get_affine() methods are present, raise TypeError otherwise.\n\n mask_img: img-like object or boyle.nifti.NeuroImage or str\n 3D mask array: True where a voxel should be used.\n See img description.\n\n smooth_mm: float #TBD\n (optional) The size in mm of the FWHM Gaussian kernel to smooth the signal.\n If True, remove_nans is True.\n\n remove_nans: bool #TBD\n If remove_nans is True (default), the non-finite values (NaNs and\n infs) found in the images will be replaced by zeros.\n\n Returns\n -------\n session_series, mask_data\n\n session_series: numpy.ndarray\n 2D array of series with shape (voxel number, image number)\n\n Note\n ----\n nii_file and mask_file must have the same shape.\n\n Raises\n ------\n FileNotFound, NiftiFilesNotCompatible"}
{"_id": "q_16817", "text": "Transform a given vector to a volume. This is a reshape function for\n 4D flattened masked matrices where the second dimension of the matrix\n corresponds to the original 4th dimension.\n\n Parameters\n ----------\n arr: numpy.array\n 2D numpy.array\n\n mask: numpy.ndarray\n Mask image. Must have 3 dimensions, bool dtype.\n\n dtype: return type\n If None, will get the type from vector\n\n Returns\n -------\n data: numpy.ndarray\n Unmasked data.\n Shape: (mask.shape[0], mask.shape[1], mask.shape[2], X.shape[1])"}
{"_id": "q_16818", "text": "From the list of absolute paths to nifti files, creates a Numpy array\n with the masked data.\n\n Parameters\n ----------\n img_filelist: list of str\n List of absolute file paths to nifti files. All nifti files must have\n the same shape.\n\n mask_file: str\n Path to a Nifti mask file.\n Should be the same shape as the files in nii_filelist.\n\n outdtype: dtype\n Type of the elements of the array, if not set will obtain the dtype from\n the first nifti file.\n\n Returns\n -------\n outmat:\n Numpy array with shape N x prod(vol.shape) containing the N files as flat vectors.\n\n mask_indices:\n Tuple with the 3D spatial indices of the masking voxels, for reshaping\n with vol_shape and remapping.\n\n vol_shape:\n Tuple with shape of the volumes, for reshaping."}
{"_id": "q_16819", "text": "Create record based on dump."}
{"_id": "q_16820", "text": "Update an existing record."}
{"_id": "q_16821", "text": "Create persistent identifiers."}
{"_id": "q_16822", "text": "Create files.\n\n This method is currently limited to a single bucket per record."}
{"_id": "q_16823", "text": "Delete the bucket."}
{"_id": "q_16824", "text": "Filter persistent identifiers."}
{"_id": "q_16825", "text": "Prepare data."}
{"_id": "q_16826", "text": "Get files from data dump."}
{"_id": "q_16827", "text": "Load community from data dump.\n\n :param data: Dictionary containing community data.\n :type data: dict\n :param logos_dir: Path to a local directory with community logos.\n :type logos_dir: str"}
{"_id": "q_16828", "text": "Load community featuring from data dump.\n\n :param data: Dictionary containing community featuring data.\n :type data: dict"}
{"_id": "q_16829", "text": "Create a client for Service Fabric APIs."}
{"_id": "q_16830", "text": "Check data in Invenio legacy."}
{"_id": "q_16831", "text": "Pipeable aggregation method.\n \n Takes either \n - a dataframe and a tuple of arguments required for aggregation,\n - a tuple of arguments if a dataframe has already been piped into.\n In any case one argument has to be a class that extends callable.\n\n :Example:\n\n aggregate(dataframe, Function, \"new_col_name\", \"old_col_name\")\n\n :Example:\n\n dataframe >> aggregate(Function, \"new_col_name\", \"old_col_name\")\n\n :param args: tuple of arguments\n :type args: tuple\n :return: returns a dataframe object\n :rtype: DataFrame"}
{"_id": "q_16832", "text": "Pipeable subsetting method.\n\n Takes either\n - a dataframe and a tuple of arguments required for subsetting,\n - a tuple of arguments if a dataframe has already been piped into.\n\n :Example:\n \n subset(dataframe, \"column\")\n \n :Example:\n \n dataframe >> subset(\"column\")\n\n :param args: tuple of arguments\n :type args: tuple\n :return: returns a dataframe object\n :rtype: DataFrame"}
{"_id": "q_16833", "text": "Pipeable modification method \n \n Takes either \n - a dataframe and a tuple of arguments required for modification,\n - a tuple of arguments if a dataframe has already been piped into.\n In any case one argument has to be a class that extends callable.\n\n :Example:\n\n modify(dataframe, Function, \"new_col_name\", \"old_col_name\")\n \n :Example:\n\n dataframe >> modify(Function, \"new_col_name\", \"old_col_name\")\n\n :param args: tuple of arguments\n :type args: tuple\n :return: returns a dataframe object\n :rtype: DataFrame"}
{"_id": "q_16834", "text": "Deletes resources of this widget that require manual cleanup.\n \n Currently removes all actions, event handlers and the background.\n \n The background itself should automatically remove all vertex lists to avoid visual artifacts.\n \n Note that this method is currently experimental, as it seems to have a memory leak."}
{"_id": "q_16835", "text": "Simple vector helper function returning the length of a vector.\n \n ``v`` may be any vector, with any number of dimensions"}
{"_id": "q_16836", "text": "Normalizes the given vector.\n \n The vector given may have any number of dimensions."}
{"_id": "q_16837", "text": "Transforms the given texture coordinates using the internal texture coordinates.\n \n Currently, the dimensionality of the input texture coordinates must always be 2 and the output is 3-dimensional with the last coordinate always being zero.\n \n The given texture coordinates are fitted to the internal texture coordinates. Note that values higher than 1 or lower than 0 may result in unexpected visual glitches.\n \n The length of the given texture coordinates should be divisible by the dimensionality."}
{"_id": "q_16838", "text": "Helper method ensuring per-entity bone data has been properly initialized.\n \n Should be called at the start of every method accessing per-entity data.\n \n ``data`` is the entity to check in dictionary form."}
{"_id": "q_16839", "text": "Sets the length of this bone on the given entity.\n \n ``data`` is the entity to modify in dictionary form.\n \n ``blength`` is the new length of the bone."}
{"_id": "q_16840", "text": "Sets the parent of this bone for all entities.\n \n Note that this method must be called before many other methods to ensure internal state has been initialized.\n \n This method also registers this bone as a child of its parent."}
{"_id": "q_16841", "text": "Returns the point this bone pivots around on the given entity.\n \n This method works recursively by calling its parent and then adding its own offset.\n \n The resulting coordinate is relative to the entity, not the world."}
{"_id": "q_16842", "text": "Callback that is called to initialize this animation on a specific actor.\n \n Internally sets the ``_anidata`` key of the given dict ``data``\\ .\n \n ``jumptype`` is either ``jump`` or ``animate`` to define how to switch to this animation."}
{"_id": "q_16843", "text": "Sets the state required for this actor.\n \n Currently translates the matrix to the position of the actor."}
{"_id": "q_16844", "text": "Resets the state required for this actor to the default state.\n \n Currently resets the matrix to its previous translation."}
{"_id": "q_16845", "text": "Sets the state required for this vertex region.\n \n Currently binds and enables the texture of the material of the region."}
{"_id": "q_16846", "text": "Resets the state required for this actor to the default state.\n \n Currently only disables the target of the texture of the material, it may still be bound."}
{"_id": "q_16847", "text": "Ensures that the given ``obj`` has been initialized to be used with this model.\n \n If the object is found to not be initialized, it will be initialized."}
{"_id": "q_16848", "text": "Redraws the model of the given object.\n \n Note that currently this method probably won't change any data since all movement and animation is done through pyglet groups."}
{"_id": "q_16849", "text": "Actually draws the model of the given object to the render target.\n \n Note that if the batch used for this object already existed, drawing will be skipped as the batch should be drawn by the owner of it."}
{"_id": "q_16850", "text": "Sets the model this actor should use when drawing.\n \n This method also automatically initializes the new model and removes the old, if any."}
{"_id": "q_16851", "text": "write the collection of reports to the given path"}
{"_id": "q_16852", "text": "Escape a single character"}
{"_id": "q_16853", "text": "Escape a string so that it only contains characters in a safe set.\n\n Characters outside the safe list will be escaped with _%x_,\n where %x is the hex value of the character.\n\n If `allow_collisions` is True, occurrences of `escape_char`\n in the input will not be escaped.\n\n In this case, `unescape` cannot be used to reverse the transform\n because occurrences of the escape char in the resulting string are ambiguous.\n Only use this mode when:\n\n 1. collisions cannot occur or do not matter, and\n 2. unescape will never be called.\n\n .. versionadded: 1.0\n allow_collisions argument.\n Prior to 1.0, behavior was the same as allow_collisions=False (default)."}
{"_id": "q_16854", "text": "Determines whether this backend is allowed to send a notification to\n the given user and notice_type."}
{"_id": "q_16855", "text": "Copy the attributes from a source object to a destination object."}
{"_id": "q_16856", "text": "Returns DataFrameRow of the DataFrame given its index.\n\n :param idx: the index of the row in the DataFrame.\n :return: returns a DataFrameRow"}
{"_id": "q_16857", "text": "The notice settings view.\n\n Template: :template:`notification/notice_settings.html`\n\n Context:\n\n notice_types\n A list of all :model:`notification.NoticeType` objects.\n\n notice_settings\n A dictionary containing ``column_headers`` for each ``NOTICE_MEDIA``\n and ``rows`` containing a list of dictionaries: ``notice_type``, a\n :model:`notification.NoticeType` object and ``cells``, a list of\n tuples whose first value is suitable for use in forms and the second\n value is ``True`` or ``False`` depending on a ``request.POST``\n variable called ``form_label``, whose valid value is ``on``."}
{"_id": "q_16858", "text": "Re-draws the text by calculating its position.\n \n Currently, the text will always be centered on the position of the label."}
{"_id": "q_16859", "text": "Re-draws the label by calculating its position.\n \n Currently, the label will always be centered on the position of the label."}
{"_id": "q_16860", "text": "Deletes the widget by the given name.\n \n Note that this feature is currently experimental as there seems to be a memory leak with this method."}
{"_id": "q_16861", "text": "Query Wolfram Alpha and return a Result object"}
{"_id": "q_16862", "text": "Return list of all Pod objects in result"}
{"_id": "q_16863", "text": "Registers the motion and drag handlers.\n \n Note that because of the way pyglet treats mouse dragging, there is also an handler registered to the on_mouse_drag event."}
{"_id": "q_16864", "text": "Adds the main label of the dialog.\n \n This widget can be triggered by setting the label ``label_main`` to a string.\n \n This widget will be centered on the screen."}
{"_id": "q_16865", "text": "Adds an OK button to allow the user to exit the dialog.\n \n This widget can be triggered by setting the label ``label_ok`` to a string.\n \n This widget will be mostly centered on the screen, but below the main label\n by the double of its height."}
{"_id": "q_16866", "text": "Adds a confirm button to let the user confirm whatever action they were presented with.\n \n This widget can be triggered by setting the label ``label_confirm`` to a string.\n \n This widget will be positioned slightly below the main label and to the left\n of the cancel button."}
{"_id": "q_16867", "text": "Updates the progressbar by re-calculating the label.\n \n It is not required to manually call this method since setting any of the\n properties of this class will automatically trigger a re-calculation."}
{"_id": "q_16868", "text": "Find a node in the tree. If the node is not found it is added first and then returned.\n\n :param args: a tuple\n :return: returns the node"}
{"_id": "q_16869", "text": "Returns site-specific notification language for this user. Raises\n LanguageStoreNotAvailable if this site does not use translated\n notifications."}
{"_id": "q_16870", "text": "A basic interface around both queue and send_now. This honors a global\n flag NOTIFICATION_QUEUE_ALL that helps determine whether all calls should\n be queued or not. A per call ``queue`` or ``now`` keyword argument can be\n used to always override the default global behavior."}
{"_id": "q_16871", "text": "A helper function to write lammps pair potentials to string. Assumes that\n functions are vectorized.\n\n Parameters\n ----------\n func: function\n A function that will be evaluated for the force at each radius. Required to\n be numpy vectorizable.\n dfunc: function\n Optional. A function that will be evaluated for the energy at each\n radius. If not supplied the centered difference method will be\n used. Required to be numpy vectorizable.\n bounds: tuple, list\n Optional. specifies min and max radius to evaluate the\n potential. Default 1 length unit, 10 length unit.\n samples: int\n Number of points to evaluate potential. Default 1000. Note that\n a low number of sample points will reduce accuracy.\n tollerance: float\n Value used to centered difference differentiation.\n keyword: string\n Lammps keyword to use to pair potential. This keyword will need\n to be used in the lammps pair_coeff. Default ``PAIR``\n filename: string\n Optional. filename to write lammps table potential as. Default\n ``lammps.table`` it is highly recomended to change the value.\n\n A file for each unique pair potential is required."}
{"_id": "q_16872", "text": "Renders the world in 3d-mode.\n \n If you want to render custom terrain, you may override this method. Be careful that you still call the original method or else actors may not be rendered."}
{"_id": "q_16873", "text": "Renders the world."}
{"_id": "q_16874", "text": "Start a new step. returns a context manager which allows you to\n report an error"}
{"_id": "q_16875", "text": "Adds a new texture category with the given name.\n \n If the category already exists, it will be overridden."}
{"_id": "q_16876", "text": "Gets the model object by the given name.\n \n If it was loaded previously, a cached version will be returned.\n If it was not loaded, it will be loaded and inserted into the cache."}
{"_id": "q_16877", "text": "Gets the model data associated with the given name.\n \n If it was loaded, a cached copy will be returned.\n It it was not loaded, it will be loaded and cached."}
{"_id": "q_16878", "text": "Loads the model data of the given name.\n \n The model file must always be a .json file."}
{"_id": "q_16879", "text": "Draws the submenu and its background.\n \n Note that this leaves the OpenGL state set to 2d drawing and may modify the scissor settings."}
{"_id": "q_16880", "text": "Redraws the background and any child widgets."}
{"_id": "q_16881", "text": "AABB Collision checker that can be used for most axis-aligned collisions.\n \n Intended for use in widgets to check if the mouse is within the bounds of a particular widget."}
{"_id": "q_16882", "text": "Adds a new layer to the stack, optionally at the specified z-value.\n \n ``layer`` must be an instance of Layer or subclasses.\n \n ``z`` can be used to override the index of the layer in the stack. Defaults to ``-1`` for appending."}
{"_id": "q_16883", "text": "Checks if elements of set2 are in set1.\n\n :param set1: a set of values\n :param set2: a set of values\n :param warn: the error message that should be thrown\n when the sets are NOT disjoint\n :return: returns true no elements of set2 are in set1"}
{"_id": "q_16884", "text": "Checks if all elements from set2 are in set1.\n\n :param set1: a set of values\n :param set2: a set of values\n :param warn: the error message that should be thrown \n when the sets are not containd\n :return: returns true if all values of set2 are in set1"}
{"_id": "q_16885", "text": "Map a buffer region using this attribute as an accessor.\n\n The returned region can be modified as if the buffer was a contiguous\n array of this attribute (though it may actually be interleaved or\n otherwise non-contiguous).\n\n The returned region consists of a contiguous array of component\n data elements. For example, if this attribute uses 3 floats per\n vertex, and the `count` parameter is 4, the number of floats mapped\n will be ``3 * 4 = 12``.\n\n :Parameters:\n `buffer` : `AbstractMappable`\n The buffer to map.\n `start` : int\n Offset of the first vertex to map.\n `count` : int\n Number of vertices to map\n\n :rtype: `AbstractBufferRegion`"}
{"_id": "q_16886", "text": "Draw vertices in the domain.\n\n If `vertex_list` is not specified, all vertices in the domain are\n drawn. This is the most efficient way to render primitives.\n\n If `vertex_list` specifies a `VertexList`, only primitives in that\n list will be drawn.\n\n :Parameters:\n `mode` : int\n OpenGL drawing mode, e.g. ``GL_POINTS``, ``GL_LINES``, etc.\n `vertex_list` : `VertexList`\n Vertex list to draw, or ``None`` for all lists in this domain."}
{"_id": "q_16887", "text": "Helper method that calls all callbacks registered for the given action."}
{"_id": "q_16888", "text": "Registers a name to the registry.\n \n ``name`` is the name of the object and must be a string.\n \n ``force_id`` can be optionally set to override the automatic ID generation\n and force a specific ID.\n \n Note that using ``force_id`` is discouraged, since it may cause problems when ``reuse_ids`` is false."}
{"_id": "q_16889", "text": "Adds the given layer at the given Z Index.\n \n If ``z_index`` is not given, the Z Index specified by the layer will be used."}
{"_id": "q_16890", "text": "Draws all layers of this LayeredWidget.\n \n This should normally be unneccessary, since it is recommended that layers use Vertex Lists instead of OpenGL Immediate Mode."}
{"_id": "q_16891", "text": "Deletes all layers within this LayeredWidget before deleting itself.\n \n Recommended to call if you are removing the widget, but not yet exiting the interpreter."}
{"_id": "q_16892", "text": "Property to be used for setting and getting the border of the layer.\n \n Note that setting this property causes an immediate redraw."}
{"_id": "q_16893", "text": "Property to be used for setting and getting the offset of the layer.\n \n Note that setting this property causes an immediate redraw."}
{"_id": "q_16894", "text": "Serialize object back to XML string.\n\n Returns:\n str: String which should be same as original input, if everything\\\n works as expected."}
{"_id": "q_16895", "text": "Parse MARC XML document to dicts, which are contained in\n self.controlfields and self.datafields.\n\n Args:\n xml (str or HTMLElement): input data\n\n Also detect if this is oai marc format or not (see elf.oai_marc)."}
{"_id": "q_16896", "text": "Parse control fields.\n\n Args:\n fields (list): list of HTMLElements\n tag_id (str): parameter name, which holds the information, about\n field name this is normally \"tag\", but in case of\n oai_marc \"id\"."}
{"_id": "q_16897", "text": "Parse data fields.\n\n Args:\n fields (list): of HTMLElements\n tag_id (str): parameter name, which holds the information, about\n field name this is normally \"tag\", but in case of\n oai_marc \"id\"\n sub_id (str): id of parameter, which holds informations about\n subfield name this is normally \"code\" but in case of\n oai_marc \"label\""}
{"_id": "q_16898", "text": "Return content of given `subfield` in `datafield`.\n\n Args:\n datafield (str): Section name (for example \"001\", \"100\", \"700\").\n subfield (str): Subfield name (for example \"a\", \"1\", etc..).\n i1 (str, default None): Optional i1/ind1 parameter value, which\n will be used for search.\n i2 (str, default None): Optional i2/ind2 parameter value, which\n will be used for search.\n exception (bool): If ``True``, :exc:`~exceptions.KeyError` is\n raised when method couldn't found given `datafield` /\n `subfield`. If ``False``, blank array ``[]`` is returned.\n\n Returns:\n list: of :class:`.MARCSubrecord`.\n\n Raises:\n KeyError: If the subfield or datafield couldn't be found.\n\n Note:\n MARCSubrecord is practically same thing as string, but has defined\n :meth:`.MARCSubrecord.i1` and :attr:`.MARCSubrecord.i2`\n methods.\n\n You may need to be able to get this, because MARC XML depends on\n i/ind parameters from time to time (names of authors for example)."}
{"_id": "q_16899", "text": "Connectivity builder using Numba for speed boost."}
{"_id": "q_16900", "text": "Add the fields into the list of fields."}
{"_id": "q_16901", "text": "Returns the dimension of the embedded space of each element."}
{"_id": "q_16902", "text": "Returns a dataframe containing volume and centroids of all the elements."}
{"_id": "q_16903", "text": "Returns the internal angles of all elements and the associated statistics"}
{"_id": "q_16904", "text": "Returns the aspect ratio of all elements."}
{"_id": "q_16905", "text": "Returns mesh quality and geometric stats."}
{"_id": "q_16906", "text": "Makes a node set from an element set."}
{"_id": "q_16907", "text": "Creates elements sets corresponding to a surface."}
{"_id": "q_16908", "text": "Returns metadata as a dataframe."}
{"_id": "q_16909", "text": "Set the given param for each of the DOFs for a joint."}
{"_id": "q_16910", "text": "Given an angle and an axis, create a quaternion."}
{"_id": "q_16911", "text": "Given a set of bodies, compute their center of mass in world coordinates."}
{"_id": "q_16912", "text": "Set the state of this body.\n\n Parameters\n ----------\n state : BodyState tuple\n The desired state of the body."}
{"_id": "q_16913", "text": "Convert a body-relative offset to world coordinates.\n\n Parameters\n ----------\n position : 3-tuple of float\n A tuple giving body-relative offsets.\n\n Returns\n -------\n position : 3-tuple of float\n A tuple giving the world coordinates of the given offset."}
{"_id": "q_16914", "text": "Convert a point in world coordinates to a body-relative offset.\n\n Parameters\n ----------\n position : 3-tuple of float\n A world coordinates position.\n\n Returns\n -------\n offset : 3-tuple of float\n A tuple giving the body-relative offset of the given position."}
{"_id": "q_16915", "text": "Convert a relative body offset to world coordinates.\n\n Parameters\n ----------\n offset : 3-tuple of float\n The offset of the desired point, given as a relative fraction of the\n size of this body. For example, offset (0, 0, 0) is the center of\n the body, while (0.5, -0.2, 0.1) describes a point halfway from the\n center towards the maximum x-extent of the body, 20% of the way from\n the center towards the minimum y-extent, and 10% of the way from the\n center towards the maximum z-extent.\n\n Returns\n -------\n position : 3-tuple of float\n A position in world coordinates of the given body offset."}
{"_id": "q_16916", "text": "Add a force to this body.\n\n Parameters\n ----------\n force : 3-tuple of float\n A vector giving the forces along each world or body coordinate axis.\n relative : bool, optional\n If False, the force values are assumed to be given in the world\n coordinate frame. If True, they are assumed to be given in the\n body-relative coordinate frame. Defaults to False.\n position : 3-tuple of float, optional\n If given, apply the force at this location in world coordinates.\n Defaults to the current position of the body.\n relative_position : 3-tuple of float, optional\n If given, apply the force at this relative location on the body. If\n given, this method ignores the ``position`` parameter."}
{"_id": "q_16917", "text": "Connect this body to another one using a joint.\n\n This method creates a joint to fasten this body to the other one. See\n :func:`World.join`.\n\n Parameters\n ----------\n joint : str\n The type of joint to use when connecting these bodies.\n other_body : :class:`Body` or str, optional\n The other body to join with this one. If not given, connects this\n body to the world."}
{"_id": "q_16918", "text": "Move another body next to this one and join them together.\n\n This method will move the ``other_body`` so that the anchor points for\n the joint coincide. It then creates a joint to fasten the two bodies\n together. See :func:`World.move_next_to` and :func:`World.join`.\n\n Parameters\n ----------\n joint : str\n The type of joint to use when connecting these bodies.\n other_body : :class:`Body` or str\n The other body to join with this one.\n offset : 3-tuple of float, optional\n The body-relative offset where the anchor for the joint should be\n placed. Defaults to (0, 0, 0). See :func:`World.move_next_to` for a\n description of how offsets are specified.\n other_offset : 3-tuple of float, optional\n The offset on the second body where the joint anchor should be\n placed. Defaults to (0, 0, 0). Like ``offset``, this is given as an\n offset relative to the size and shape of ``other_body``."}
{"_id": "q_16919", "text": "List of positions for linear degrees of freedom."}
{"_id": "q_16920", "text": "List of position rates for linear degrees of freedom."}
{"_id": "q_16921", "text": "List of angles for rotational degrees of freedom."}
{"_id": "q_16922", "text": "List of angle rates for rotational degrees of freedom."}
{"_id": "q_16923", "text": "List of axes for this object's degrees of freedom."}
{"_id": "q_16924", "text": "Set the lo stop values for this object's degrees of freedom.\n\n Parameters\n ----------\n lo_stops : float or sequence of float\n A lo stop value to set on all degrees of freedom, or a list\n containing one such value for each degree of freedom. For rotational\n degrees of freedom, these values must be in radians."}
{"_id": "q_16925", "text": "Set the hi stop values for this object's degrees of freedom.\n\n Parameters\n ----------\n hi_stops : float or sequence of float\n A hi stop value to set on all degrees of freedom, or a list\n containing one such value for each degree of freedom. For rotational\n degrees of freedom, these values must be in radians."}
{"_id": "q_16926", "text": "Set the target velocities for this object's degrees of freedom.\n\n Parameters\n ----------\n velocities : float or sequence of float\n A target velocity value to set on all degrees of freedom, or a list\n containing one such value for each degree of freedom. For rotational\n degrees of freedom, these values must be in radians / second."}
{"_id": "q_16927", "text": "Set the CFM values for this object's degrees of freedom.\n\n Parameters\n ----------\n cfms : float or sequence of float\n A CFM value to set on all degrees of freedom, or a list\n containing one such value for each degree of freedom."}
{"_id": "q_16928", "text": "Set the CFM values for this object's DOF limits.\n\n Parameters\n ----------\n stop_cfms : float or sequence of float\n A CFM value to set on all degrees of freedom limits, or a list\n containing one such value for each degree of freedom limit."}
{"_id": "q_16929", "text": "Set the ERP values for this object's DOF limits.\n\n Parameters\n ----------\n stop_erps : float or sequence of float\n An ERP value to set on all degrees of freedom limits, or a list\n containing one such value for each degree of freedom limit."}
{"_id": "q_16930", "text": "Set the linear axis of displacement for this joint.\n\n Parameters\n ----------\n axes : list containing one 3-tuple of floats\n A list of the axes for this joint. For a slider joint, which has one\n degree of freedom, this must contain one 3-tuple specifying the X,\n Y, and Z axis for the joint."}
{"_id": "q_16931", "text": "Set the angular axis of rotation for this joint.\n\n Parameters\n ----------\n axes : list containing one 3-tuple of floats\n A list of the axes for this joint. For a hinge joint, which has one\n degree of freedom, this must contain one 3-tuple specifying the X,\n Y, and Z axis for the joint."}
{"_id": "q_16932", "text": "Create a new body.\n\n Parameters\n ----------\n shape : str\n The \"shape\" of the body to be created. This should name a type of\n body object, e.g., \"box\" or \"cap\".\n name : str, optional\n The name to use for this body. If not given, a default name will be\n constructed of the form \"{shape}{# of objects in the world}\".\n\n Returns\n -------\n body : :class:`Body`\n The created body object."}
{"_id": "q_16933", "text": "Create a new joint that connects two bodies together.\n\n Parameters\n ----------\n shape : str\n The \"shape\" of the joint to use for joining together two bodies.\n This should name a type of joint, such as \"ball\" or \"piston\".\n body_a : str or :class:`Body`\n The first body to join together with this joint. If a string is\n given, it will be used as the name of a body to look up in the\n world.\n body_b : str or :class:`Body`, optional\n If given, identifies the second body to join together with\n ``body_a``. If not given, ``body_a`` is joined to the world.\n name : str, optional\n If given, use this name for the created joint. If not given, a name\n will be constructed of the form\n \"{body_a.name}^{shape}^{body_b.name}\".\n\n Returns\n -------\n joint : :class:`Joint`\n The joint object that was created."}
{"_id": "q_16934", "text": "Move one body to be near another one.\n\n After moving, the location described by ``offset_a`` on ``body_a`` will\n be coincident with the location described by ``offset_b`` on ``body_b``.\n\n Parameters\n ----------\n body_a : str or :class:`Body`\n The body to use as a reference for moving the other body. If this is\n a string, it is treated as the name of a body to look up in the\n world.\n body_b : str or :class:`Body`\n The body to move next to ``body_a``. If this is a string, it is\n treated as the name of a body to look up in the world.\n offset_a : 3-tuple of float\n The offset of the anchor point, given as a relative fraction of the\n size of ``body_a``. See :func:`Body.relative_offset_to_world`.\n offset_b : 3-tuple of float\n The offset of the anchor point, given as a relative fraction of the\n size of ``body_b``.\n\n Returns\n -------\n anchor : 3-tuple of float\n The location of the shared point, which is often useful to use as a\n joint anchor."}
{"_id": "q_16935", "text": "Set the states of some bodies in the world.\n\n Parameters\n ----------\n states : sequence of states\n A complete state tuple for one or more bodies in the world. See\n :func:`get_body_states`."}
{"_id": "q_16936", "text": "Step the world forward by one frame.\n\n Parameters\n ----------\n substeps : int, optional\n Split the step into this many sub-steps. This helps to prevent the\n time delta for an update from being too large."}
{"_id": "q_16937", "text": "Determine whether the given bodies are currently connected.\n\n Parameters\n ----------\n body_a : str or :class:`Body`\n One body to test for connectedness. If this is a string, it is\n treated as the name of a body to look up.\n body_b : str or :class:`Body`\n One body to test for connectedness. If this is a string, it is\n treated as the name of a body to look up.\n\n Returns\n -------\n connected : bool\n Return True iff the two bodies are connected."}
{"_id": "q_16938", "text": "Traverse the bone hierarchy and create physics bodies."}
{"_id": "q_16939", "text": "Parse information about corporations from given field identified\n by `datafield` parameter.\n\n Args:\n datafield (str): MARC field ID (\"``110``\", \"``610``\", etc..)\n subfield (str): MARC subfield ID with name, which is typically\n stored in \"``a``\" subfield.\n roles (str): specify which roles you need. Set to ``[\"any\"]`` for\n any role, ``[\"dst\"]`` for distributors, etc.. For\n details, see\n http://www.loc.gov/marc/relators/relaterm.html\n\n Returns:\n list: :class:`Corporation` objects."}
{"_id": "q_16940", "text": "Get list of VALID ISBN.\n\n Returns:\n list: List with *valid* ISBN strings."}
{"_id": "q_16941", "text": "Content of field ``856u42``. Typically URL pointing to the producer's\n homepage.\n\n Returns:\n list: List of URLs defined by producer."}
{"_id": "q_16942", "text": "URLs, which may point to edeposit, aleph, kramerius and so on.\n\n Fields ``856u40``, ``998a`` and ``URLu``.\n\n Returns:\n list: List of internal URLs."}
{"_id": "q_16943", "text": "r'''Create a callable that implements a PID controller.\n\n A PID controller returns a control signal :math:`u(t)` given a history of\n error measurements :math:`e(0) \\dots e(t)`, using proportional (P), integral\n (I), and derivative (D) terms, according to:\n\n .. math::\n\n u(t) = kp * e(t) + ki * \\int_{s=0}^t e(s) ds + kd * \\frac{de(s)}{ds}(t)\n\n The proportional term is just the current error, the integral term is the\n sum of all error measurements, and the derivative term is the instantaneous\n derivative of the error measurement.\n\n Parameters\n ----------\n kp : float\n The weight associated with the proportional term of the PID controller.\n ki : float\n The weight associated with the integral term of the PID controller.\n kd : float\n The weight associated with the derivative term of the PID controller.\n smooth : float in [0, 1]\n Derivative values will be smoothed with this exponential average. A\n value of 1 never incorporates new derivative information, a value of 0.5\n uses the mean of the historic and new information, and a value of 0\n discards historic information (i.e., the derivative in this case will be\n unsmoothed). The default is 0.1.\n\n Returns\n -------\n controller : callable (float, float) -> float\n Returns a function that accepts an error measurement and a delta-time\n value since the previous measurement, and returns a control signal."}
{"_id": "q_16944", "text": "Load a skeleton definition from an ASF text file.\n\n Parameters\n ----------\n source : str or file\n A filename or file-like object that contains text information\n describing a skeleton, in ASF format."}
{"_id": "q_16945", "text": "Set PID parameters for all joints in the skeleton.\n\n Parameters for this method are passed directly to the `pid` constructor."}
{"_id": "q_16946", "text": "Get a list of the indices for a specific joint.\n\n Parameters\n ----------\n name : str\n The name of the joint to look up.\n\n Returns\n -------\n list of int :\n A list of the index values for quantities related to the named\n joint. Often useful for getting, say, the angles for a specific\n joint in the skeleton."}
{"_id": "q_16947", "text": "Get a list of the indices for a specific body.\n\n Parameters\n ----------\n name : str\n The name of the body to look up.\n step : int, optional\n The number of numbers for each body. Defaults to 3, should be set\n to 4 for body rotation (since quaternions have 4 values).\n\n Returns\n -------\n list of int :\n A list of the index values for quantities related to the named body."}
{"_id": "q_16948", "text": "Get the current joint separations for the skeleton.\n\n Returns\n -------\n distances : list of float\n A list expressing the distance between the two joint anchor points,\n for each joint in the skeleton. These quantities describe how\n \"exploded\" the bodies in the skeleton are; a value of 0 indicates\n that the constraints are perfectly satisfied for that joint."}
{"_id": "q_16949", "text": "Add torques for each degree of freedom in the skeleton.\n\n Parameters\n ----------\n torques : list of float\n A list of the torques to add to each degree of freedom in the\n skeleton."}
{"_id": "q_16950", "text": "Runs the post-proc script."}
{"_id": "q_16951", "text": "Makes the mesh using gmsh."}
{"_id": "q_16952", "text": "Reads a history output report."}
{"_id": "q_16953", "text": "Reads a field output report."}
{"_id": "q_16954", "text": "Converts a list-like to string with given line width."}
{"_id": "q_16955", "text": "Returns an Abaqus INP formatted string for a given linear equation."}
{"_id": "q_16956", "text": "Returns a set as inp string with unsorted option."}
{"_id": "q_16957", "text": "Parses the API response and raises appropriate\n errors if raise_errors was set to True\n\n :param response: response from requests http call\n :returns: dictionary of response\n :rtype: dict"}
{"_id": "q_16958", "text": "Builds the url for the specified method and arguments and returns\n the response as a dictionary."}
{"_id": "q_16959", "text": "Return the names of our marker labels in canonical order."}
{"_id": "q_16960", "text": "Load marker data from a CSV file.\n\n The file will be imported using Pandas, which must be installed to use\n this method. (``pip install pandas``)\n\n The first line of the CSV file will be used for header information. The\n \"time\" column will be used as the index for the data frame. There must\n be columns named 'markerAB-foo-x','markerAB-foo-y','markerAB-foo-z', and\n 'markerAB-foo-c' for marker 'foo' to be included in the model.\n\n Parameters\n ----------\n filename : str\n Name of the CSV file to load."}
{"_id": "q_16961", "text": "Load attachment configuration from the given text source.\n\n The attachment configuration file has a simple format. After discarding\n Unix-style comments (any part of a line that starts with the pound (#)\n character), each line in the file is then expected to have the following\n format::\n\n marker-name body-name X Y Z\n\n The marker name must correspond to an existing \"channel\" in our marker\n data. The body name must correspond to a rigid body in the skeleton. The\n X, Y, and Z coordinates specify the body-relative offsets where the\n marker should be attached: 0 corresponds to the center of the body along\n the given axis, while -1 and 1 correspond to the minimal (maximal,\n respectively) extent of the body's bounding box along the corresponding\n dimension.\n\n Parameters\n ----------\n source : str or file-like\n A filename or file-like object that we can use to obtain text\n configuration that describes how markers are attached to skeleton\n bodies.\n\n skeleton : :class:`pagoda.skeleton.Skeleton`\n The skeleton to attach our marker data to."}
{"_id": "q_16962", "text": "Reposition markers to a specific frame of data.\n\n Parameters\n ----------\n frame_no : int\n The frame of data where we should reposition marker bodies. Markers\n will be positioned in the appropriate places in world coordinates.\n In addition, linear velocities of the markers will be set according\n to the data as long as there are no dropouts in neighboring frames."}
{"_id": "q_16963", "text": "Get a list of the distances between markers and their attachments.\n\n Returns\n -------\n distances : ndarray of shape (num-markers, 3)\n Array of distances for each marker joint in our attachment setup. If\n a marker does not currently have an associated joint (e.g. because\n it is not currently visible) this will contain NaN for that row."}
{"_id": "q_16964", "text": "Return an array of the forces exerted by marker springs.\n\n Notes\n -----\n\n The forces exerted by the marker springs can be approximated by::\n\n F = kp * dx\n\n where ``dx`` is the current array of marker distances. An even more\n accurate value is computed by approximating the velocity of the spring\n displacement::\n\n F = kp * dx + kd * (dx - dx_tm1) / dt\n\n where ``dx_tm1`` is an array of distances from the previous time step.\n\n Parameters\n ----------\n dx_tm1 : ndarray\n An array of distances from markers to their attachment targets,\n measured at the previous time step.\n\n Returns\n -------\n F : ndarray\n An array of forces that the markers are exerting on the skeleton."}
{"_id": "q_16965", "text": "Create and configure a skeleton in our model.\n\n Parameters\n ----------\n filename : str\n The name of a file containing skeleton configuration data.\n pid_params : dict, optional\n If given, use this dictionary to set the PID controller\n parameters on each joint in the skeleton. See\n :func:`pagoda.skeleton.pid` for more information."}
{"_id": "q_16966", "text": "Load marker data and attachment preferences into the model.\n\n Parameters\n ----------\n filename : str\n The name of a file containing marker data. This currently needs to\n be either a .C3D or a .CSV file. CSV files must adhere to a fairly\n strict column naming convention; see :func:`Markers.load_csv` for\n more information.\n attachments : str\n The name of a text file specifying how markers are attached to\n skeleton bodies.\n max_frames : number, optional\n Only read in this many frames of marker data. By default, the entire\n data file is read into memory.\n\n Returns\n -------\n markers : :class:`Markers`\n Returns a markers object containing loaded marker data as well as\n skeleton attachment configuration."}
{"_id": "q_16967", "text": "Advance the physics world by one step.\n\n Typically this is called as part of a :class:`pagoda.viewer.Viewer`, but\n it can also be called manually (or some other stepping mechanism\n entirely can be used)."}
{"_id": "q_16968", "text": "Settle the skeleton to our marker data at a specific frame.\n\n Parameters\n ----------\n frame_no : int, optional\n Settle the skeleton to marker data at this frame. Defaults to 0.\n max_distance : float, optional\n The settling process will stop when the mean marker distance falls\n below this threshold. Defaults to 0.1m (10cm). Setting this too\n small prevents the settling process from finishing (it will loop\n indefinitely), and setting it too large prevents the skeleton from\n settling to a stable state near the markers.\n max_iters : int, optional\n Attempt to settle markers for at most this many iterations. Defaults\n to 1000.\n states : list of body states, optional\n If given, set the bodies in our skeleton to these kinematic states\n before starting the settling process."}
{"_id": "q_16969", "text": "Iterate over a set of marker data, dragging its skeleton along.\n\n Parameters\n ----------\n start : int, optional\n Start following marker data after this frame. Defaults to 0.\n end : int, optional\n Stop following marker data after this frame. Defaults to the end of\n the marker data.\n states : list of body states, optional\n If given, set the states of the skeleton bodies to these values\n before starting to follow the marker data."}
{"_id": "q_16970", "text": "Follow a set of angle data, yielding dynamic joint torques.\n\n Parameters\n ----------\n angles : ndarray (num-frames x num-dofs)\n Follow angle data provided by this array of angle values.\n start : int, optional\n Start following angle data after this frame. Defaults to the start\n of the angle data.\n end : int, optional\n Stop following angle data after this frame. Defaults to the end of\n the angle data.\n states : list of body states, optional\n If given, set the states of the skeleton bodies to these values\n before starting to follow the marker data.\n max_force : float, optional\n Allow each degree of freedom in the skeleton to exert at most this\n force when attempting to follow the given joint angles. Defaults to\n 100N. Setting this value to be large results in more accurate\n following but can cause oscillations in the PID controllers,\n resulting in noisy torques.\n\n Returns\n -------\n torques : sequence of torque frames\n Returns a generator of joint torque data for the skeleton. One set\n of joint torques will be generated for each frame of angle data\n between `start` and `end`."}
{"_id": "q_16971", "text": "Sort values, but put numbers after alphabetically sorted words.\n\n This function is here to make outputs diff-compatible with Aleph.\n\n Example::\n >>> sorted([\"b\", \"1\", \"a\"])\n ['1', 'a', 'b']\n >>> resorted([\"b\", \"1\", \"a\"])\n ['a', 'b', '1']\n\n Args:\n values (iterable): any iterable object/list/tuple/whatever.\n\n Returns:\n list of sorted values, but with numbers after words"}
{"_id": "q_16972", "text": "Draw all bodies in the world."}
{"_id": "q_16973", "text": "Writes a field report and rewrites it in a cleaner format."}
{"_id": "q_16974", "text": "List components that are available on your machine"}
{"_id": "q_16975", "text": "Get room stream to listen for messages.\n\n Kwargs:\n error_callback (func): Callback to call when an error occurred (parameters: exception)\n live (bool): If True, issue a live stream, otherwise an offline stream\n\n Returns:\n :class:`Stream`. Stream"}
{"_id": "q_16976", "text": "Set the room name.\n\n Args:\n name (str): Name\n\n Returns:\n bool. Success"}
{"_id": "q_16977", "text": "Returns a list of paths specified by the XDG_CONFIG_DIRS environment\n variable or the appropriate default.\n\n The list is sorted by precedence, with the most important item coming\n *last* (required by the existing config_resolver logic)."}
{"_id": "q_16978", "text": "Searches for an appropriate config file. If found, loads the file into\n the current instance. This method can also be used to reload a\n configuration. Note that you may want to set ``reload`` to ``True`` to\n clear the configuration before loading in that case. Without doing\n that, values will remain available even if they have been removed from\n the config files.\n\n :param reload: if set to ``True``, the existing values are cleared\n before reloading.\n :param require_load: If set to ``True`` this will raise a\n :py:exc:`IOError` if no config file has been found\n to load."}
{"_id": "q_16979", "text": "Return an error message for use in exceptions thrown by\n subclasses."}
{"_id": "q_16980", "text": "This method will be called to set Series data"}
{"_id": "q_16981", "text": "sets the graph plotting options"}
{"_id": "q_16982", "text": "Get styles."}
{"_id": "q_16983", "text": "Create a connection with given settings.\n\n Args:\n settings (dict): A dictionary of settings\n\n Returns:\n :class:`Connection`. The connection"}
{"_id": "q_16984", "text": "Issue a PUT request.\n\n Kwargs:\n url (str): Destination URL\n post_data (dict): Dictionary of parameter and values\n parse_data (bool): If true, parse response data\n key (string): If parse_data==True, look for this key when parsing data\n parameters (dict): Additional GET parameters to append to the URL\n\n Returns:\n dict. Response (a dict with keys: success, data, info, body)\n \n Raises:\n AuthenticationError, ConnectionError, urllib2.HTTPError, ValueError, Exception"}
{"_id": "q_16985", "text": "Issue a POST request.\n\n Kwargs:\n url (str): Destination URL\n post_data (dict): Dictionary of parameter and values\n parse_data (bool): If true, parse response data\n key (string): If parse_data==True, look for this key when parsing data\n parameters (dict): Additional GET parameters to append to the URL\n listener (func): callback called when uploading a file\n\n Returns:\n dict. Response (a dict with keys: success, data, info, body)\n \n Raises:\n AuthenticationError, ConnectionError, urllib2.HTTPError, ValueError, Exception"}
{"_id": "q_16986", "text": "Issue a GET request.\n\n Kwargs:\n url (str): Destination URL\n parse_data (bool): If true, parse response data\n key (string): If parse_data==True, look for this key when parsing data\n parameters (dict): Additional GET parameters to append to the URL\n\n Returns:\n dict. Response (a dict with keys: success, data, info, body)\n\n Raises:\n AuthenticationError, ConnectionError, urllib2.HTTPError, ValueError, Exception"}
{"_id": "q_16987", "text": "Get headers.\n\n Returns:\n tuple: Headers"}
{"_id": "q_16988", "text": "Get URL used for authentication\n\n Returns:\n string: URL"}
{"_id": "q_16989", "text": "Parses a response.\n\n Args:\n text (str): Text to parse\n\n Kwargs:\n key (str): Key to look for, if any\n\n Returns:\n Parsed value\n\n Raises:\n ValueError"}
{"_id": "q_16990", "text": "Issue a request.\n\n Args:\n method (str): Request method (GET/POST/PUT/DELETE/etc.) If not specified, it will be POST if post_data is not None\n\n Kwargs:\n url (str): Destination URL\n post_data (str): A string of what to POST\n parse_data (bool): If true, parse response data\n key (string): If parse_data==True, look for this key when parsing data\n parameters (dict): Additional GET parameters to append to the URL\n listener (func): callback called when uploading a file\n full_return (bool): If set to True, get a full response (with success, data, info, body)\n\n Returns:\n dict. Response. If full_return==True, a dict with keys: success, data, info, body, otherwise the parsed data\n\n Raises:\n AuthenticationError, ConnectionError, urllib2.HTTPError, ValueError"}
{"_id": "q_16991", "text": "Get rooms list.\n\n Kwargs:\n sort (bool): If True, sort rooms by name\n\n Returns:\n array. List of rooms (each room is a dict)"}
{"_id": "q_16992", "text": "Get room.\n\n Returns:\n :class:`Room`. Room"}
{"_id": "q_16993", "text": "Get user.\n\n Returns:\n :class:`User`. User"}
{"_id": "q_16994", "text": "Search transcripts.\n\n Args:\n terms (str): Terms for search\n\n Returns:\n array. Messages"}
{"_id": "q_16995", "text": "Called when incoming messages arrive.\n\n Args:\n messages (tuple): Messages (each message is a dict)"}
{"_id": "q_16996", "text": "Fetch new messages."}
{"_id": "q_16997", "text": "Called when new messages arrive.\n\n Args:\n messages (tuple): Messages"}
{"_id": "q_16998", "text": "Callback issued by twisted when new line arrives.\n\n Args:\n line (str): Incoming line"}
{"_id": "q_16999", "text": "Process data.\n\n Args:\n data (str): Incoming data"}
{"_id": "q_17000", "text": "Cleanup code after asked to stop producing.\n\n Kwargs:\n forced (bool): If True, we were forced to stop"}
{"_id": "q_17001", "text": "Send a block of bytes to the consumer.\n\n Args:\n block (str): Block of bytes"}
{"_id": "q_17002", "text": "Returns total length for this request.\n\n Returns:\n int. Length"}
{"_id": "q_17003", "text": "Build headers for each field."}
{"_id": "q_17004", "text": "A strategy which generates filesystem path values.\n\n The generated values include everything which the builtin\n :func:`python:open` function accepts i.e. which won't lead to\n :exc:`ValueError` or :exc:`TypeError` being raised.\n\n Note that the range of the returned values depends on the operating\n system, the Python version, and the filesystem encoding as returned by\n :func:`sys.getfilesystemencoding`.\n\n :param allow_pathlike:\n If :obj:`python:None` makes the strategy include objects implementing\n the :class:`python:os.PathLike` interface when Python >= 3.6 is used.\n If :obj:`python:False` no pathlike objects will be generated. If\n :obj:`python:True` pathlike will be generated (Python >= 3.6 required)\n\n :type allow_pathlike: :obj:`python:bool` or :obj:`python:None`\n\n .. versionadded:: 3.15"}
{"_id": "q_17005", "text": "Convert str_value to an int or a float, depending on the\n numeric value represented by str_value."}
{"_id": "q_17006", "text": "Tag to plot graphs into the template"}
{"_id": "q_17007", "text": "exec compiled code"}
{"_id": "q_17008", "text": "replace all blocks in extends with current blocks"}
{"_id": "q_17009", "text": "Processes the texts using TweeboParse and returns them in CoNLL format.\n\n :param texts: The List of Strings to be processed by TweeboParse.\n :param retry_count: The number of times it has retried for. Default\n 0 does not require setting, main purpose is for\n recursion.\n :return: A list of CoNLL formatted strings.\n :raises ServerError: Caused when the server is not running.\n :raises :py:class:`requests.exceptions.HTTPError`: Caused when the\n input texts are not formatted correctly e.g. When you give it a\n String not a list of Strings.\n :raises :py:class:`json.JSONDecodeError`: Caused if after self.retries\n attempts to parse the data it cannot decode the data.\n\n :Example:"}
{"_id": "q_17010", "text": "Set entity data\n\n Args:\n data (dict): Entity data\n datetime_fields (array): Fields that should be parsed as datetimes"}
{"_id": "q_17011", "text": "validates XML text"}
{"_id": "q_17012", "text": "validates XML name"}
{"_id": "q_17013", "text": "Try really really hard to get a Unicode copy of a string.\n\n First try :class:`BeautifulSoup.UnicodeDammit` to try to force\n to Unicode; if that fails, assume UTF-8 encoding, and ignore\n all errors.\n\n :param str raw: string to coerce\n :return: Unicode approximation of `raw`\n :returntype: :class:`unicode`"}
{"_id": "q_17014", "text": "Get a clean text representation of presumed HTML.\n\n Treat `raw` as though it is HTML, even if we have no idea what it\n really is, and attempt to get a properly formatted HTML document\n with all HTML-escaped characters converted to their unicode.\n\n This is called below by the `clean_html` transform stage, which\n interprets MIME-type. If `character_encoding` is not provided,\n and `stream_item` is provided, then this falls back to\n :attr:`streamcorpus.StreamItem.body.encoding`.\n\n :param str raw: raw text to clean up\n :param stream_item: optional stream item with encoding metadata\n :type stream_item: :class:`streamcorpus.StreamItem`\n :returns: UTF-8-encoded byte string of cleaned HTML text\n :returntype: :class:`str`"}
{"_id": "q_17015", "text": "Give the actors, the world, and the messaging system a chance to react \n to the end of the game."}
{"_id": "q_17016", "text": "extract a lower-case, no-slashes domain name from a raw string\n that might be a URL"}
{"_id": "q_17017", "text": "returns a list of strings created by splitting the domain on\n '.' and successively cutting off the left most portion"}
{"_id": "q_17018", "text": "Get a Murmur hash and a normalized token.\n\n `tok` may be a :class:`unicode` string or a UTF-8-encoded\n byte string. :data:`DOCUMENT_HASH_KEY`, hash value 0, is\n reserved for the document count, and this function remaps\n that value.\n\n :param tok: token to hash\n :return: pair of normalized `tok` and its hash"}
{"_id": "q_17019", "text": "Collect all of the words to be indexed from a stream item.\n\n This scans `si` for all of the configured tagger IDs. It\n collects all of the token values (the\n :attr:`streamcorpus.Token.token`) and returns a\n :class:`collections.Counter` of them.\n\n :param si: stream item to scan\n :type si: :class:`streamcorpus.StreamItem`\n :return: counter of :class:`unicode` words to index\n :returntype: :class:`collections.Counter`"}
{"_id": "q_17020", "text": "Get stream IDs for a single hash.\n\n This yields strings that can be retrieved using\n :func:`streamcorpus_pipeline._kvlayer.get_kvlayer_stream_item`,\n or fed back into :mod:`coordinate` or other job queue systems.\n\n Note that for common terms this can return a large number of\n stream IDs! This is a scan over a dense region of a\n :mod:`kvlayer` table so it should be reasonably efficient,\n but be prepared for it to return many documents in a large\n corpus. Blindly storing the results in a :class:`list`\n may be inadvisable.\n\n This will return nothing unless the index was written with\n :attr:`hash_docs` set. No document will correspond to\n :data:`DOCUMENT_HASH_KEY`; use\n :data:`DOCUMENT_HASH_KEY_REPLACEMENT` instead.\n\n :param int h: Murmur hash to look up"}
{"_id": "q_17021", "text": "Create a ContentItem from a node in the spinn3r data tree.\n\n The ContentItem is created with raw data set to ``node.data``,\n decompressed if the node's encoding is 'zlib', and UTF-8\n normalized, with a MIME type from ``node.mime_type``.\n\n ``node``\n the actual node from the spinn3r protobuf data\n ``mime_type``\n string MIME type to use (defaults to ``node.mime_type``)\n ``alternate_data``\n alternate (compressed) data to use, if ``node.data`` is missing\n or can't be decompressed"}
{"_id": "q_17022", "text": "Read exactly a varint out of the underlying file."}
{"_id": "q_17023", "text": "Change working directory and restore the previous on exit"}
{"_id": "q_17024", "text": "Removes the suffix, if it's there, otherwise returns input string unchanged.\n If strict is True, also ensures the suffix was present"}
{"_id": "q_17025", "text": "Are all the elements of needle contained in haystack, and in the same order?\n There may be other elements interspersed throughout"}
{"_id": "q_17026", "text": "Stop the simple WSGI server running the application."}
{"_id": "q_17027", "text": "Decorator to add a callback that generates error page.\n\n The *status* parameter specifies the HTTP response status code\n for which the decorated callback should be invoked. If the\n *status* argument is not specified, then the decorated callable\n is considered to be a fallback callback.\n\n A fallback callback, when defined, is invoked to generate the\n error page for any HTTP response representing an error when\n there is no error handler defined explicitly for the response\n code of the HTTP response.\n\n Arguments:\n status(int, optional): HTTP response status code.\n\n Returns:\n function: Decorator function to add error handler."}
{"_id": "q_17028", "text": "Send content of a static file as response.\n\n The path to the document root directory should be specified as\n the root argument. This is very important to prevent directory\n traversal attack. This method guarantees that only files within\n the document root directory are served and no files outside this\n directory can be accessed by a client.\n\n The path to the actual file to be returned should be specified\n as the path argument. This path must be relative to the document\n directory.\n\n The *media_type* and *charset* arguments are used to set the\n Content-Type header of the HTTP response. If *media_type*\n is not specified or specified as ``None`` (the default), then it\n is guessed from the filename of the file to be returned.\n\n Arguments:\n root (str): Path to document root directory.\n path (str): Path to file relative to document root directory.\n media_type (str, optional): Media type of file.\n charset (str, optional): Character set of file.\n\n Returns:\n bytes: Content of file to be returned in the HTTP response."}
{"_id": "q_17029", "text": "Return an error page for the current response status."}
{"_id": "q_17030", "text": "Resolve a request to a route handler.\n\n Arguments:\n method (str): HTTP method, e.g. GET, POST, etc. (type: str)\n path (str): Request path\n\n Returns:\n tuple or None: A tuple of three items:\n\n 1. Route handler (callable)\n 2. Positional arguments (list)\n 3. Keyword arguments (dict)\n\n ``None`` if no route matches the request."}
{"_id": "q_17031", "text": "Return the HTTP response body.\n\n Returns:\n bytes: HTTP response body as a sequence of bytes"}
{"_id": "q_17032", "text": "Add a Set-Cookie header to response object.\n\n For a description about cookie attribute values, see\n https://docs.python.org/3/library/http.cookies.html#http.cookies.Morsel.\n\n Arguments:\n name (str): Name of the cookie\n value (str): Value of the cookie\n attrs (dict): Dictionary with cookie attribute keys and\n values."}
{"_id": "q_17033", "text": "Return the HTTP response status line.\n\n The status line is determined from :attr:`status` code. For\n example, if the status code is 200, then '200 OK' is returned.\n\n Returns:\n str: Status line"}
{"_id": "q_17034", "text": "Return the value of Content-Type header field.\n\n The value for the Content-Type header field is determined from\n the :attr:`media_type` and :attr:`charset` data attributes.\n\n Returns:\n str: Value of Content-Type header field"}
{"_id": "q_17035", "text": "Return the list of all values for the specified key.\n\n Arguments:\n key (object): Key\n default (list): Default value to return if the key does not\n exist, defaults to ``[]``, i.e. an empty list.\n\n Returns:\n list: List of all values for the specified key if the key\n exists, ``default`` otherwise."}
{"_id": "q_17036", "text": "Template variables."}
{"_id": "q_17037", "text": "return list of open files for current process\n\n .. warning: will only work on UNIX-like os-es."}
{"_id": "q_17038", "text": "get a rejester.WorkUnit with KBA s3 path, fetch it, and save\n some counts about it."}
{"_id": "q_17039", "text": "attempt a fetch and iteration over a work_unit.key path in s3"}
{"_id": "q_17040", "text": "Return a list of non-empty lines from `file_path`."}
{"_id": "q_17041", "text": "Return an ordered 2-tuple containing a species and a describer."}
{"_id": "q_17042", "text": "Return an ordered 2-tuple containing a species and a describer.\n The letter-count of the pair is guarantee to not exceed `maxlen` if\n it is given. If `prevent_stutter` is True, the last letter of the\n first item of the pair will be different from the first letter of\n the second item."}
{"_id": "q_17043", "text": "Use when application is starting."}
{"_id": "q_17044", "text": "Catch a connection asynchronously."}
{"_id": "q_17045", "text": "Initialize self."}
{"_id": "q_17046", "text": "Asynchronously wait for a connection from the pool."}
{"_id": "q_17047", "text": "Listen for an id from the server.\n\n At the beginning of a game, each client receives an IdFactory from the \n server. This factory is used to give id numbers that are guaranteed \n to be unique to tokens that are created locally. This method checks to see if such \n a factory has been received. If it hasn't, this method does not block \n and immediately returns False. If it has, this method returns True \n after saving the factory internally. At this point it is safe to enter \n the GameStage."}
{"_id": "q_17048", "text": "Register the given callback to be called whenever the method with the \n given name is called. You can easily take advantage of this feature in \n token extensions by using the @watch_token decorator."}
{"_id": "q_17049", "text": "Morphological analysis for Japanese."}
{"_id": "q_17050", "text": "Scoring the similarity of two words."}
{"_id": "q_17051", "text": "Convert the Japanese to Hiragana or Katakana."}
{"_id": "q_17052", "text": "Extract unique representation from sentence."}
{"_id": "q_17053", "text": "Summarize reviews into a short summary."}
{"_id": "q_17054", "text": "Extract \"keywords\" from an input document."}
{"_id": "q_17055", "text": "Converts XML tree to event generator"}
{"_id": "q_17056", "text": "Converts events stream into lXML tree"}
{"_id": "q_17057", "text": "Parses file content into events stream"}
{"_id": "q_17058", "text": "locates ENTER peer for each EXIT object. Convenient when selectively\n filtering out XML markup"}
{"_id": "q_17059", "text": "Create a pipeline stage.\n\n Instantiates `stage` with `config`. This essentially\n translates to ``stage(config)``, except that two keys from\n `scp_config` are injected into the configuration:\n ``tmp_dir_path`` is an execution-specific directory from\n combining the top-level ``tmp_dir_path`` configuration with\n :attr:`tmp_dir_suffix`; and ``third_dir_path`` is the same\n path from the top-level configuration. `stage` may be either\n a callable returning the stage (e.g. its class), or its name\n in the configuration.\n\n `scp_config` is the configuration for the pipeline as a\n whole, and is required. `config` is the configuration for\n the stage; if it is :const:`None` then it is extracted\n from `scp_config`.\n\n If you already have a fully formed configuration block\n and want to create a stage, you can call\n\n .. code-block:: python\n\n factory.registry[stage](stage_config)\n\n In most cases if you have a stage class object and want to\n instantiate it with its defaults you can call\n\n .. code-block:: python\n\n stage = stage_cls(stage_cls.default_config)\n\n .. note:: This mirrors\n :meth:`yakonfig.factory.AutoFactory.create`, with\n some thought that this factory class might migrate\n to using that as a base in the future.\n\n :param stage: pipeline stage class, or its name in the registry\n :param dict scp_config: configuration block for the pipeline\n :param dict config: configuration block for the stage, or\n :const:`None` to get it from `scp_config`"}
{"_id": "q_17060", "text": "Run the pipeline.\n\n This runs all of the steps described in the pipeline constructor,\n reading from some input and writing to some output.\n\n :param str i_str: name of the input file, or other reader-specific\n description of where to get input\n :param int start_count: index of the first stream item\n :param int start_chunk_time: timestamp for the first stream item"}
{"_id": "q_17061", "text": "Run all of the writers over some intermediate chunk.\n\n :param int start_count: index of the first item\n :param int next_idx: index of the next item (after the last\n item in this chunk)\n :param list sources: source strings included in this chunk\n (usually only one source)\n :param str i_str: name of input file or other input\n :param str t_path: location of intermediate chunk on disk\n :return: list of output file paths or other outputs"}
{"_id": "q_17062", "text": "Run transforms on stream item.\n Item may be discarded by some transform.\n Writes successful items out to current self.t_chunk\n Returns transformed item or None."}
{"_id": "q_17063", "text": "construct BusinessDate instance from datetime.date instance,\n raise ValueError exception if not possible\n\n :param datetime.date datetime_date: calendar day\n :return bool:"}
{"_id": "q_17064", "text": "construct datetime.date instance represented calendar date of BusinessDate instance\n\n :return datetime.date:"}
{"_id": "q_17065", "text": "addition of a period object\n\n :param BusinessDate d:\n :param p:\n :type p: BusinessPeriod or str\n :param list holiday_obj:\n :return bankdate:"}
{"_id": "q_17066", "text": "addition of a number of months\n\n :param BusinessDate d:\n :param int month_int:\n :return bankdate:"}
{"_id": "q_17067", "text": "Replace the top-level pipeline configurable object.\n\n This investigates a number of sources, including\n `external_stages_path` and `external_stages_modules` configuration\n and `streamcorpus_pipeline.stages` entry points, and uses these to\n find the actual :data:`sub_modules` for\n :mod:`streamcorpus_pipeline`."}
{"_id": "q_17068", "text": "assemble in-doc coref chains by mapping equiv_id to tokens and\n their cleansed name strings\n\n :param sentences: iterator over token generators\n :returns dict:\n keys are equiv_ids,\n values are tuple(concatenated name string, list of tokens)"}
{"_id": "q_17069", "text": "For each name string in the target_mentions list, searches through\n all chain_mentions looking for any cleansed Token.token that\n contains the name. Returns True only if all of the target_mention\n strings appeared as substrings of at least one cleansed\n Token.token. Otherwise, returns False.\n\n :type target_mentions: list of basestring\n :type chain_mentions: list of basestring\n\n :returns bool:"}
{"_id": "q_17070", "text": "For each name string in the target_mentions list, searches through\n all chain_mentions looking for any cleansed Token.token that\n contains the name. Returns True if any of the target_mention\n strings appeared as substrings of any cleansed Token.token.\n Otherwise, returns False.\n\n :type target_mentions: list of basestring\n :type chain_mentions: list of basestring\n\n :returns bool:"}
{"_id": "q_17071", "text": "iterate through all tokens looking for matches of cleansed tokens\n or token regexes, skipping tokens left empty by cleansing and\n coping with Token objects that produce multiple space-separated\n strings when cleansed. Yields tokens that match."}
{"_id": "q_17072", "text": "iterate through tokens looking for near-exact matches to strings\n in si.ratings...mentions"}
{"_id": "q_17073", "text": "run tagger a child process to get XML output"}
{"_id": "q_17074", "text": "send SIGTERM to the tagger child process"}
{"_id": "q_17075", "text": "Replace all angle bracket emails with a unique key."}
{"_id": "q_17076", "text": "generate strings identified as sentences"}
{"_id": "q_17077", "text": "make a sortedcollection on body.labels"}
{"_id": "q_17078", "text": "assemble Sentence and Token objects"}
{"_id": "q_17079", "text": "Convert any HTML, XML, or numeric entities in the attribute values.\n For example '&amp;' becomes '&'.\n\n This is adapted from BeautifulSoup, which should be able to do the\n same thing when called like this --- but this fails to convert\n everything for some bug.\n\n text = unicode(BeautifulStoneSoup(text, convertEntities=BeautifulStoneSoup.XML_ENTITIES))"}
{"_id": "q_17080", "text": "returns number of days for the given year and month\n\n :param int year: calendar year\n :param int month: calendar month\n :return int:"}
{"_id": "q_17081", "text": "Initialize the application."}
{"_id": "q_17082", "text": "Register connection's middleware and prepare self database."}
{"_id": "q_17083", "text": "Close all connections."}
{"_id": "q_17084", "text": "Register a model in self."}
{"_id": "q_17085", "text": "Manage a database connection."}
{"_id": "q_17086", "text": "make a temp file of cleansed text"}
{"_id": "q_17087", "text": "Convert a string of text into a lowercase string with no\n punctuation and only spaces for whitespace.\n\n :param span: string"}
{"_id": "q_17088", "text": "iterate through the i_chunk and tmp_ner_path to generate a new\n Chunk with body.ner"}
{"_id": "q_17089", "text": "setup the config and load external modules\n\n This updates 'config' as follows:\n\n * All paths are replaced with absolute paths\n * A hash and JSON dump of the config are stored in the config\n * If 'pythonpath' is in the config, it is added to sys.path\n * If 'setup_modules' is in the config, all modules named in it are loaded"}
{"_id": "q_17090", "text": "This _looks_ like a Chunk only in that it generates StreamItem\n instances when iterated upon."}
{"_id": "q_17091", "text": "Takes an HTML-like binary string as input and returns a binary\n string of the same length with all tags replaced by whitespace.\n This also detects script and style tags, and replaces the text\n between them with whitespace.\n\n Pre-existing whitespace of any kind (newlines, tabs) is converted\n to single spaces ' ', which has the same byte length (and\n character length).\n\n Note: this does not change any characters like &rsquo; and &nbsp;,\n so taggers operating on this text must cope with such symbols.\n Converting them to some other character would change their byte\n length, even if equivalent from a character perspective.\n\n This is regex based, which can occasionally just hang..."}
{"_id": "q_17092", "text": "make a temp file of clean_visible text"}
{"_id": "q_17093", "text": "Convert a unicode string into a lowercase string with no\npunctuation and only spaces for whitespace.\n\nReplace PennTreebank escaped brackets with ' ':\n-LRB- -RRB- -RSB- -RSB- -LCB- -RCB-\n(The acronyms stand for (Left|Right) (Round|Square|Curly) Bracket.)\nhttp://www.cis.upenn.edu/~treebank/tokenization.html\n\n:param span: string"}
{"_id": "q_17094", "text": "manual test loop for make_clean_visible_from_raw"}
{"_id": "q_17095", "text": "Try to load a stage into self, ignoring errors.\n\n If loading a module fails because of some subordinate load\n failure, just give a warning and move on. On success the\n stage is added to the stage dictionary.\n\n :param str moduleName: name of the Python module\n :param str functionName: name of the stage constructor\n :param str name: name of the stage, defaults to `functionName`"}
{"_id": "q_17096", "text": "Add external stages from the Python module in `path`.\n\n `path` must be a path to a Python module source that contains\n a `Stages` dictionary, which is a map from stage name to callable.\n\n :param str path: path to the module file"}
{"_id": "q_17097", "text": "Add external stages from the Python module `mod`.\n\n If `mod` is a string, then it will be interpreted as the name\n of a module; otherwise it is an actual module object. The\n module should exist somewhere in :data:`sys.path`. The module\n must contain a `Stages` dictionary, which is a map from stage\n name to callable.\n\n :param mod: name of the module or the module itself\n :raise exceptions.ImportError: if `mod` cannot be loaded or does\n not contain ``Stages``"}
{"_id": "q_17098", "text": "iterates through idx_bytes until a byte in stop_bytes or a byte\n not in run_bytes.\n\n :rtype (int, string): idx of last byte and all of bytes including\n the terminal byte from stop_bytes or not in run_bytes"}
{"_id": "q_17099", "text": "Make a list of Labels for 'author' and the filtered hrefs &\n anchors"}
{"_id": "q_17100", "text": "Runs a series of parsers in sequence passing the result of each parser to the next.\n The result of the last parser is returned."}
{"_id": "q_17101", "text": "Returns the current token if it satisfies the guard function provided.\n \n Fails otherwise.\n This is a generalisation of one_of."}
{"_id": "q_17102", "text": "Applies the parser to input zero or more times.\n \n Returns a list of parser results."}
{"_id": "q_17103", "text": "Consumes as many of these as it can until the term is encountered.\n \n Returns a tuple of the list of these results and the term result"}
{"_id": "q_17104", "text": "Like sep but must consume at least one of parser."}
{"_id": "q_17105", "text": "fills the internal buffer from the source iterator"}
{"_id": "q_17106", "text": "Advances to and returns the next token or returns EndOfFile"}
{"_id": "q_17107", "text": "generate the data objects for every task"}
{"_id": "q_17108", "text": "get a random key out of the first max_iter rows"}
{"_id": "q_17109", "text": "Run a game being developed with the kxg game engine.\n\nUsage:\n {exe_name} sandbox [<num_ais>] [-v...]\n {exe_name} client [--host HOST] [--port PORT] [-v...]\n {exe_name} server <num_guis> [<num_ais>] [--host HOST] [--port PORT] [-v...] \n {exe_name} debug <num_guis> [<num_ais>] [--host HOST] [--port PORT] [-v...]\n {exe_name} --help\n\nCommands:\n sandbox\n Play a single-player game with the specified number of AIs. None of \n the multiplayer machinery will be used.\n\n client\n Launch a client that will try to connect to a server on the given host \n and port. Once it connects and the game starts, the client will allow \n you to play the game against any other connected clients.\n\n server\n Launch a server that will manage a game between the given number of \n human and AI players. The human players must connect using this \n command's client mode.\n\n debug\n Debug a multiplayer game locally. This command launches a server and \n the given number of clients all in different processes, and configures \n the logging system such that the output from each process can be easily \n distinguished.\n\nArguments:\n <num_guis>\n The number of human players that will be playing the game. Only needed \n by commands that will launch some sort of multiplayer server.\n\n <num_ais>\n The number of AI players that will be playing the game. Only needed by \n commands that will launch single-player games or multiplayer servers.\n\nOptions:\n -x --host HOST [default: {default_host}]\n The address of the machine running the server. Must be accessible from \n the machines running the clients.\n\n -p --port PORT [default: {default_port}]\n The port that the server should listen on. Don't specify a value less \n than 1024 unless the server is running with root permissions.\n\n -v --verbose \n Have the game engine log more information about what it's doing. You \n can specify this option several times to get more and more information.\n\nThis command is provided so that you can start writing your game with the least \npossible amount of boilerplate code. However, the clients and servers provided \nby this command are not capable of running a production game. Once you have \nwritten your game and want to give it a polished set of menus and options, \nyou'll have to write new Stage subclasses encapsulating that logic and you'll \nhave to call those stages yourself by interacting more directly with the \nTheater class. The online documentation has more information on this process."}
{"_id": "q_17110", "text": "Return database field type."}
{"_id": "q_17111", "text": "return True if okay, raise Exception if not"}
{"_id": "q_17112", "text": "This function is mostly about managing configuration, and then\n finally returns a boto.Bucket object.\n\n AWS credentials come first from config keys\n aws_access_key_id_path, aws_secret_access_key_path (paths to one\n line files); secondly from environment variables\n AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY; also from $HOME/.aws/credentials\n or the magic Amazon http://169.254.169.254/ service. If credentials\n are not set in the config then behavior is the same as other AWS-based\n command-line tools."}
{"_id": "q_17113", "text": "Given the raw data from s3, return a generator for the items\n contained in that data. A generator is necessary to support\n chunk files, but non-chunk files can be provided by a generator\n that yields exactly one item.\n\n Decoding works by case analysis on the config option\n ``input_format``. If an invalid ``input_format`` is given, then\n a ``ConfigurationError`` is raised."}
{"_id": "q_17114", "text": "return Chunk object full of records\n bucket_name may be None"}
{"_id": "q_17115", "text": "Convert a text stream ID to a kvlayer key.\n\n The return tuple can be used directly as a key in the\n :data:`STREAM_ITEMS_TABLE` table.\n\n :param str stream_id: stream ID to convert\n :return: :mod:`kvlayer` key tuple\n :raise exceptions.KeyError: if `stream_id` is malformed"}
{"_id": "q_17116", "text": "Convert a kvlayer key to a text stream ID.\n\n `k` should be of the same form produced by\n :func:`stream_id_to_kvlayer_key`.\n\n :param k: :mod:`kvlayer` key tuple\n :return: converted stream ID\n :returntype str:"}
{"_id": "q_17117", "text": "Get a kvlayer key from a stream item.\n\n The return tuple can be used directly as a key in the\n :data:`STREAM_ITEMS_TABLE` table. Note that this recalculates the\n stream ID, and if the internal data on the stream item is inconsistent\n then this could return a different result from\n :func:`stream_id_to_kvlayer_key`.\n\n :param si: stream item to get key for\n :return: :mod:`kvlayer` key tuple"}
{"_id": "q_17118", "text": "Create a session on the frontier silicon device."}
{"_id": "q_17119", "text": "Execute a frontier silicon API call."}
{"_id": "q_17120", "text": "Helper method for setting a value by using the fsapi API."}
{"_id": "q_17121", "text": "Helper method for fetching a text value."}
{"_id": "q_17122", "text": "Helper method for fetching a long value. Result is integer."}
{"_id": "q_17123", "text": "Check if the device is muted."}
{"_id": "q_17124", "text": "Mute or unmute the device."}
{"_id": "q_17125", "text": "Get the play status of the device."}
{"_id": "q_17126", "text": "Set device sleep timer."}
{"_id": "q_17127", "text": "Serve up some ponies."}
{"_id": "q_17128", "text": "Mutably tag tokens with xpath offsets.\n\n Given some stream item, this will tag all tokens from all taggings\n in the document that contain character offsets. Note that some\n tokens may not have computable xpath offsets, so an xpath offset\n for those tokens will not be set. (See the documentation and\n comments for ``char_offsets_to_xpaths`` for what it means for a\n token to have a computable xpath.)\n\n If a token can have its xpath offset computed, it is added to its\n set of offsets with a ``OffsetType.XPATH_CHARS`` key."}
{"_id": "q_17129", "text": "Convert stream item sentences to character ``Offset``s."}
{"_id": "q_17130", "text": "Convert character ``Offset``s to character ranges."}
{"_id": "q_17131", "text": "Converts HTML and a sequence of char offsets to xpath offsets.\n\n Returns a generator of :class:`streamcorpus.XpathRange` objects\n in correspondence with the sequence of ``char_offsets`` given.\n Namely, each ``XpathRange`` should address precisely the same text\n as that ``char_offsets`` (sans the HTML).\n\n Depending on how ``char_offsets`` was tokenized, it's possible that\n some tokens cannot have their xpaths generated reliably. In this\n case, a ``None`` value is yielded instead of a ``XpathRange``.\n\n ``char_offsets`` must be a sorted and non-overlapping sequence of\n character ranges. They do not have to be contiguous."}
{"_id": "q_17132", "text": "Record that `tag` has been seen at this depth.\n\n If `tag` is :class:`TextElement`, it records a text node."}
{"_id": "q_17133", "text": "Returns the one-based index of the current text node."}
{"_id": "q_17134", "text": "Yields all the elements from the source\n source - if an element, yields all child elements in order; if any other iterator yields the elements from that iterator"}
{"_id": "q_17135", "text": "Yields all the elements with the given name\n source - if an element, starts with all child elements in order; can also be any other iterator\n name - will yield only elements with this name"}
{"_id": "q_17136", "text": "Yields elements from the source whose name matches the given regular expression pattern\n source - if an element, starts with all child elements in order; can also be any other iterator\n pat - re.pattern object"}
{"_id": "q_17137", "text": "Yields elements from the source having the given attribute, optionally with the given attribute value\n source - if an element, starts with all child elements in order; can also be any other iterator\n name - attribute name to check\n val - if None check only for the existence of the attribute, otherwise compare the given value as well"}
{"_id": "q_17138", "text": "Yields elements and text which have the same parent as elem, but come afterward in document order"}
{"_id": "q_17139", "text": "Call inkscape CLI with arguments and returns its return value.\n\n Parameters\n ----------\n args_string: list of str\n\n inkscape_binpath: str\n\n Returns\n -------\n return_value\n Inkscape command CLI call return value."}
{"_id": "q_17140", "text": "Parse genotype from VCF line data"}
{"_id": "q_17141", "text": "toIndex - An optional method which will return the value prepped for index.\n\n\t\t\tBy default, \"toStorage\" will be called. If you provide \"hashIndex=True\" on the constructor,\n\t\t\tthe field will be md5summed for indexing purposes. This is useful for large strings, etc."}
{"_id": "q_17142", "text": "copy - Create a copy of this IRField.\n\n\t\t\t Each subclass should implement this, as you'll need to pass in the args to constructor.\n\n\t\t\t@return <IRField (or subclass)> - Another IRField that has all the same values as this one."}
{"_id": "q_17143", "text": "Return a Jinja2 environment for where file_path is.\n\n Parameters\n ----------\n file_path: str\n\n Returns\n -------\n jinja_env: Jinja2.Environment"}
{"_id": "q_17144", "text": "Setup self.template\n\n Parameters\n ----------\n template_file_path: str\n Document template file path."}
{"_id": "q_17145", "text": "Fill the content of the document with the information in doc_contents.\n\n Parameters\n ----------\n doc_contents: dict\n Set of values to set the template document.\n\n Returns\n -------\n filled_doc: str\n The content of the document with the template information filled."}
{"_id": "q_17146", "text": "Save the content of the .txt file in a text file.\n\n Parameters\n ----------\n file_path: str\n Path to the output file."}
{"_id": "q_17147", "text": "Factory function to create a specific document of the\n class given by the `command` or the extension of `template_file_path`.\n\n See get_doctype_by_command and get_doctype_by_extension.\n\n Parameters\n ----------\n template_file_path: str\n\n command: str\n\n Returns\n -------\n doc"}
{"_id": "q_17148", "text": "Save the content of the .text file in the PDF.\n\n Parameters\n ----------\n file_path: str\n Path to the output file."}
{"_id": "q_17149", "text": "Convert XML 1.0 to MicroXML\n\n source - XML 1.0 input\n handler - MicroXML events handler\n\n Returns uxml, extras\n\n uxml - MicroXML element extracted from the source\n extras - information to be preserved but not part of MicroXML, e.g. namespaces"}
{"_id": "q_17150", "text": "objHasUnsavedChanges - Check if any object has unsaved changes, cascading."}
{"_id": "q_17151", "text": "Check that a value has a certain JSON type.\n\n Raise TypeError if the type does not match.\n\n Supported types: str, int, float, bool, list, dict, and None.\n float will match any number, int will only match numbers without\n fractional part.\n\n The special type JList(x) will match a list value where each\n item is of type x:\n\n >>> assert_json_type([1, 2, 3], JList(int))"}
{"_id": "q_17152", "text": "Parse a fragment of markup in HTML mode, and return a bindery node\n\n Warning: if you pass a string, you must make sure it's a byte string, not a Unicode object. You might also want to wrap it with amara.lib.inputsource.text if it's not obviously XML or HTML (for example it could be confused with a file name)\n\n from amara.lib import inputsource\n from amara.bindery import html\n doc = html.markup_fragment(inputsource.text('XXX<html><body onload=\"\" color=\"white\"><p>Spam!<p>Eggs!</body></html>YYY'))\n\n See also: http://wiki.xml3k.org/Amara2/Tagsoup"}
{"_id": "q_17153", "text": "Insert data as text in the current node, positioned before the\n start of node insertBefore or to the end of the node's text."}
{"_id": "q_17154", "text": "A script that melody calls with each valid set of options. This\n script runs the required code and returns the results."}
{"_id": "q_17155", "text": "Recursively compute intersection of data. For dictionaries, items\n for specific keys will be reduced to unique items. For lists, items\n will be reduced to unique items. This method is meant to be analogous\n to set.intersection for composite objects.\n\n Args:\n other (composite): Other composite object to intersect with.\n recursive (bool): Whether or not to perform the operation recursively,\n for all nested composite objects."}
{"_id": "q_17156", "text": "Recursively compute union of data. For dictionaries, items\n for specific keys will be combined into a list, depending on the\n status of the overwrite= parameter. For lists, items will be appended\n and reduced to unique items. This method is meant to be analogous\n to set.union for composite objects.\n\n Args:\n other (composite): Other composite object to union with.\n recursive (bool): Whether or not to perform the operation recursively,\n for all nested composite objects.\n overwrite (bool): Whether or not to overwrite entries with the same\n key in a nested dictionary."}
{"_id": "q_17157", "text": "Append to object, if object is list."}
{"_id": "q_17158", "text": "Extend list from object, if object is list."}
{"_id": "q_17159", "text": "Write composite object to file handle in JSON format.\n\n Args:\n fh (file): File handle to write to.\n pretty (bool): Sort keys and indent in output."}
{"_id": "q_17160", "text": "Prune leaves of filetree according to specified\n regular expression.\n\n Args:\n regex (str): Regular expression to use in pruning tree."}
{"_id": "q_17161", "text": "Returns the value this reference is pointing to. This method uses 'ctx' to resolve the reference and return\n the value this reference references.\n If the call was already made, it returns a cached result.\n It also makes sure there's no cyclic reference, and if so raises CyclicReferenceError."}
{"_id": "q_17162", "text": "delete - Delete all objects in this list.\n\n\t\t\t@return <int> - Number of objects deleted"}
{"_id": "q_17163", "text": "Get settings from config file."}
{"_id": "q_17164", "text": "Create event in calendar with sms reminder."}
{"_id": "q_17165", "text": "Processing notification call main function."}
{"_id": "q_17166", "text": "Return the extension of fpath.\n\n Parameters\n ----------\n fpath: string\n File name or path\n\n check_if_exists: bool\n\n Returns\n -------\n str\n The extension of the file name or path"}
{"_id": "q_17167", "text": "Remove the files in workdir that have the given extension.\n\n Parameters\n ----------\n workdir:\n Folder path from where to clean the files.\n\n extension: str\n File extension without the dot, e.g., 'txt'"}
{"_id": "q_17168", "text": "Convert a CSV file in `csv_filepath` into a JSON file in `json_filepath`.\n\n Parameters\n ----------\n csv_filepath: str\n Path to the input CSV file.\n\n json_filepath: str\n Path to the output JSON file. Will be overwritten if exists.\n\n fieldnames: List[str]\n Names of the fields in the CSV file.\n\n ignore_first_line: bool"}
{"_id": "q_17169", "text": "Renders as a str"}
{"_id": "q_17170", "text": "Returns the elements HTML start tag"}
{"_id": "q_17171", "text": "Returns a repr of an object and falls back to a minimal representation of type and ID if the call to repr raised\n an error.\n\n :param obj: object to safe repr\n :returns: repr string or '(type<id> repr error)' string\n :rtype: str"}
{"_id": "q_17172", "text": "Match a genome VCF to variants in the ClinVar VCF file\n\n Acts as a generator, yielding tuples of:\n (ClinVarVCFLine, ClinVarAllele, zygosity)\n\n 'zygosity' is a string and corresponds to the genome's zygosity for that\n ClinVarAllele. It can be either: 'Het' (heterozygous), 'Hom' (homozygous),\n or 'Hem' (hemizygous, e.g. X chromosome in XY individuals)."}
{"_id": "q_17173", "text": "If next tag is link with same href, combine them."}
{"_id": "q_17174", "text": "See if span tag has bold style and wrap with strong tag."}
{"_id": "q_17175", "text": "Reject attributes not defined in ATTR_WHITELIST."}
{"_id": "q_17176", "text": "Get unicode string without any other content transformation,\n and clean extra spaces"}
{"_id": "q_17177", "text": "Extract \"real\" URL from Google redirected url by getting `q`\n querystring parameter."}
{"_id": "q_17178", "text": "Return Allele data as dict object."}
{"_id": "q_17179", "text": "Create list of Alleles from VCF line data"}
{"_id": "q_17180", "text": "Dict representation of parsed VCF data"}
{"_id": "q_17181", "text": "Convert data to json string representation.\n\n Returns:\n json representation as string."}
{"_id": "q_17182", "text": "_toStorage - Convert the value to a string representation for storage.\n\n\t\t\t@param value - The value of the item to convert\n\t\t\t@return A string value suitable for storing."}
{"_id": "q_17183", "text": "Returns absolute paths of files that match the regex within folder_path and\n all its children folders.\n\n Note: The regex matching is done using the match function\n of the re module.\n\n Parameters\n ----------\n folder_path: string\n\n regex: string\n\n Returns\n -------\n A list of strings."}
{"_id": "q_17184", "text": "Yields one string, concatenation of argument strings"}
{"_id": "q_17185", "text": "Yields one boolean, whether the first string contains the second"}
{"_id": "q_17186", "text": "Yields one number"}
{"_id": "q_17187", "text": "Yields the result of applying an expression to each item in the input sequence.\n\n * seq: input sequence\n * expr: expression to be converted to string, then dynamically evaluated for each item on the sequence to produce the result"}
{"_id": "q_17188", "text": "Replace known special characters to SVG code.\n\n Parameters\n ----------\n svg_content: str\n\n Returns\n -------\n corrected_svg: str\n Corrected SVG content"}
{"_id": "q_17189", "text": "Merge all the PDF files in `pdf_filepaths` in a new PDF file `out_filepath`.\n\n Parameters\n ----------\n pdf_filepaths: list of str\n Paths to PDF files.\n\n out_filepath: str\n Path to the result PDF file.\n\n Returns\n -------\n path: str\n The output file path."}
{"_id": "q_17190", "text": "Navigate an open ftplib.FTP to appropriate directory for ClinVar VCF files.\n\n Args:\n ftp: (type: ftplib.FTP) an open connection to ftp.ncbi.nlm.nih.gov\n build: (type: string) genome build, either 'b37' or 'b38'"}
{"_id": "q_17191", "text": "Return ClinVarAllele data as dict object."}
{"_id": "q_17192", "text": "Parse frequency data in ClinVar VCF"}
{"_id": "q_17193", "text": "Parse alleles for ClinVar VCF, overrides parent method."}
{"_id": "q_17194", "text": "Returns back a class decorator that enables registering Blox to this factory"}
{"_id": "q_17195", "text": "make some basic checks on the inputs to make sure they are valid"}
{"_id": "q_17196", "text": "make some basic checks on the function to make sure it is valid"}
{"_id": "q_17197", "text": "internal recursion routine called by the run method that generates\n all input combinations"}
{"_id": "q_17198", "text": "setDefaultRedisConnectionParams - Sets the default parameters used when connecting to Redis.\n\n\t\t This should be the args to redis.Redis in dict (kwargs) form.\n\n\t\t @param connectionParams <dict> - A dict of connection parameters.\n\t\t Common keys are:\n\n\t\t host <str> - hostname/ip of Redis server (default '127.0.0.1')\n\t\t port <int> - Port number\t\t\t(default 6379)\n\t\t db <int> - Redis DB number\t\t(default 0)\n\n\t\t Omitting any of those keys will ensure the default value listed is used.\n\n\t\t This connection info will be used by default for all connections to Redis, unless explicitly set otherwise.\n\t\t The common way to override is to define REDIS_CONNECTION_PARAMS on a model, or use AltConnectedModel = MyModel.connectAlt( PARAMS )\n\n\t\t Any omitted fields in these connection overrides will inherit the value from the global default.\n\n\t\t For example, if your global default connection params define host = 'example.com', port=15000, and db=0, \n\t\t and then one of your models has\n\t\t \n\t\t REDIS_CONNECTION_PARAMS = { 'db' : 1 }\n\t\t \n\t\t as an attribute, then that model's connection will inherit host='example.com' and port=15000 but override db and use db=1\n\n\n\t\t NOTE: Calling this function will clear the connection_pool attribute of all stored managed connections, disconnect all managed connections,\n\t\t and close-out the connection pool.\n\t\t It may not be safe to call this function while other threads are potentially hitting Redis (not that it would make sense anyway...)\n\n\t\t @see clearRedisPools for more info"}
{"_id": "q_17199", "text": "getRedisPool - Returns and possibly also creates a Redis connection pool\n\t\t\tbased on the REDIS_CONNECTION_PARAMS passed in.\n\n\t\t\tThe goal of this method is to keep a small connection pool rolling\n\t\t\tto each unique Redis instance, otherwise during network issues etc\n\t\t\tpython-redis will leak connections and in short-order can exhaust\n\t\t\tall the ports on a system. There's probably also some minor\n\t\t\tperformance gain in sharing Pools.\n\n\t\t\tWill modify \"params\", if \"host\" and/or \"port\" are missing, will fill\n\t\t\tthem in with defaults, and prior to return will set \"connection_pool\"\n\t\t\ton params, which will allow immediate return on the next call,\n\t\t\tand allow access to the pool directly from the model object.\n\n\t\t\t@param params <dict> - REDIS_CONNECTION_PARAMS - kwargs to redis.Redis\n\n\t\t\t@return redis.ConnectionPool corresponding to this unique server."}
{"_id": "q_17200", "text": "pprint - Pretty-print a dict representation of this object.\n\n\t\t\t@param stream <file/None> - Either a stream to output, or None to default to sys.stdout"}
{"_id": "q_17201", "text": "hasUnsavedChanges - Check if any unsaved changes are present in this model, or if it has never been saved.\n\n\t\t\t@param cascadeObjects <bool> default False, if True will check if any foreign linked objects themselves have unsaved changes (recursively).\n\t\t\t\tOtherwise, will just check if the pk has changed.\n\n\t\t\t@return <bool> - True if any fields have changed since last fetch, or if never saved. Otherwise, False"}
{"_id": "q_17202", "text": "save - Save this object.\n\t\t\t\n\t\t\tWill perform an \"insert\" if this object had not been saved before,\n\t\t\t otherwise will update JUST the fields changed on THIS INSTANCE of the model.\n\n\t\t\t i.e. If you have two processes fetch the same object and change different fields, they will not overwrite\n\t\t\t eachother, but only save the ones each process changed.\n\n\t\t\tIf you want to save multiple objects of type MyModel in a single transaction,\n\t\t\tand you have those objects in a list, myObjs, you can do the following:\n\n\t\t\t\tMyModel.saver.save(myObjs)\n\n\t\t\t@param cascadeSave <bool> Default True - If True, any Foreign models linked as attributes that have been altered\n\t\t\t or created will be saved with this object. If False, only this object (and the reference to an already-saved foreign model) will be saved.\n\n\t\t\t@see #IndexedRedisSave.save\n\n\t\t\t@return <list> - Single element list, id of saved object (if successful)"}
{"_id": "q_17203", "text": "hasSameValues - Check if this and another model have the same fields and values.\n\n\t\t\tThis does NOT include id, so the models can have the same values but be different objects in the database.\n\n\t\t\t@param other <IndexedRedisModel> - Another model\n\n\t\t\t@param cascadeObject <bool> default True - If True, foreign link values with changes will be considered a difference.\n\t\t\t\tOtherwise, only the immediate values are checked.\n\n\t\t\t@return <bool> - True if all fields have the same value, otherwise False"}
{"_id": "q_17204", "text": "saveToExternal - Saves this object to a different Redis than that specified by REDIS_CONNECTION_PARAMS on this model.\n\n\t\t\t@param redisCon <dict/redis.Redis> - Either a dict of connection params, a la REDIS_CONNECTION_PARAMS, or an existing Redis connection.\n\t\t\t\tIf you are doing a lot of bulk copies, it is recommended that you create a Redis connection and pass it in rather than establish a new\n\t\t\t\tconnection with each call.\n\n\t\t\t@note - You will generate a new primary key relative to the external Redis environment. If you need to reference a \"shared\" primary key, it is better\n\t\t\t\t\tto use an indexed field than the internal pk."}
{"_id": "q_17205", "text": "reload - Reload this object from the database, overriding any local changes and merging in any updates.\n\n\n\t\t @param cascadeObjects <bool> Default True. If True, foreign-linked objects will be reloaded if their values have changed\n\t\t since last save/fetch. If False, only if the pk changed will the foreign linked objects be reloaded.\n\n @raises KeyError - if this object has not been saved (no primary key)\n\n @return - Dict with the keys that were updated. Key is field name that was updated,\n\t\t and value is tuple of (old value, new value). \n\n\t\t NOTE: Currently, this will cause a fetch of all Foreign Link objects, one level"}
{"_id": "q_17206", "text": "copyModel - Copy this model, and return that copy.\n\n\t\t\t The copied model will have all the same data, but will have a fresh instance of the FIELDS array and all members,\n\t\t\t and the INDEXED_FIELDS array.\n\t\t\t \n\t\t\t This is useful for converting, like changing field types or whatever, where you can load from one model and save into the other.\n\n\t\t\t@return <IndexedRedisModel> - A copy class of this model class with a unique name."}
{"_id": "q_17207", "text": "connectAlt - Create a class of this model which will use an alternate connection than the one specified by REDIS_CONNECTION_PARAMS on this model.\n\n\t\t\t@param redisConnectionParams <dict> - Dictionary of arguments to redis.Redis, same as REDIS_CONNECTION_PARAMS.\n\n\t\t\t@return - A class that can be used in all the same ways as the existing IndexedRedisModel, but that connects to a different instance.\n\n\t\t\t The fields and key will be the same here, but the connection will be different. use #copyModel if you want an independent class for the model"}
{"_id": "q_17208", "text": "_get_new_connection - Get a new connection\n\t\t\tinternal"}
{"_id": "q_17209", "text": "_get_connection - Maybe get a new connection, or reuse if passed in.\n\t\t\t\tWill share a connection with a model\n\t\t\tinternal"}
{"_id": "q_17210", "text": "_rem_id_from_index - Removes an id from an index\n\t\t\tinternal"}
{"_id": "q_17211", "text": "_peekNextID - Look at, but don't increment the primary key for this model.\n\t\t\t\tInternal.\n\n\t\t\t@return int - next pk"}
{"_id": "q_17212", "text": "Internal for handling filters; the guts of .filter and .filterInline"}
{"_id": "q_17213", "text": "count - gets the number of records matching the filter criteria\n\n\t\t\tExample:\n\t\t\t\ttheCount = Model.objects.filter(field1='value').count()"}
{"_id": "q_17214", "text": "exists - Tests whether a record holding the given primary key exists.\n\n\t\t\t@param pk - Primary key (see getPk method)\n\n\t\t\tExample usage: Waiting for an object to be deleted without fetching the object or running a filter. \n\n\t\t\tThis is a very cheap operation.\n\n\t\t\t@return <bool> - True if object with given pk exists, otherwise False"}
{"_id": "q_17215", "text": "all - Get the underlying objects which match the filter criteria.\n\n\t\t\tExample: objs = Model.objects.filter(field1='value', field2='value2').all()\n\n\t\t\t@param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\t\t\t@return - Objects of the Model instance associated with this query."}
{"_id": "q_17216", "text": "allOnlyFields - Get the objects which match the filter criteria, only fetching given fields.\n\n\t\t\t@param fields - List of fields to fetch\n\n\t\t\t@param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\n\t\t\t@return - Partial objects with only the given fields fetched"}
{"_id": "q_17217", "text": "allOnlyIndexedFields - Get the objects which match the filter criteria, only fetching indexed fields.\n\n\t\t\t@return - Partial objects with only the indexed fields fetched"}
{"_id": "q_17218", "text": "Random - Returns a random record in current filterset.\n\n\n\t\t\t@param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\t\t\t@return - Instance of Model object, or None if no items match current filters"}
{"_id": "q_17219", "text": "delete - Deletes all entries matching the filter criteria"}
{"_id": "q_17220", "text": "get - Get a single value with the internal primary key.\n\n\n\t\t\t@param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\t\t\t@param pk - internal primary key (can be found via .getPk() on an item)"}
{"_id": "q_17221", "text": "getOnlyFields - Gets only certain fields from a particular primary key. For working on entire filter set, see allOnlyFields\n\n\t\t\t@param pk <int> - Primary Key\n\n\t\t\t@param fields list<str> - List of fields\n\n\t\t\t@param cascadeFetch <bool> Default False, If True, all Foreign objects associated with this model\n\t\t\t will be fetched immediately. If False, foreign objects will be fetched on-access.\n\n\n\t\t\treturn - Partial objects with only fields applied"}
{"_id": "q_17222", "text": "deleteByPk - Delete object associated with given primary key"}
{"_id": "q_17223", "text": "deleteMultiple - Delete multiple objects\n\n\t\t\t@param objs - List of objects\n\n\t\t\t@return - Number of objects deleted"}
{"_id": "q_17224", "text": "deleteMultipleByPks - Delete multiple objects given their primary keys\n\n\t\t\t@param pks - List of primary keys\n\n\t\t\t@return - Number of objects deleted"}
{"_id": "q_17225", "text": "create an input file using jinja2 by filling a template\n with the values from the option variable passed in."}
{"_id": "q_17226", "text": "Returns a blox template from an html string"}
{"_id": "q_17227", "text": "Returns a blox template from a file stream object"}
{"_id": "q_17228", "text": "Returns a blox template from a valid file path"}
{"_id": "q_17229", "text": "Cast an arbitrary object or sequence to a string type"}
{"_id": "q_17230", "text": "Cast an arbitrary sequence to a boolean type"}
{"_id": "q_17231", "text": "Accumulate all dictionary and named arguments as\n keyword argument dictionary. This is generally useful for\n functions that try to automatically resolve inputs.\n\n Examples:\n >>> @keywords\n >>> def test(*args, **kwargs):\n >>> return kwargs\n >>>\n >>> print test({'one': 1}, two=2)\n {'one': 1, 'two': 2}"}
{"_id": "q_17232", "text": "Modify the encoding entry in the XML file.\n\n Parameters\n ----------\n filepath: str\n Path to the file to be modified.\n\n src_enc: str\n Encoding that is written in the file\n\n dst_enc: str\n Encoding to be set in the file."}
{"_id": "q_17233", "text": "getCompressMod - Return the module used for compression on this field\n\n\t\t\t@return <module> - The module for compression"}
{"_id": "q_17234", "text": "Save `text` in a qrcode svg image file.\n\n Parameters\n ----------\n text: str\n The string to be codified in the QR image.\n\n out_filepath: str\n Path to the output file\n\n color: str\n A RGB color expressed in 6 hexadecimal values.\n\n box_size: scalar\n Size of the QR code boxes."}
{"_id": "q_17235", "text": "Set the gromacs input data using the supplied input options, run\n gromacs and extract and return the required outputs."}
{"_id": "q_17236", "text": "Call CLI command with arguments and returns its return value.\n\n Parameters\n ----------\n cmd_name: str\n Command name or full path to the binary file.\n\n arg_strings: str\n Argument strings list.\n\n Returns\n -------\n return_value\n Command return value."}
{"_id": "q_17237", "text": "toBytes - Convert a value to bytes using the encoding specified on this field\n\n\t\t\t@param value <str> - The field to convert to bytes\n\n\t\t\t@return <bytes> - The object encoded using the codec specified on this field.\n\n\t\t\tNOTE: This method may go away."}
{"_id": "q_17238", "text": "Call PDFLatex to convert TeX files to PDF.\n\n Parameters\n ----------\n tex_file: str\n Path to the input LaTeX file.\n\n output_file: str\n Path to the output PDF file.\n If None, will use the same output directory as the tex_file.\n\n output_format: str\n Output file format. Choices: 'pdf' or 'dvi'. Default: 'pdf'\n\n Returns\n -------\n return_value\n PDFLatex command call return value."}
{"_id": "q_17239", "text": "Like functools.partial but instead of using the new kwargs, keeps the old ones."}
{"_id": "q_17240", "text": "Create an OrderedDict\n\n :param hierarchy: a dictionary\n :param level: single key\n :return: deeper dictionary"}
{"_id": "q_17241", "text": "Returns a transformed Geometry.\n\n Arguments:\n geom -- any coercible Geometry value or Envelope\n to_sref -- SpatialReference or EPSG ID as int"}
{"_id": "q_17242", "text": "Expands this envelope by the given Envelope or tuple.\n\n Arguments:\n other -- Envelope, two-tuple, or four-tuple"}
{"_id": "q_17243", "text": "Returns the intersection of this and another Envelope."}
{"_id": "q_17244", "text": "Creates a table from arrays Z, N and M\n\n Example:\n ________\n\n >>> Z = [82, 82, 83]\n >>> N = [126, 127, 130]\n >>> M = [-21.34, -18.0, -14.45]\n >>> Table.from_ZNM(Z, N, M, name='Custom Table')\n Z N\n 82 126 -21.34\n 127 -18.00\n 83 130 -14.45\n Name: Custom Table, dtype: float64"}
{"_id": "q_17245", "text": "Selects nuclei according to a condition on Z,N or M\n\n Parameters\n ----------\n condition : function,\n Can have one of the signatures f(M), f(Z,N) or f(Z, N, M)\n must return a boolean value\n name: string, optional name for the resulting Table\n\n Example:\n --------\n Select all nuclei with A > 160:\n\n >>> A_gt_160 = lambda Z,N: Z + N > 160\n >>> Table('AME2003').select(A_gt_160)"}
{"_id": "q_17246", "text": "Return a selection of the Table at positions given by ``nuclei``\n\n Parameters\n ----------\n nuclei: list of tuples\n A list where each element is tuple of the form (Z,N)\n\n Example\n -------\n Return binding energies at magic nuclei:\n\n >>> magic_nuclei = [(20,28), (50,50), (50,82), (82,126)]\n >>> Table('AME2012').binding_energy.at(magic_nuclei)\n Z N\n 20 28 416.014215\n 50 50 825.325172\n 82 1102.876416\n 82 126 1636.486450"}
{"_id": "q_17247", "text": "Select nuclei which also belong to ``table``\n\n Parameters\n ----------\n table: Table, Table object\n\n Example:\n ----------\n Table('AME2003').intersection(Table('AME1995'))"}
{"_id": "q_17248", "text": "Select nuclei not in table\n\n Parameters\n ----------\n table: Table, Table object from where nuclei should be removed\n\n Example:\n ----------\n Find the new nuclei in AME2003 with Z,N >= 8:\n\n >>> Table('AME2003').not_in(Table('AME1995'))[8:,8:].count\n 389"}
{"_id": "q_17249", "text": "Selects odd-even nuclei from the table"}
{"_id": "q_17250", "text": "Selects even-even nuclei from the table"}
{"_id": "q_17251", "text": "Calculate error difference\n\n Parameters\n ----------\n relative_to : string,\n a valid mass table name.\n\n Example:\n ----------\n >>> Table('DUZU').error(relative_to='AME2003')"}
{"_id": "q_17252", "text": "Calculate root mean squared error\n\n Parameters\n ----------\n relative_to : string,\n a valid mass table name.\n\n Example:\n ----------\n >>> template = '{0:10}|{1:^6.2f}|{2:^6.2f}|{3:^6.2f}'\n >>> print 'Model ', 'AME95 ', 'AME03 ', 'AME12 ' # Table header\n ... for name in Table.names:\n ... print template.format(name, Table(name).rmse(relative_to='AME1995'),\n ... Table(name).rmse(relative_to='AME2003'),\n ... Table(name).rmse(relative_to='AME2012'))\n Model AME95 AME03 AME12\n AME2003 | 0.13 | 0.00 | 0.13\n AME2003all| 0.42 | 0.40 | 0.71\n AME2012 | 0.16 | 0.13 | 0.00\n AME2012all| 0.43 | 0.43 | 0.69\n AME1995 | 0.00 | 0.13 | 0.16\n AME1995all| 0.00 | 0.17 | 0.21\n DUZU | 0.52 | 0.52 | 0.76\n FRDM95 | 0.79 | 0.78 | 0.95\n KTUY05 | 0.78 | 0.77 | 1.03\n ETFSI12 | 0.84 | 0.84 | 1.04\n HFB14 | 0.84 | 0.83 | 1.02"}
{"_id": "q_17253", "text": "Return 2 neutron separation energy"}
{"_id": "q_17254", "text": "Return 1 neutron separation energy"}
{"_id": "q_17255", "text": "Return 2 proton separation energy"}
{"_id": "q_17256", "text": "Return 1 proton separation energy"}
{"_id": "q_17257", "text": "Helper function for derived quantities"}
{"_id": "q_17258", "text": "Create a numpy.ndarray with all observed fields and\n computed teff and luminosity values."}
{"_id": "q_17259", "text": "Return the numpy array with rounded teff and luminosity columns."}
{"_id": "q_17260", "text": "Given a cluster create a Bokeh plot figure using the\n cluster's image."}
{"_id": "q_17261", "text": "Returns rounded teff and luminosity lists."}
{"_id": "q_17262", "text": "Given a cluster create a Bokeh plot figure creating an\n H-R diagram."}
{"_id": "q_17263", "text": "Given a numpy array calculate what the ranges of the H-R\n diagram should be."}
{"_id": "q_17264", "text": "Given a numpy array create a Bokeh plot figure creating an\n H-R diagram."}
{"_id": "q_17265", "text": "Filter the cluster data catalog into the filtered_data\n catalog, which is what is shown in the H-R diagram.\n\n Filter on the values of the sliders, as well as the lasso\n selection in the skyviewer."}
{"_id": "q_17266", "text": "This function gives the user a way to change the data that is given as input."}
{"_id": "q_17267", "text": "Performs a bruteforce for the given users, password, domain on the given host."}
{"_id": "q_17268", "text": "Computes the key from the salt and the master password."}
{"_id": "q_17269", "text": "Initialize a database.\n\n :param database_path: The absolute path to the database to initialize."}
{"_id": "q_17270", "text": "Modify an existing domain.\n\n :param domain_name: The name of the domain to modify.\n :param new_salt: Whether to generate a new salt for the domain.\n :param username: If given, change domain username to this value.\n :returns: The modified :class:`Domain <pwm.core.Domain>` object."}
{"_id": "q_17271", "text": "Create a new domain entry in the database.\n\n :param username: The username to associate with this domain.\n :param alphabet: A character set restriction to impose on keys generated for this domain.\n :param length: The length of the generated key, in case of restrictions on the site."}
{"_id": "q_17272", "text": "Set the access and modified times of the file specified by path."}
{"_id": "q_17273", "text": "Strip \\\\?\\ prefix in init phase"}
{"_id": "q_17274", "text": "Return the path always without the \\\\?\\ prefix."}
{"_id": "q_17275", "text": "Print the given line to stdout"}
{"_id": "q_17276", "text": "Extract messages from Handlebars templates.\n\n It returns an iterator yielding tuples in the following form ``(lineno,\n funcname, message, comments)``.\n\n TODO: Things to improve:\n --- Return comments"}
{"_id": "q_17277", "text": "Strips labels."}
{"_id": "q_17278", "text": "Remove namespace in the passed document in place."}
{"_id": "q_17279", "text": "Returns a GDAL virtual filesystem prefixed path.\n\n Arguments:\n path -- file path as str"}
{"_id": "q_17280", "text": "Check to see if this URI is retrievable by this Retriever implementation\n\n :param uri: the URI of the resource to be retrieved\n :type uri: str\n :return: True if it can be, False if not\n :rtype: bool"}
{"_id": "q_17281", "text": "Subscribes `callable` to listen to events of `name` type. The\n parameters passed to `callable` are dependent on the specific\n event being triggered."}
{"_id": "q_17282", "text": "Configures this engine based on the options array passed into\n `argv`. If `argv` is ``None``, then ``sys.argv`` is used instead.\n During configuration, the command line options are merged with\n previously stored values. Then the logging subsystem and the\n database model are initialized, and all storable settings are\n serialized to configurations files."}
{"_id": "q_17283", "text": "Handle provided columns and if necessary, convert columns to a list for \n internal storage.\n\n :columns: A sequence of columns for the table. Can be list, comma\n -delimited string, or IntEnum."}
{"_id": "q_17284", "text": "Execute a DML query \n\n :sql_string: An SQL string template\n :*args: Arguments to be passed for query parameters.\n :commit: Whether or not to commit the transaction after the query\n :returns: Psycopg2 result"}
{"_id": "q_17285", "text": "Execute a SELECT statement \n\n :sql_string: An SQL string template\n :columns: A list of columns to be returned by the query\n :*args: Arguments to be passed for query parameters.\n :returns: Psycopg2 result"}
{"_id": "q_17286", "text": "Retrieve a single record from the table. Lots of reasons this might be\n best implemented in the model\n\n :pk: The primary key ID for the record\n :returns: List of single result"}
{"_id": "q_17287", "text": "Creates the final payload based on the x86 and x64 meterpreters."}
{"_id": "q_17288", "text": "Combines the files 1 and 2 into 3."}
{"_id": "q_17289", "text": "Starts the exploiting phase, you should run setup before running this function.\n if auto is set, this function will fire the exploit to all systems. Otherwise a curses interface is shown."}
{"_id": "q_17290", "text": "Exploits a single ip, exploit is based on the given operating system."}
{"_id": "q_17291", "text": "Create server instance with an optional WebSocket handler\n\n For pure WebSocket server ``app`` may be ``None`` but an attempt to access\n any path other than ``ws_path`` will cause server error.\n \n :param host: hostname or IP\n :type host: str\n :param port: server port\n :type port: int\n :param app: WSGI application\n :param server_class: WSGI server class, defaults to AsyncWsgiServer\n :param handler_class: WSGI handler class, defaults to AsyncWsgiHandler\n :param ws_handler_class: WebSocket handler class, defaults to ``None``\n :param ws_path: WebSocket path on the server, defaults to '/ws'\n :type ws_path: str, optional\n :return: initialized server instance"}
{"_id": "q_17292", "text": "Poll active sockets once\n\n This method can be used to allow aborting server polling loop\n on some condition.\n\n :param timeout: polling timeout"}
{"_id": "q_17293", "text": "Start serving HTTP requests\n\n This method blocks the current thread.\n\n :param poll_interval: polling timeout\n :return:"}
{"_id": "q_17294", "text": "write triples to file."}
{"_id": "q_17295", "text": "Prints an overview of the tags of the hosts."}
{"_id": "q_17296", "text": "Starts the loop to provide the data from jackal."}
{"_id": "q_17297", "text": "Creates a search query based on the section of the config file."}
{"_id": "q_17298", "text": "Creates the workers based on the given configfile to provide named pipes in the directory."}
{"_id": "q_17299", "text": "Loads the config and handles the workers."}
{"_id": "q_17300", "text": "Replace isocode by its language equivalent\n\n :param isocode: Three character long language code\n :param lang: Lang in which to return the language name\n :return: Full Text Language Name"}
{"_id": "q_17301", "text": "Take a string of form %citation_type|passage% and format it for human\n\n :param string: String of formation %citation_type|passage%\n :param lang: Language to translate to\n :return: Human Readable string\n\n .. note :: To Do : Use i18n tools and provide real i18n"}
{"_id": "q_17302", "text": "Connect to a service to see if it is a http or https server."}
{"_id": "q_17303", "text": "Imports the given nmap result."}
{"_id": "q_17304", "text": "Start an nmap process with the given args on the given ips."}
{"_id": "q_17305", "text": "Scans the given hosts with nmap."}
{"_id": "q_17306", "text": "Scans available smb services in the database for smb signing and ms17-010."}
{"_id": "q_17307", "text": "Rename endpoint function name to avoid conflict when namespacing is set to true\n\n :param fn_name: Name of the route function\n :param instance: Instance bound to the function\n :return: Name of the new namespaced function name"}
{"_id": "q_17308", "text": "Retrieve the best matching locale using request headers\n\n .. note:: Probably one of the things to enhance quickly.\n\n :rtype: str"}
{"_id": "q_17309", "text": "Retrieve and transform a list of references.\n\n Returns the inventory collection object with its metadata and a callback function taking a level parameter \\\n and returning a list of strings.\n\n :param objectId: Collection Identifier\n :type objectId: str\n :param subreference: Subreference from which to retrieve children\n :type subreference: str\n :param collection: Collection object bearing metadata\n :type collection: Collection\n :param export_collection: Return collection metadata\n :type export_collection: bool\n :return: Returns either the list of references, or the text collection object with its references as tuple\n :rtype: (Collection, [str]) or [str]"}
{"_id": "q_17310", "text": "Retrieve the passage identified by the parameters\n\n :param objectId: Collection Identifier\n :type objectId: str\n :param subreference: Subreference of the passage\n :type subreference: str\n :return: An object bearing metadata and its text\n :rtype: InteractiveTextualNode"}
{"_id": "q_17311", "text": "Generates a SEO friendly string for given collection\n\n :param collection: Collection object to generate string for\n :param parent: Current collection parent\n :return: SEO/URL Friendly string"}
{"_id": "q_17312", "text": "Build an ancestor or descendant dict view based on selected information\n\n :param member: Current Member to build for\n :param collection: Collection from which we retrieved it\n :param lang: Language to express data in\n :return:"}
{"_id": "q_17313", "text": "Build member list for given collection\n\n :param collection: Collection to build dict view of for its members\n :param lang: Language to express data in\n :return: List of basic objects"}
{"_id": "q_17314", "text": "Build parents list for given collection\n\n :param collection: Collection to build dict view of for its members\n :param lang: Language to express data in\n :return: List of basic objects"}
{"_id": "q_17315", "text": "Retrieve the top collections of the inventory\n\n :param lang: Lang in which to express main data\n :type lang: str\n :return: Collections information and template\n :rtype: {str: Any}"}
{"_id": "q_17316", "text": "Text exemplar references browsing route function\n\n :param objectId: Collection identifier\n :type objectId: str\n :param lang: Lang in which to express main data\n :type lang: str\n :return: Template and required information about text with its references"}
{"_id": "q_17317", "text": "Provides a redirect to the first passage of given objectId\n\n :param objectId: Collection identifier\n :type objectId: str\n :return: Redirection to the first passage of given text"}
{"_id": "q_17318", "text": "Retrieve the text of the passage\n\n :param objectId: Collection identifier\n :type objectId: str\n :param lang: Lang in which to express main data\n :type lang: str\n :param subreference: Reference identifier\n :type subreference: str\n :return: Template, collections metadata and Markup object representing the text\n :rtype: {str: Any}"}
{"_id": "q_17319", "text": "Merge and register assets, both as routes and dictionary\n\n :return: None"}
{"_id": "q_17320", "text": "Create blueprint and register rules\n\n :return: Blueprint of the current nemo app\n :rtype: flask.Blueprint"}
{"_id": "q_17321", "text": "Create a view\n\n :param name: Name of the route function to use for the view.\n :type name: str\n :return: Route function which makes use of Nemo context (such as menu informations)\n :rtype: function"}
{"_id": "q_17322", "text": "Retrieve main parent collections of a repository\n\n :param lang: Language to retrieve information in\n :return: Sorted collections representations"}
{"_id": "q_17323", "text": "This function is built to provide cache keys for templates\n\n :param endpoint: Current endpoint\n :param kwargs: Keyword Arguments\n :return: tuple of i18n dependant cache key and i18n ignoring cache key\n :rtype: tuple(str)"}
{"_id": "q_17324", "text": "Render a route template and adds information to this route.\n\n :param template: Template name.\n :type template: str\n :param kwargs: dictionary of named arguments used to be passed to the template\n :type kwargs: dict\n :return: Http Response with rendered template\n :rtype: flask.Response"}
{"_id": "q_17325", "text": "Register the app using Blueprint\n\n :return: Nemo blueprint\n :rtype: flask.Blueprint"}
{"_id": "q_17326", "text": "Register plugins in Nemo instance\n\n - Clear routes first if asked by one plugin\n - Clear assets if asked by one plugin and replace by the last plugin registered static_folder\n - Register each plugin\n - Append plugin routes to registered routes\n - Append plugin filters to registered filters\n - Append templates directory to given namespaces\n - Append assets (CSS, JS, statics) to given resources \n - Append render view (if exists) to Nemo.render stack"}
{"_id": "q_17327", "text": "Handle a list of references depending on the text identifier using the chunker dictionary.\n\n :param text: Text object from which comes the references\n :type text: MyCapytains.resources.texts.api.Text\n :param reffs: List of references to transform\n :type reffs: References\n :return: Transformed list of references\n :rtype: [str]"}
{"_id": "q_17328", "text": "Obtains the data from the pipe and appends the given tag."}
{"_id": "q_17329", "text": "Returns the EPSG ID as int if it exists."}
{"_id": "q_17330", "text": "Creates the section value if it does not exist and sets the value.\n Use write_config to actually set the value."}
{"_id": "q_17331", "text": "This function tries to retrieve the value from the configfile\n otherwise will return a default."}
{"_id": "q_17332", "text": "Returns the configuration directory"}
{"_id": "q_17333", "text": "Write the current config to disk to store them."}
{"_id": "q_17334", "text": "Track the specified remote branch if it is not already tracked."}
{"_id": "q_17335", "text": "Checkout, update and branch from the specified branch."}
{"_id": "q_17336", "text": "write_targets will write the contents of ips and ldap_strings to the targets_file."}
{"_id": "q_17337", "text": "Starts the ntlmrelayx.py and responder processes.\n Assumes you have these programs in your path."}
{"_id": "q_17338", "text": "Function that gets called on each event from pyinotify."}
{"_id": "q_17339", "text": "Watches directory for changes"}
{"_id": "q_17340", "text": "Terminate the processes."}
{"_id": "q_17341", "text": "This function waits for the relay and responder processes to exit.\n Captures KeyboardInterrupt to shutdown these processes."}
{"_id": "q_17342", "text": "Make breadcrumbs for a route\n\n :param kwargs: dictionary of named arguments used to construct the view\n :type kwargs: dict\n :return: List of dict items the view can use to construct the link.\n :rtype: {str: list({ \"link\": str, \"title\", str, \"args\", dict})}"}
{"_id": "q_17343", "text": "This function obtains hosts from core and starts a nessus scan on these hosts.\n The nessus tag is appended to the host tags."}
{"_id": "q_17344", "text": "Creates a scan with the given host ips\n Returns the scan id of the created object."}
{"_id": "q_17345", "text": "Bases the comparison of the datastores on URI alone."}
{"_id": "q_17346", "text": "Route to retrieve annotations by target\n\n :param target_urn: The CTS URN for which to retrieve annotations \n :type target_urn: str\n :return: a JSON string containing count and list of resources\n :rtype: {str: Any}"}
{"_id": "q_17347", "text": "Main entry point for the CLI."}
{"_id": "q_17348", "text": "Initialize loggers."}
{"_id": "q_17349", "text": "Returns the label for a given Enum key"}
{"_id": "q_17350", "text": "Returns the verbose name for a given enum value"}
{"_id": "q_17351", "text": "Update the content of a single file."}
{"_id": "q_17352", "text": "Tries to perform a zone transfer."}
{"_id": "q_17353", "text": "Resolves the list of domains and returns the ips."}
{"_id": "q_17354", "text": "Parses the list of ips, turns these into ranges based on the netmask given.\n Set include_public to True to include public IP addresses."}
{"_id": "q_17355", "text": "Uses the command line arguments to fill the search function and call it."}
{"_id": "q_17356", "text": "Uses the command line arguments to fill the count function and call it."}
{"_id": "q_17357", "text": "Returns a generator that maps the input of the pipe to an elasticsearch object.\n Will call id_to_object if it cannot serialize the data from json."}
{"_id": "q_17358", "text": "Resolves an IP address to a range object, creating it if it doesn't exist."}
{"_id": "q_17359", "text": "Argparser option with search functionality specific for ranges."}
{"_id": "q_17360", "text": "Resolves the given id to a user object, if it doesn't exist it will be created."}
{"_id": "q_17361", "text": "Returns a list that maps the input of the pipe to an elasticsearch object.\n Will call id_to_object if it cannot serialize the data from json."}
{"_id": "q_17362", "text": "Consumes an ET protocol tree and converts it to state.Command commands"}
{"_id": "q_17363", "text": "Returns a dictionary of enabled GDAL Driver metadata keyed by the\n 'ShortName' attribute."}
{"_id": "q_17364", "text": "Returns the gdal.Driver for a path or None based on the file extension.\n\n Arguments:\n path -- file path as str with a GDAL supported file extension"}
{"_id": "q_17365", "text": "Converts an OGR polygon to a 2D NumPy array.\n\n Arguments:\n geom -- OGR Geometry\n size -- array size in pixels as a tuple of (width, height)\n affine -- AffineTransform"}
{"_id": "q_17366", "text": "Returns a Raster instance.\n\n Arguments:\n path -- local or remote path as str or file-like object\n Keyword args:\n mode -- gdal constant representing access mode"}
{"_id": "q_17367", "text": "Returns an in-memory raster initialized from a pixel buffer.\n\n Arguments:\n data -- byte buffer of raw pixel data\n size -- two or three-tuple of (xsize, ysize, bandcount)\n bandtype -- band data type"}
{"_id": "q_17368", "text": "Returns a copied Raster instance.\n\n Arguments:\n source -- the source Raster instance or filepath as str\n dest -- destination filepath as str"}
{"_id": "q_17369", "text": "Returns a dict of driver specific raster creation options.\n\n See GDAL format docs at http://www.gdal.org/formats_list.html"}
{"_id": "q_17370", "text": "Returns a new Raster instance.\n\n gdal.Driver.Create() does not support all formats.\n\n Arguments:\n path -- file object or path as str\n size -- two or three-tuple of (xsize, ysize, bandcount)\n bandtype -- GDAL pixel data type"}
{"_id": "q_17371", "text": "Sets the affine transformation.\n\n Intercepts the gdal.Dataset call to ensure use as a property setter.\n\n Arguments:\n affine -- AffineTransform or six-tuple of geotransformation values"}
{"_id": "q_17372", "text": "Returns an NDArray, optionally subset by spatial envelope.\n\n Keyword args:\n envelope -- coordinate extent tuple or Envelope"}
{"_id": "q_17373", "text": "Returns the underlying ImageDriver instance."}
{"_id": "q_17374", "text": "Returns a MaskedArray using nodata values.\n\n Keyword args:\n geometry -- any geometry, envelope, or coordinate extent tuple"}
{"_id": "q_17375", "text": "Returns raster data bytes for partial or full extent.\n\n Overrides gdal.Dataset.ReadRaster() with the full raster size by\n default."}
{"_id": "q_17376", "text": "Returns a new instance resampled to provided size.\n\n Arguments:\n size -- tuple of x,y image dimensions"}
{"_id": "q_17377", "text": "Save this instance to the path and format provided.\n\n Arguments:\n to -- output path as str, file, or MemFileIO instance\n Keyword args:\n driver -- GDAL driver name as string or ImageDriver"}
{"_id": "q_17378", "text": "Sets the spatial reference.\n\n Intercepts the gdal.Dataset call to ensure use as a property setter.\n\n Arguments:\n sref -- SpatialReference or any format supported by the constructor"}
{"_id": "q_17379", "text": "Returns a new reprojected instance.\n\n Arguments:\n to_sref -- spatial reference as a proj4 or wkt string, or a\n SpatialReference\n Keyword args:\n dest -- filepath as str\n interpolation -- GDAL interpolation type"}
{"_id": "q_17380", "text": "retrieves a named charset, or treats the input as a custom alphabet and uses that"}
{"_id": "q_17381", "text": "gets a chunk from the input data, converts it to a number and\n encodes that number"}
{"_id": "q_17382", "text": "partition the data into chunks and retrieve the chunk at the given index"}
{"_id": "q_17383", "text": "Initializes the indices"}
{"_id": "q_17384", "text": "Cache result of function call."}
{"_id": "q_17385", "text": "Parse the entry into a computer object."}
{"_id": "q_17386", "text": "Parse the file and extract the computers, importing those that resolve into jackal."}
{"_id": "q_17387", "text": "Parses a single entry from the domaindump"}
{"_id": "q_17388", "text": "Make an autocomplete API request\n\n This can be used to find cities and/or hurricanes by name\n\n :param string query: city\n :param string country: restrict search to a specific country. Must be a two letter country code\n :param boolean hurricanes: whether to search for hurricanes or not\n :param boolean cities: whether to search for cities or not\n :param integer timeout: timeout of the api request\n :returns: result of the autocomplete API request\n :rtype: dict"}
{"_id": "q_17389", "text": "Make an API request\n\n :param string key: API key to use\n :param list features: features to request. It must be a subset of :data:`FEATURES`\n :param string query: query to send\n :param integer timeout: timeout of the request\n :returns: result of the API request\n :rtype: dict"}
{"_id": "q_17390", "text": "Try to convert a string to unicode using different encodings"}
{"_id": "q_17391", "text": "Handle HTTP GET requests on an authentication endpoint.\n\n Authentication flow begins when ``params`` has a ``login`` key with a value\n of ``start``. For instance, ``/auth/twitter?login=start``.\n\n :param str provider: A provider to obtain a user ID from.\n :param str request_url: The authentication endpoint/callback.\n :param dict params: GET parameters from the query string.\n :param str token_secret: An app secret to encode/decode JSON web tokens.\n :param str token_cookie: The current JSON web token, if available.\n :return: A dict containing any of the following possible keys:\n\n ``status``: an HTTP status code the server should send\n\n ``redirect``: where the client should be directed to continue the flow\n\n ``set_token_cookie``: contains a JSON web token and should be stored by\n the client and passed in the next call.\n\n ``provider_user_id``: the user ID from the login provider\n\n ``provider_user_name``: the user name from the login provider"}
{"_id": "q_17392", "text": "Method to call to get a serializable object for json.dump or jsonify based on the target\n\n :return: dict"}
{"_id": "q_17393", "text": "index all triples into indexes and return their mappings"}
{"_id": "q_17394", "text": "Transform triple index into a 1-D numpy array."}
{"_id": "q_17395", "text": "Packs a list of triple indexes into a 2D numpy array."}
{"_id": "q_17396", "text": "Uses a union find to find segment."}
{"_id": "q_17397", "text": "Returns the model properties as a dict"}
{"_id": "q_17398", "text": "Create a usable data structure for serializing."}
{"_id": "q_17399", "text": "Catch exceptions with a prompt for post-mortem analysis"}
{"_id": "q_17400", "text": "Clearer data printing"}
{"_id": "q_17401", "text": "Connects to the remote master and continuously receives calls, executes\n them, then returns a response until interrupted."}
{"_id": "q_17402", "text": "Runs a pool of workers which connect to a remote HighFive master and begin\n executing calls."}
{"_id": "q_17403", "text": "Logs an operation done on an entity, possibly with other arguments"}
{"_id": "q_17404", "text": "Logs a new state of an entity"}
{"_id": "q_17405", "text": "Logs an update done on an entity"}
{"_id": "q_17406", "text": "Logs an error"}
{"_id": "q_17407", "text": "Add message to queue and start processing the queue."}
{"_id": "q_17408", "text": "Create the message to turn light on."}
{"_id": "q_17409", "text": "Create the message to turn switch on."}
{"_id": "q_17410", "text": "Scale brightness from 0..255 to 1..32."}
{"_id": "q_17411", "text": "Create the message to turn light or switch off."}
{"_id": "q_17412", "text": "If the queue is not empty, process the queue."}
{"_id": "q_17413", "text": "Send msg to LightwaveRF hub."}
{"_id": "q_17414", "text": "Generates a wrapped adapter for the given object\n\n Parameters\n ----------\n obj : list, buffer, array, or file\n\n Raises\n ------\n ValueError\n If presented with an object that cannot be adapted\n\n Returns\n -------\n CMPH capable adapter"}
{"_id": "q_17415", "text": "Sets the nature of this YearlyFinancials.\n Nature of the balance sheet\n\n :param nature: The nature of this YearlyFinancials.\n :type: str"}
{"_id": "q_17416", "text": "Decorator that provides a dictionary cursor to the calling function\n\n Adds the cursor as the second argument to the calling function\n\n Requires that the function being decorated is an instance of a class or object\n that yields a cursor from a get_cursor(cursor_type=CursorType.DICT) coroutine or provides such an object\n as the first argument in its signature\n\n Yields:\n A client-side dictionary cursor"}
{"_id": "q_17417", "text": "Decorator that provides a cursor to the calling function\n\n Adds the cursor as the second argument to the calling function\n\n Requires that the function being decorated is an instance of a class or object\n that yields a cursor from a get_cursor() coroutine or provides such an object\n as the first argument in its signature\n\n Yields:\n A client-side cursor"}
{"_id": "q_17418", "text": "Decorator that provides a namedtuple cursor to the calling function\n\n Adds the cursor as the second argument to the calling function\n\n Requires that the function being decorated is an instance of a class or object\n that yields a cursor from a get_cursor(cursor_type=CursorType.NAMEDTUPLE) coroutine or provides such an object\n as the first argument in its signature\n\n Yields:\n A client-side namedtuple cursor"}
{"_id": "q_17419", "text": "gives the number of records in the table\n\n Args:\n table: a string indicating the name of the table\n\n Returns:\n an integer indicating the number of records in the table"}
{"_id": "q_17420", "text": "Creates an insert statement with only chosen fields\n\n Args:\n table: a string indicating the name of the table\n values: a dict of fields and values to be inserted\n\n Returns:\n A 'Record' object with table columns as properties"}
{"_id": "q_17421", "text": "Creates an update query with only chosen fields\n Supports only a single field where clause\n\n Args:\n table: a string indicating the name of the table\n values: a dict of fields and values to be inserted\n where_keys: list of dictionary\n example of where keys: [{'name':('>', 'cip'),'url':('=', 'cip.com')},{'type':('<=', 'manufacturer')}]\n where_clause will look like ((name>%s and url=%s) or (type <= %s))\n items within each dictionary get 'AND'-ed and dictionaries themselves get 'OR'-ed\n\n Returns:\n an integer indicating count of rows updated"}
{"_id": "q_17422", "text": "Creates a delete query with where keys\n Supports multiple where clauses combined with AND, OR, or both\n\n Args:\n table: a string indicating the name of the table\n where_keys: list of dictionary\n example of where keys: [{'name':('>', 'cip'),'url':('=', 'cip.com')},{'type':('<=', 'manufacturer')}]\n where_clause will look like ((name>%s and url=%s) or (type <= %s))\n items within each dictionary get 'AND'-ed and dictionaries themselves get 'OR'-ed\n\n Returns:\n an integer indicating count of rows deleted"}
{"_id": "q_17423", "text": "Creates a select query for selective columns with where keys\n Supports multiple where clauses combined with AND, OR, or both\n\n Args:\n table: a string indicating the name of the table\n order_by: a string indicating column name to order the results on\n columns: list of columns to select from\n where_keys: list of dictionary\n limit: the limit on the number of results\n offset: offset on the results\n\n example of where keys: [{'name':('>', 'cip'),'url':('=', 'cip.com')},{'type':('<=', 'manufacturer')}]\n where_clause will look like ((name>%s and url=%s) or (type <= %s))\n items within each dictionary get 'AND'-ed and across dictionaries get 'OR'-ed\n\n Returns:\n A list of 'Record' objects with table columns as properties"}
{"_id": "q_17424", "text": "Run a raw sql query\n\n Args:\n query : query string to execute\n values : tuple of values to be used with the query\n\n Returns:\n result of query as list of named tuple"}
{"_id": "q_17425", "text": "Update values of configuration section with dict.\n\n Args:\n sct_dict (dict): dict indexed with option names. Undefined\n options are discarded.\n conf_arg (bool): if True, only options that can be set in a config\n file are updated."}
{"_id": "q_17426", "text": "Restore default values of options in this section."}
{"_id": "q_17427", "text": "Set the list of config files.\n\n Args:\n config_files (pathlike): path of config files, given in the order\n of reading."}
{"_id": "q_17428", "text": "Iterator over sections, option names, and option values.\n\n This iterator is also implemented at the section level. The two loops\n produce the same output::\n\n for sct, opt, val in conf.opt_vals_():\n print(sct, opt, val)\n\n for sct in conf.sections_():\n for opt, val in conf[sct].opt_vals_():\n print(sct, opt, val)\n\n Yields:\n tuples with sections, option names, and option values."}
{"_id": "q_17429", "text": "Iterator over sections, option names, and option metadata.\n\n This iterator is also implemented at the section level. The two loops\n produce the same output::\n\n for sct, opt, meta in conf.defaults_():\n print(sct, opt, meta.default)\n\n for sct in conf.sections_():\n for opt, meta in conf[sct].defaults_():\n print(sct, opt, meta.default)\n\n Yields:\n tuples with sections, option names, and :class:`Conf` instances\n holding option metadata."}
{"_id": "q_17430", "text": "Create config file.\n\n Create config file in :attr:`config_files_[index]`.\n\n Parameters:\n index(int): index of config file.\n update (bool): if set to True and :attr:`config_files_` already\n exists, its content is read and all the options it sets are\n kept in the produced config file."}
{"_id": "q_17431", "text": "Read config files and set config values accordingly.\n\n Returns:\n (dict, list, list): respectively content of files, list of\n missing/empty files and list of files for which a parsing error\n arose."}
{"_id": "q_17432", "text": "List of cli strings for a given option."}
{"_id": "q_17433", "text": "List of config sections used by a command.\n\n Args:\n cmd (str): command name, set to ``None`` or ``''`` for the bare\n command.\n\n Returns:\n list of str: list of configuration sections used by that command."}
{"_id": "q_17434", "text": "Build command line argument parser.\n\n Returns:\n :class:`argparse.ArgumentParser`: the command line argument parser.\n You probably won't need to use it directly. To parse command line\n arguments and update the :class:`ConfigurationManager` instance\n accordingly, use the :meth:`parse_args` method."}
{"_id": "q_17435", "text": "Write zsh _arguments compdef for a given command.\n\n Args:\n zcf (file): zsh compdef file.\n cmd (str): command name, set to None or '' for bare command.\n grouping (bool): group options (zsh>=5.4).\n add_help (bool): add a help option."}
{"_id": "q_17436", "text": "Write zsh compdef script.\n\n Args:\n path (path-like): desired path of the compdef script.\n cmd (str): command name that should be completed.\n cmds (str): extra command names that should be completed.\n sourceable (bool): if True, the generated file will contain an\n explicit call to ``compdef``, which means it can be sourced\n to activate CLI completion."}
{"_id": "q_17437", "text": "Build a list of all options for a given command.\n\n Args:\n cmd (str): command name, set to None or '' for bare command.\n add_help (bool): add a help option.\n\n Returns:\n list of str: list of CLI option strings."}
{"_id": "q_17438", "text": "Write bash complete script.\n\n Args:\n path (path-like): desired path of the complete script.\n cmd (str): command name that should be completed.\n cmds (str): extra command names that should be completed."}
{"_id": "q_17439", "text": "Starts a new HighFive master at the given host and port, and returns it."}
{"_id": "q_17440", "text": "Called when a remote worker connection has been found. Finishes setting\n up the protocol object."}
{"_id": "q_17441", "text": "Called when a complete line is found from the remote worker. Decodes\n a response object from the line, then passes it to the worker object."}
{"_id": "q_17442", "text": "Called when the connection to the remote worker is broken. Closes the\n worker."}
{"_id": "q_17443", "text": "Called when a job has been found for the worker to run. Sends the job's\n RPC to the remote worker."}
{"_id": "q_17444", "text": "Closes the worker. No more jobs will be handled by the worker, and any\n running job is immediately returned to the job manager."}
{"_id": "q_17445", "text": "Runs a job set which consists of the jobs in an iterable job list."}
{"_id": "q_17446", "text": "Called when a state change has occurred. Waiters are notified that a\n change has occurred."}
{"_id": "q_17447", "text": "Adds a new result."}
{"_id": "q_17448", "text": "If there is still a job in the job iterator, loads it and increments\n the active job count."}
{"_id": "q_17449", "text": "Adds the result of a completed job to the result list, then decrements\n the active job count. If the job set is already complete, the result is\n simply discarded instead."}
{"_id": "q_17450", "text": "Cancels the job set. The job set is immediately finished, and all\n queued jobs are discarded."}
{"_id": "q_17451", "text": "Waits until the job set is finished. Returns immediately if the job set\n is already finished."}
{"_id": "q_17452", "text": "Distributes jobs from the active job set to any waiting get_job\n callbacks."}
{"_id": "q_17453", "text": "Adds a job set to the manager's queue. If there is no job set running,\n it is activated immediately. A new job set handle is returned."}
{"_id": "q_17454", "text": "Returns a job to its source job set to be run again later."}
{"_id": "q_17455", "text": "Called when a job set has been completed or cancelled. If the job set\n was active, the next incomplete job set is loaded from the job set\n queue and is activated."}
{"_id": "q_17456", "text": "Closes the job manager. No more jobs will be assigned, no more job sets\n will be added, and any queued or active job sets will be cancelled."}
{"_id": "q_17457", "text": "This method is used to append content of the `text`\n argument to the `out` argument.\n\n Depending on how many lines are in the text,\n padding can be added to all lines except the first\n one.\n\n Concatenation result is appended to the `out` argument."}
{"_id": "q_17458", "text": "Returns true if the regex matches the object, or a string in the object\n if it is some sort of container.\n\n :param regex: A regex.\n :type regex: ``regex``\n :param obj: An arbitrary object.\n :type obj: ``object``\n\n :rtype: ``bool``"}
{"_id": "q_17459", "text": "Lists all available instances.\n\n :param latest: If true, ignores the cache and grabs the latest list.\n :type latest: ``bool``\n :param filters: Filters to apply to results. A result will only be shown\n if it includes all text in all filters.\n :type filters: [``str``]\n :param exclude: The opposite of filters. Results will be rejected if they\n include any of these strings.\n :type exclude: [``str``]\n :param limit: Maximum number of entries to show (default no maximum).\n :type limit: ``int`` or ``NoneType``\n\n :return: A list of host entries.\n :rtype: ``list`` of :py:class:`HostEntry`"}
{"_id": "q_17460", "text": "Use the environment to get the current region"}
{"_id": "q_17461", "text": "Prints the public dns name of `name`, if it exists.\n\n :param name: The instance name.\n :type name: ``str``"}
{"_id": "q_17462", "text": "Deserialize a HostEntry from a dictionary.\n\n This is more or less the same as calling\n HostEntry(**entry_dict), but is clearer if something is\n missing.\n\n :param entry_dict: A dictionary in the format outputted by to_dict().\n :type entry_dict: ``dict``\n\n :return: A HostEntry object.\n :rtype: ``cls``"}
{"_id": "q_17463", "text": "Sorts a list of entries by the given attribute."}
{"_id": "q_17464", "text": "Returns a representation of the host as a single line, with columns\n joined by ``sep``.\n\n :param additional_columns: Columns to show in addition to defaults.\n :type additional_columns: ``list`` of ``str``\n :param only_show: A specific list of columns to show.\n :type only_show: ``NoneType`` or ``list`` of ``str``\n :param sep: The column separator to use.\n :type sep: ``str``\n\n :rtype: ``str``"}
{"_id": "q_17465", "text": "Returns whether the instance matches the given filter text.\n\n :param _filter: A regex filter. If it starts with `<identifier>:`, then\n the part before the colon will be used as an attribute\n and the part after will be applied to that attribute.\n :type _filter: ``basestring``\n\n :return: True if the entry matches the filter.\n :rtype: ``bool``"}
{"_id": "q_17466", "text": "Returns the best name to display for this host. Uses the instance\n name if available; else just the public IP.\n\n :rtype: ``str``"}
{"_id": "q_17467", "text": "Helper function to traverse an element tree rooted at element, yielding nodes matching the query."}
{"_id": "q_17468", "text": "Given a simplified XPath query string, returns an array of normalized query parts."}
{"_id": "q_17469", "text": "Inserts a new element as a child of this element, before the specified index or sibling.\n\n :param before: An :class:`XmlElement` or a numeric index to insert the new node before\n :param name: The tag name to add\n :param attrs: Attributes for the new tag\n :param data: CDATA for the new tag\n :returns: The newly-created element\n :rtype: :class:`XmlElement`"}
{"_id": "q_17470", "text": "A generator yielding children of this node.\n\n :param name: If specified, only consider elements with this tag name\n :param reverse: If ``True``, children will be yielded in reverse declaration order"}
{"_id": "q_17471", "text": "Returns a canonical path to this element, relative to the root node.\n\n :param include_root: If ``True``, include the root node in the path. Defaults to ``False``."}
{"_id": "q_17472", "text": "Recursively find any descendants of this node with the given tag name. If a tag name is omitted, this will\n yield every descendant node.\n\n :param name: If specified, only consider elements with this tag name\n :returns: A generator yielding descendants of this node"}
{"_id": "q_17473", "text": "Returns the next sibling of this node.\n\n :param name: If specified, only consider elements with this tag name\n :rtype: :class:`XmlElement`"}
{"_id": "q_17474", "text": "Parses the HTML table into a list of dictionaries, each of which\n represents a single observation."}
{"_id": "q_17475", "text": "Calculates cache key based on `args` and `kwargs`.\n `args` and `kwargs` must be instances of hashable types."}
{"_id": "q_17476", "text": "Cache result of function execution into the django cache backend.\n Calculate cache key based on `prefix`, `args` and `kwargs` of the function.\n For using like object method set `method=True`."}
{"_id": "q_17477", "text": "Wrapper around Django's ORM `get` functionality.\n Wrap anything that raises ObjectDoesNotExist exception\n and provide the default value if necessary.\n `default` by default is None. `default` can be any callable;\n if it is callable, it will be called when the ObjectDoesNotExist\n exception is raised."}
{"_id": "q_17478", "text": "Return only the part of the row which should be printed."}
{"_id": "q_17479", "text": "Attach the event time, as unix epoch"}
{"_id": "q_17480", "text": "Configure and return a new logger for hivy modules"}
{"_id": "q_17481", "text": "Implement celery workers using json and redis"}
{"_id": "q_17482", "text": "Return status report"}
{"_id": "q_17483", "text": "Define a configuration section handling config file.\n\n Returns:\n dict of ConfOpt: it defines the 'create', 'update', 'edit' and 'editor'\n configuration options."}
{"_id": "q_17484", "text": "Set options from a list of section.option=value string.\n\n Args:\n conf (:class:`~loam.manager.ConfigurationManager`): the conf to update.\n optstrs (list of str): the list of 'section.option=value' formatted\n string."}
{"_id": "q_17485", "text": "Implement the behavior of a subcmd using config_conf_section\n\n Args:\n conf (:class:`~loam.manager.ConfigurationManager`): it should contain a\n section created with :func:`config_conf_section` function.\n config (str): name of the configuration section created with\n :func:`config_conf_section` function."}
{"_id": "q_17486", "text": "Create completion files for bash and zsh.\n\n Args:\n climan (:class:`~loam.cli.CLIManager`): CLI manager.\n path (path-like): directory in which the config files should be\n created. It is created if it doesn't exist.\n cmd (str): command name that should be completed.\n cmds (str): extra command names that should be completed.\n zsh_sourceable (bool): if True, the generated file will contain an\n explicit call to ``compdef``, which means it can be sourced\n to activate CLI completion."}
{"_id": "q_17487", "text": "Writes a single observation to the output file.\n\n If the ``observation_data`` parameter is a dictionary, it is\n converted to a list to keep a consistent field order (as described\n in the format specification). Otherwise it is assumed that the data\n is a raw record ready to be written to file.\n\n :param observation_data: a single observation as a dictionary or list"}
{"_id": "q_17488", "text": "Takes a dictionary of observation data and converts it to a list\n of fields according to AAVSO visual format specification.\n\n :param cls: current class\n :param observation_data: a single observation as a dictionary"}
{"_id": "q_17489", "text": "Renders a list of columns.\n\n :param columns: A list of columns, where each column is a list of strings.\n :type columns: [[``str``]]\n :param write_borders: Whether to write the top and bottom borders.\n :type write_borders: ``bool``\n :param column_colors: A list of coloring functions, one for each column.\n Optional.\n :type column_colors: [``str`` -> ``str``] or ``NoneType``\n\n :return: The rendered columns.\n :rtype: ``str``"}
{"_id": "q_17490", "text": "Renders a table. A table is a list of rows, each of which is a list\n of arbitrary objects. The `.str` method will be called on each element\n of the row. Jagged tables are ok; in this case, each row will be expanded\n to the maximum row length.\n\n :param table: A list of rows, as described above.\n :type table: [[``object``]]\n :param write_borders: Whether there should be a border on the top and\n bottom. Defaults to ``True``.\n :type write_borders: ``bool``\n :param column_colors: An optional list of coloring *functions* to be\n applied to each cell in each column. If provided,\n the list's length must be equal to the maximum\n number of columns. ``None`` can be mixed in to this\n list so that a selection of columns can be colored.\n :type column_colors: [``str`` -> ``str``] or ``NoneType``\n\n :return: The rendered table.\n :rtype: ``str``"}
{"_id": "q_17491", "text": "Prepare the rows so they're all strings, and all the same length.\n\n :param table: A 2D grid of anything.\n :type table: [[``object``]]\n\n :return: A table of strings, where every row is the same length.\n :rtype: [[``str``]]"}
{"_id": "q_17492", "text": "Returns a function that colors a string with a number from 0 to 255."}
{"_id": "q_17493", "text": "Hashes a string and returns a number between ``min`` and ``max``."}
{"_id": "q_17494", "text": "Returns a random color between min and max."}
{"_id": "q_17495", "text": "Reads stdin, exits with a message if interrupted, EOF, or a quit message.\n\n :return: The entered input. Converts to an integer if possible.\n :rtype: ``str`` or ``int``"}
{"_id": "q_17496", "text": "Verify HTTP header token authentication"}
{"_id": "q_17497", "text": "Flask decorator protecting resources using a token scheme"}
{"_id": "q_17498", "text": "Utility for logbook information injection"}
{"_id": "q_17499", "text": "Runs multiple commands, optionally in parallel. Each command should be\n a dictionary with a 'command' key and optionally 'description' and\n 'write_stdin' keys."}
{"_id": "q_17500", "text": "Return the net work days according to RH's calendar."}
{"_id": "q_17501", "text": "Queries bash to find the path to a command on the system."}
{"_id": "q_17502", "text": "Uses hostname and other info to construct an SSH command."}
{"_id": "q_17503", "text": "Performs an SCP command where the remote_path is the source and the\n local_path is a format string, formatted individually for each host\n being copied from so as to create one or more distinct paths on the\n local system.\n\n :param entries: A list of entries.\n :type entries: ``list`` of :py:class:`HostEntry`\n :param remote_path: The source path on the remote machine(s).\n :type remote_path: ``str``\n :param local_path: A format string for the path on the local machine.\n :type local_path: ``str``\n :param profile: The profile, holding username/idfile info, etc.\n :type profile: :py:class:`Profile`"}
{"_id": "q_17504", "text": "Runs the given command over SSH in parallel on all hosts in `entries`.\n\n :param entries: The host entries to pull the hostnames from.\n :type entries: ``list`` of :py:class:`HostEntry`\n :param username: To use a specific username.\n :type username: ``str`` or ``NoneType``\n :param idfile: The SSH identity file to use, or none.\n :type idfile: ``str`` or ``NoneType``\n :param command: The command to run.\n :type command: ``str``\n :param parallel: If true, commands will be run in parallel.\n :type parallel: ``bool``"}
{"_id": "q_17505", "text": "SSH into to a host.\n\n :param entry: The host entry to pull the hostname from.\n :type entry: :py:class:`HostEntry`\n :param username: To use a specific username.\n :type username: ``str`` or ``NoneType``\n :param idfile: The SSH identity file to use, if supplying a username.\n :type idfile: ``str`` or ``NoneType``\n :param tunnel: Host to tunnel SSH command through.\n :type tunnel: ``str`` or ``NoneType``\n\n :return: An exit status code.\n :rtype: ``int``"}
{"_id": "q_17506", "text": "Takes arguments parsed from argparse and returns a profile."}
{"_id": "q_17507", "text": "Get the name of the view function used to prevent having to set the tag\n manually for every endpoint"}
{"_id": "q_17508", "text": "Extract, transform, and load metadata from Lander-based projects.\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n See http://aiohttp.readthedocs.io/en/stable/client.html.\n github_api_token : `str`\n A GitHub personal API token. See the `GitHub personal access token\n guide`_.\n ltd_product_data : `dict`\n Contents of ``metadata.yaml``, obtained via `download_metadata_yaml`.\n Data for this technote from the LTD Keeper API\n (``GET /products/<slug>``). Usually obtained via\n `lsstprojectmeta.ltd.get_ltd_product`.\n mongo_collection : `motor.motor_asyncio.AsyncIOMotorCollection`, optional\n MongoDB collection. This should be the common MongoDB collection for\n LSST projectmeta JSON-LD records. If provided, the JSON-LD is upserted\n into the MongoDB collection.\n\n Returns\n -------\n metadata : `dict`\n JSON-LD-formatted dictionary.\n\n Raises\n ------\n NotLanderPageError\n Raised when the LTD product cannot be interpreted as a Lander page\n because the ``/metadata.jsonld`` file is absent. This implies that\n the LTD product *could* be of a different format.\n\n .. _`GitHub personal access token guide`: https://ls.st/41d"}
{"_id": "q_17509", "text": "Relate this package component to the supplied part."}
{"_id": "q_17510", "text": "Return a list of parts related to this one via reltype."}
{"_id": "q_17511", "text": "Load relationships from source XML."}
{"_id": "q_17512", "text": "Add a part to the package.\n\n\t\tIt will also add a content-type - by default an override. If\n\t\toverride is False then it will add a content-type for the extension\n\t\tif one isn't already present."}
{"_id": "q_17513", "text": "Load a part into this package based on its relationship type"}
{"_id": "q_17514", "text": "Get the correct content type for a given name"}
{"_id": "q_17515", "text": "given an element, parse out the proper ContentType"}
{"_id": "q_17516", "text": "Converts an Open511 JSON document to XML.\n\n lang: the appropriate language code\n\n Takes a dict deserialized from JSON, returns an lxml Element.\n\n Accepts only the full root-level JSON object from an Open511 response."}
{"_id": "q_17517", "text": "Converts an Open511 JSON fragment to XML.\n\n Takes a dict deserialized from JSON, returns an lxml Element.\n\n This won't provide a conforming document if you pass in a full JSON document;\n it's for translating little fragments, and is mostly used internally."}
{"_id": "q_17518", "text": "Given a dict deserialized from a GeoJSON object, returns an lxml Element\n of the corresponding GML geometry."}
{"_id": "q_17519", "text": "Transform a GEOS or OGR geometry object into an lxml Element\n for the GML geometry."}
{"_id": "q_17520", "text": "Builds a final copy of the token using the given secret key.\n\n :param secret_key(string): The secret key that corresponds to this builder's access key."}
{"_id": "q_17521", "text": "Finds the maximum radius and npnp in the force field.\n\n Returns\n -------\n (max_rad, max_npnp): (float, float)\n Maximum radius and npnp distance in the loaded force field."}
{"_id": "q_17522", "text": "Makes a dictionary containing PyAtomData for the force field.\n\n Returns\n -------\n ff_params_struct_dict: dict\n Dictionary containing PyAtomData structs for the force field\n parameters for each atom in the force field."}
{"_id": "q_17523", "text": "Convert an Open511 document between formats.\n input_doc - either an lxml open511 Element or a deserialized JSON dict\n output_format - short string name of a valid output format, as listed above"}
{"_id": "q_17524", "text": "Get the document content in the specified markup format.\n\n Parameters\n ----------\n format : `str`, optional\n Output format (such as ``'html5'`` or ``'plain'``).\n mathjax : `bool`, optional\n Allow pandoc to use MathJax math markup.\n smart : `True`, optional\n Allow pandoc to create \"smart\" unicode punctuation.\n extra_args : `list`, optional\n Additional command line flags to pass to Pandoc. See\n `lsstprojectmeta.pandoc.convert.convert_text`.\n\n Returns\n -------\n output_text : `str`\n Converted content."}
{"_id": "q_17525", "text": "Get the document abstract in the specified markup format.\n\n Parameters\n ----------\n format : `str`, optional\n Output format (such as ``'html5'`` or ``'plain'``).\n deparagraph : `bool`, optional\n Remove the paragraph tags from single paragraph content.\n mathjax : `bool`, optional\n Allow pandoc to use MathJax math markup.\n smart : `True`, optional\n Allow pandoc to create \"smart\" unicode punctuation.\n extra_args : `list`, optional\n Additional command line flags to pass to Pandoc. See\n `lsstprojectmeta.pandoc.convert.convert_text`.\n\n Returns\n -------\n output_text : `str`\n Converted content or `None` if the title is not available in\n the document."}
{"_id": "q_17526", "text": "Get the document authors in the specified markup format.\n\n Parameters\n ----------\n format : `str`, optional\n Output format (such as ``'html5'`` or ``'plain'``).\n deparagraph : `bool`, optional\n Remove the paragraph tags from single paragraph content.\n mathjax : `bool`, optional\n Allow pandoc to use MathJax math markup.\n smart : `True`, optional\n Allow pandoc to create \"smart\" unicode punctuation.\n extra_args : `list`, optional\n Additional command line flags to pass to Pandoc. See\n `lsstprojectmeta.pandoc.convert.convert_text`.\n\n Returns\n -------\n output_text : `list` of `str`\n Sequence of author names in the specified output markup format."}
{"_id": "q_17527", "text": "Parse the title from TeX source.\n\n Sets these attributes:\n\n - ``_title``\n - ``_short_title``"}
{"_id": "q_17528", "text": "r\"\"\"Parse the author from TeX source.\n\n Sets the ``_authors`` attribute.\n\n Goal is to parse::\n\n \\author{\n A.~Author,\n B.~Author,\n and\n C.~Author}\n\n Into::\n\n ['A. Author', 'B. Author', 'C. Author']"}
{"_id": "q_17529", "text": "r\"\"\"Load the BibTeX bibliography referenced by the document.\n\n This method is triggered by the `bib_db` attribute and populates the\n `_bib_db` private attribute.\n\n The ``\\bibliography`` command is parsed to identify the bibliographies\n referenced by the document."}
{"_id": "q_17530", "text": "r\"\"\"Parse the ``\\date`` command, falling back to getting the\n most recent Git commit date and the current datetime.\n\n Result is available from the `revision_datetime` attribute."}
{"_id": "q_17531", "text": "Create a JSON-LD representation of this LSST LaTeX document.\n\n Parameters\n ----------\n url : `str`, optional\n URL where this document is published to the web. Prefer\n the LSST the Docs URL if possible.\n Example: ``'https://ldm-151.lsst.io'``.\n code_url : `str`, optional\n Path to the document's repository, typically on GitHub.\n Example: ``'https://github.com/lsst/LDM-151'``.\n ci_url : `str`, optional\n Path to the continuous integration service dashboard for this\n document's repository.\n Example: ``'https://travis-ci.org/lsst/LDM-151'``.\n readme_url : `str`, optional\n URL to the document repository's README file. Example:\n ``https://raw.githubusercontent.com/lsst/LDM-151/master/README.rst``.\n license_id : `str`, optional\n License identifier, if known. The identifier should be from the\n listing at https://spdx.org/licenses/. Example: ``CC-BY-4.0``.\n\n Returns\n -------\n jsonld : `dict`\n JSON-LD-formatted dictionary."}
{"_id": "q_17532", "text": "Returns True if database server is running, False otherwise."}
{"_id": "q_17533", "text": "Saves the state of a database to a file.\n\n Parameters\n ----------\n name: str\n the database to be backed up.\n filename: str\n path to a file where database backup will be written."}
{"_id": "q_17534", "text": "Loads state of a backup file to a database.\n\n Note\n ----\n If database name does not exist, it will be created.\n\n Parameters\n ----------\n name: str\n the database to which backup will be restored.\n filename: str\n path to a file contain a postgres database backup."}
{"_id": "q_17535", "text": "Connects the database client shell to the database.\n\n Parameters\n ----------\n expect_module: str\n the database to which backup will be restored."}
{"_id": "q_17536", "text": "Say something in the afternoon"}
{"_id": "q_17537", "text": "Return a zipped package as a readable stream"}
{"_id": "q_17538", "text": "Return a generator yielding each of the segments whose names\n\t\tmatch name."}
{"_id": "q_17539", "text": "Copy objects from one directory in a bucket to another directory in\n the same bucket.\n\n Object metadata is preserved while copying, with the following exceptions:\n\n - If a new surrogate key is provided it will replace the original one.\n - If ``cache_control`` and ``surrogate_control`` values are provided they\n will replace the old ones.\n\n Parameters\n ----------\n bucket_name : `str`\n Name of an S3 bucket.\n src_path : `str`\n Source directory in the S3 bucket. The ``src_path`` should ideally end\n in a trailing `'/'`. E.g. `'dir/dir2/'`.\n dest_path : `str`\n Destination directory in the S3 bucket. The ``dest_path`` should\n ideally end in a trailing `'/'`. E.g. `'dir/dir2/'`. The destination\n path cannot contain the source path.\n aws_access_key_id : `str`\n The access key for your AWS account. Also set\n ``aws_secret_access_key``.\n aws_secret_access_key : `str`\n The secret key for your AWS account.\n aws_profile : `str`, optional\n Name of AWS profile in :file:`~/.aws/credentials`. Use this instead\n of ``aws_access_key_id`` and ``aws_secret_access_key`` for file-based\n credentials.\n surrogate_key : `str`, optional\n The surrogate key to insert in the header of all objects in the\n ``x-amz-meta-surrogate-key`` field. This key is used to purge\n builds from the Fastly CDN when Editions change.\n If `None` then no header will be set.\n If the object already has a ``x-amz-meta-surrogate-key`` header then\n it will be replaced.\n cache_control : `str`, optional\n This sets (and overrides) the ``Cache-Control`` header on the copied\n files. The ``Cache-Control`` header specifically dictates how content\n is cached by the browser (if ``surrogate_control`` is also set).\n surrogate_control : `str`, optional\n This sets (and overrides) the ``x-amz-meta-surrogate-control`` header\n on the copied files. The ``Surrogate-Control``\n or ``x-amz-meta-surrogate-control`` header is used in priority by\n Fastly to govern its caching. This caching policy is *not* passed\n to the browser.\n create_directory_redirect_object : `bool`, optional\n Create a directory redirect object for the root directory. The\n directory redirect object is an empty S3 object named after the\n directory (without a trailing slash) that contains a\n ``x-amz-meta-dir-redirect=true`` HTTP header. LSST the Docs' Fastly\n VCL is configured to redirect requests for a directory path to the\n directory's ``index.html`` (known as *courtesy redirects*).\n\n Raises\n ------\n ltdconveyor.s3.S3Error\n Thrown by any unexpected faults from the S3 API.\n RuntimeError\n Thrown when the source and destination directories are the same."}
{"_id": "q_17540", "text": "Command line entrypoint to reduce technote metadata."}
{"_id": "q_17541", "text": "Run a pipeline to extract, transform, and load metadata for\n multiple LSST the Docs-hosted projects\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n See http://aiohttp.readthedocs.io/en/stable/client.html.\n product_urls : `list` of `str`\n List of LSST the Docs product URLs.\n github_api_token : `str`\n A GitHub personal API token. See the `GitHub personal access token\n guide`_.\n mongo_collection : `motor.motor_asyncio.AsyncIOMotorCollection`, optional\n MongoDB collection. This should be the common MongoDB collection for\n LSST projectmeta JSON-LD records."}
{"_id": "q_17542", "text": "Allows a decorator to be called with or without keyword arguments."}
{"_id": "q_17543", "text": "Upload a directory of files to S3.\n\n This function places the contents of the Sphinx HTML build directory\n into the ``/path_prefix/`` directory of an *existing* S3 bucket.\n Existing files on S3 are overwritten; files that no longer exist in the\n ``source_dir`` are deleted from S3.\n\n Parameters\n ----------\n bucket_name : `str`\n Name of the S3 bucket where documentation is uploaded.\n path_prefix : `str`\n The root directory in the bucket where documentation is stored.\n source_dir : `str`\n Path of the Sphinx HTML build directory on the local file system.\n The contents of this directory are uploaded into the ``/path_prefix/``\n directory of the S3 bucket.\n upload_dir_redirect_objects : `bool`, optional\n A feature flag to enable uploading objects to S3 for every directory.\n These objects contain ``x-amz-meta-dir-redirect=true`` HTTP headers\n that tell Fastly to issue a 301 redirect from the directory object to\n the ``index.html`` in that directory.\n surrogate_key : `str`, optional\n The surrogate key to insert in the header of all objects\n in the ``x-amz-meta-surrogate-key`` field. This key is used to purge\n builds from the Fastly CDN when Editions change.\n If `None` then no header will be set.\n cache_control : `str`, optional\n This sets the ``Cache-Control`` header on the uploaded\n files. The ``Cache-Control`` header specifically dictates how content\n is cached by the browser (if ``surrogate_control`` is also set).\n surrogate_control : `str`, optional\n This sets the ``x-amz-meta-surrogate-control`` header\n on the uploaded files. The ``Surrogate-Control``\n or ``x-amz-meta-surrogate-control`` header is used in priority by\n Fastly to govern its caching. This caching policy is *not* passed\n to the browser.\n acl : `str`, optional\n The pre-canned AWS access control list to apply to this upload.\n Can be ``'public-read'``, which allows files to be downloaded\n over HTTP by the public. See\n https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl\n for an overview of S3's pre-canned ACL lists. Note that ACL settings\n are not validated locally. Default is `None`, meaning that no ACL\n is applied to an individual object. In this case, use ACLs applied\n to the bucket itself.\n aws_access_key_id : `str`, optional\n The access key for your AWS account. Also set\n ``aws_secret_access_key``.\n aws_secret_access_key : `str`, optional\n The secret key for your AWS account.\n aws_profile : `str`, optional\n Name of AWS profile in :file:`~/.aws/credentials`. Use this instead\n of ``aws_access_key_id`` and ``aws_secret_access_key`` for file-based\n credentials.\n\n Notes\n -----\n ``cache_control`` and ``surrogate_control`` can be used together.\n ``surrogate_control`` takes priority in setting Fastly's POP caching,\n while ``cache_control`` then sets the browser's caching. For example:\n\n - ``cache_control='no-cache'``\n - ``surrogate_control='max-age=31536000'``\n\n together will ensure that the browser always does an ETAG server query,\n but that Fastly will cache the content for one year (or until purged).\n This configuration is good for files that are frequently changed in place.\n\n For immutable uploads simply using ``cache_control`` is more efficient\n since it allows the browser to also locally cache content.\n\n .. seealso::\n\n - `Fastly: Cache control tutorial\n <https://docs.fastly.com/guides/tutorials/cache-control-tutorial>`_.\n - `Google: HTTP caching <http://ls.st/39v>`_."}
{"_id": "q_17544", "text": "Upload a file to the S3 bucket.\n\n This function uses the mimetypes module to guess and then set the\n Content-Type and Encoding-Type headers.\n\n Parameters\n ----------\n local_path : `str`\n Full path to a file on the local file system.\n bucket_path : `str`\n Destination path (also known as the key name) of the file in the\n S3 bucket.\n bucket : boto3 Bucket instance\n S3 bucket.\n metadata : `dict`, optional\n Header metadata values. These keys will appear in headers as\n ``x-amz-meta-*``.\n acl : `str`, optional\n A pre-canned access control list. See\n https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl\n Default is `None`, meaning that no ACL is applied to the object.\n cache_control : `str`, optional\n The cache-control header value. For example, ``'max-age=31536000'``."}
{"_id": "q_17545", "text": "Upload an arbitrary object to an S3 bucket.\n\n Parameters\n ----------\n bucket_path : `str`\n Destination path (also known as the key name) of the file in the\n S3 bucket.\n content : `str` or `bytes`, optional\n Object content.\n bucket : boto3 Bucket instance\n S3 bucket.\n metadata : `dict`, optional\n Header metadata values. These keys will appear in headers as\n ``x-amz-meta-*``.\n acl : `str`, optional\n A pre-canned access control list. See\n https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl\n Default is `None`, meaning that no ACL is applied to the object.\n cache_control : `str`, optional\n The cache-control header value. For example, ``'max-age=31536000'``.\n content_type : `str`, optional\n The object's content type (such as ``text/html``). If left unset,\n no MIME type is passed to boto3 (which defaults to\n ``binary/octet-stream``)."}
{"_id": "q_17546", "text": "List all file-type object names that exist at the root of this\n bucket directory.\n\n Parameters\n ----------\n dirname : `str`\n Directory name in the bucket relative to ``bucket_root/``.\n\n Returns\n -------\n filenames : `list`\n List of file names (`str`), relative to ``bucket_root/``, that\n exist at the root of ``dirname``."}
{"_id": "q_17547", "text": "List all names of directories that exist at the root of this\n bucket directory.\n\n Note that *directories* don't exist in S3; rather directories are\n inferred from path names.\n\n Parameters\n ----------\n dirname : `str`\n Directory name in the bucket relative to ``bucket_root``.\n\n Returns\n -------\n dirnames : `list`\n List of directory names (`str`), relative to ``bucket_root/``,\n that exist at the root of ``dirname``."}
{"_id": "q_17548", "text": "Delete a file from the bucket.\n\n Parameters\n ----------\n filename : `str`\n Name of the file, relative to ``bucket_root/``."}
{"_id": "q_17549", "text": "Ensure a token is in the Click context object or authenticate and obtain\n the token from LTD Keeper.\n\n Parameters\n ----------\n ctx : `click.Context`\n The Click context. ``ctx.obj`` must be a `dict` that contains keys:\n ``keeper_hostname``, ``username``, ``password``, ``token``. This\n context object is prepared by the main Click group,\n `ltdconveyor.cli.main.main`."}
{"_id": "q_17550", "text": "Create a GitHub token for an integration installation.\n\n Parameters\n ----------\n installation_id : `int`\n Installation ID. This is available in the URL of the integration's\n **installation** ID.\n integration_jwt : `bytes`\n The integration's JSON Web Token (JWT). You can create this with\n `create_jwt`.\n\n Returns\n -------\n token_obj : `dict`\n GitHub token object. Includes the fields:\n\n - ``token``: the token string itself.\n - ``expires_at``: date time string when the token expires.\n\n Example\n -------\n The typical workflow for authenticating to an integration installation is:\n\n .. code-block:: python\n\n from dochubadapter.github import auth\n jwt = auth.create_jwt(integration_id, private_key_path)\n token_obj = auth.get_installation_token(installation_id, jwt)\n print(token_obj['token'])\n\n Notes\n -----\n See\n https://developer.github.com/early-access/integrations/authentication/#as-an-installation\n for more information"}
{"_id": "q_17551", "text": "Create a JSON Web Token to authenticate a GitHub Integration or\n installation.\n\n Parameters\n ----------\n integration_id : `int`\n Integration ID. This is available from the GitHub integration's\n homepage.\n private_key_path : `str`\n Path to the integration's private key (a ``.pem`` file).\n\n Returns\n -------\n jwt : `bytes`\n JSON Web Token that is good for 9 minutes.\n\n Notes\n -----\n The JWT is encoded with the RS256 algorithm. It includes a payload with\n fields:\n\n - ``'iat'``: The current time, as an `int` timestamp.\n - ``'exp'``: Expiration time, as an `int` timestamp. The expiration\n time is set to 9 minutes in the future (maximum allowance is 10 minutes).\n - ``'iss'``: The integration ID (`int`).\n\n For more information, see\n https://developer.github.com/early-access/integrations/authentication/."}
{"_id": "q_17552", "text": "r\"\"\"Get all macro definitions from TeX source, supporting multiple\n declaration patterns.\n\n Parameters\n ----------\n tex_source : `str`\n TeX source content.\n\n Returns\n -------\n macros : `dict`\n Keys are macro names (including leading ``\\``) and values are the\n content (as `str`) of the macros.\n\n Notes\n -----\n This function uses the following functions to scrape macros of different\n types:\n\n - `get_def_macros`\n - `get_newcommand_macros`\n\n This macro scraping has the following caveats:\n\n - Macro definitions (including content) must all occur on one line.\n - Macros with arguments are not supported."}
{"_id": "q_17553", "text": "r\"\"\"Get all ``\\def`` macro definitions from TeX source.\n\n Parameters\n ----------\n tex_source : `str`\n TeX source content.\n\n Returns\n -------\n macros : `dict`\n Keys are macro names (including leading ``\\``) and values are the\n content (as `str`) of the macros.\n\n Notes\n -----\n ``\\def`` macros with arguments are not supported."}
{"_id": "q_17554", "text": "r\"\"\"Get all ``\\newcommand`` macro definitions from TeX source.\n\n Parameters\n ----------\n tex_source : `str`\n TeX source content.\n\n Returns\n -------\n macros : `dict`\n Keys are macro names (including leading ``\\``) and values are the\n content (as `str`) of the macros.\n\n Notes\n -----\n ``\\newcommand`` macros with arguments are not supported."}
{"_id": "q_17555", "text": "Makes a naive datetime.datetime in a given time zone aware."}
{"_id": "q_17556", "text": "Makes an aware datetime.datetime naive in a given time zone."}
{"_id": "q_17557", "text": "Converts a datetime to the timezone of this Schedule."}
{"_id": "q_17558", "text": "Returns an iterator of Period tuples for every day this event is in effect, between range_start\n and range_end."}
{"_id": "q_17559", "text": "Returns an iterator of Period tuples for continuous stretches of time during\n which this event is in effect, between range_start and range_end."}
{"_id": "q_17560", "text": "A set of integers representing the weekdays the schedule recurs on,\n with Monday = 0 and Sunday = 6."}
{"_id": "q_17561", "text": "Speak loudly! FIVE! Use upper case!"}
{"_id": "q_17562", "text": "Get content of lsst-texmf bibliographies.\n\n BibTeX content is downloaded from GitHub (``master`` branch of\n https://github.com/lsst/lsst-texmf or retrieved from an in-memory cache.\n\n Parameters\n ----------\n bibtex_filenames : sequence of `str`, optional\n List of lsst-texmf BibTeX files to retrieve. These can be the filenames\n of lsst-bibtex files (for example, ``['lsst.bib', 'lsst-dm.bib']``)\n or names without an extension (``['lsst', 'lsst-dm']``). The default\n (recommended) is to get *all* lsst-texmf bibliographies:\n\n .. code-block:: python\n\n ['lsst', 'lsst-dm', 'refs', 'books', 'refs_ads']\n\n Returns\n -------\n bibtex : `dict`\n Dictionary with keys that are bibtex file names (such as ``'lsst'``,\n ``'lsst-dm'``). Values are the corresponding bibtex file content\n (`str`)."}
{"_id": "q_17563", "text": "Make a pybtex BibliographyData instance from standard lsst-texmf\n bibliography files and user-supplied bibtex content.\n\n Parameters\n ----------\n lsst_bib_names : sequence of `str`, optional\n Names of lsst-texmf BibTeX files to include. For example:\n\n .. code-block:: python\n\n ['lsst', 'lsst-dm', 'refs', 'books', 'refs_ads']\n\n Default is `None`, which includes all lsst-texmf bibtex files.\n\n bibtex : `str`\n BibTeX source content not included in lsst-texmf. This can be content\n from a ``local.bib`` file.\n\n Returns\n -------\n bibliography : `pybtex.database.BibliographyData`\n A pybtex bibliography database that includes all given sources:\n lsst-texmf bibliographies and ``bibtex``."}
{"_id": "q_17564", "text": "Get a usable URL from a pybtex entry.\n\n Parameters\n ----------\n entry : `pybtex.database.Entry`\n A pybtex bibliography entry.\n\n Returns\n -------\n url : `str`\n Best available URL from the ``entry``.\n\n Raises\n ------\n NoEntryUrlError\n Raised when no URL can be made from the bibliography entry.\n\n Notes\n -----\n The order of priority is:\n\n 1. ``url`` field\n 2. ``ls.st`` URL from the handle for ``@docushare`` entries.\n 3. ``adsurl``\n 4. DOI"}
{"_id": "q_17565", "text": "Get and format author-year text from a pybtex entry to emulate\n natbib citations.\n\n Parameters\n ----------\n entry : `pybtex.database.Entry`\n A pybtex bibliography entry.\n parens : `bool`, optional\n Whether to add parentheses around the year. Default is `False`.\n\n Returns\n -------\n authoryear : `str`\n The author-year citation text."}
{"_id": "q_17566", "text": "Extract, transform, and load Sphinx-based technote metadata.\n\n Parameters\n ----------\n session : `aiohttp.ClientSession`\n Your application's aiohttp client session.\n See http://aiohttp.readthedocs.io/en/stable/client.html.\n github_api_token : `str`\n A GitHub personal API token. See the `GitHub personal access token\n guide`_.\n ltd_product_data : `dict`\n Contents of ``metadata.yaml``, obtained via `download_metadata_yaml`.\n Data for this technote from the LTD Keeper API\n (``GET /products/<slug>``). Usually obtained via\n `lsstprojectmeta.ltd.get_ltd_product`.\n mongo_collection : `motor.motor_asyncio.AsyncIOMotorCollection`, optional\n MongoDB collection. This should be the common MongoDB collection for\n LSST projectmeta JSON-LD records. If provided, the JSON-LD is upserted\n into the MongoDB collection.\n\n Returns\n -------\n metadata : `dict`\n JSON-LD-formatted dictionary.\n\n Raises\n ------\n NotSphinxTechnoteError\n Raised when the LTD product cannot be interpreted as a Sphinx-based\n technote project because it's missing a metadata.yaml file in its\n GitHub repository. This implies that the LTD product *could* be of a\n different format.\n\n .. _`GitHub personal access token guide`: https://ls.st/41d"}
{"_id": "q_17567", "text": "Return the timezone. If none is set use system timezone"}
{"_id": "q_17568", "text": "Convert any timestamp into a datetime and save as _time"}
{"_id": "q_17569", "text": "Return a dict that represents the DayOneEntry"}
{"_id": "q_17570", "text": "Create and return full file path for DayOne entry"}
{"_id": "q_17571", "text": "Delete all objects in the S3 bucket named ``bucket_name`` that are\n found in the ``root_path`` directory.\n\n Parameters\n ----------\n bucket_name : `str`\n Name of an S3 bucket.\n root_path : `str`\n Directory in the S3 bucket that will be deleted.\n aws_access_key_id : `str`\n The access key for your AWS account. Also set\n ``aws_secret_access_key``.\n aws_secret_access_key : `str`\n The secret key for your AWS account.\n aws_profile : `str`, optional\n Name of AWS profile in :file:`~/.aws/credentials`. Use this instead\n of ``aws_access_key_id`` and ``aws_secret_access_key`` for file-based\n credentials.\n\n Raises\n ------\n ltdconveyor.s3.S3Error\n Thrown by any unexpected faults from the S3 API."}
{"_id": "q_17572", "text": "Get project's home URL based on settings.PROJECT_HOME_NAMESPACE.\n\n Returns None if PROJECT_HOME_NAMESPACE is not defined in settings."}
{"_id": "q_17573", "text": "A template tag to return the project's home URL and label\n formatted as a Bootstrap 4 breadcrumb.\n\n PROJECT_HOME_NAMESPACE must be defined in settings, for example:\n PROJECT_HOME_NAMESPACE = 'project_name:index_view'\n\n Usage Example:\n {% load project_home_tags %}\n\n <ol class=\"breadcrumb\">\n {% project_home_breadcrumb_bs4 %} {# <--- #}\n <li class=\"breadcrumb-item\" aria-label=\"breadcrumb\"><a href=\"{% url 'app:namespace' %}\">List of Objects</a></li>\n <li class=\" breadcrumb-item active\" aria-label=\"breadcrumb\" aria-current=\"page\">Object Detail</li>\n </ol>\n\n This gets converted into:\n <ol class=\"breadcrumb\">\n <li class=\"breadcrumb-item\" aria-label=\"breadcrumb\"><a href=\"{% url 'project_name:index_view' %}\">Home</a></li> {# <--- #}\n <li class=\"breadcrumb-item\" aria-label=\"breadcrumb\"><a href=\"{% url 'app:namespace' %}\">List of Objects</a></li>\n <li class=\" breadcrumb-item active\" aria-label=\"breadcrumb\" aria-current=\"page\">Object Detail</li>\n </ol>\n\n By default, the link's text is 'Home'. A project-wide label can be\n defined with PROJECT_HOME_LABEL in settings. Both the default and\n the project-wide label can be overridden by passing a string to\n the template tag.\n\n For example:\n {% project_home_breadcrumb_bs4 'Custom Label' %}"}
{"_id": "q_17574", "text": "The entry point for a yaz script\n\n This will almost always be called from a python script in\n the following manner:\n\n if __name__ == \"__main__\":\n yaz.main()\n\n This function will perform the following steps:\n\n 1. It will load any additional python code from\n the yaz_extension python module located in the\n ~/.yaz directory when LOAD_YAZ_EXTENSION is True\n and the yaz_extension module exists\n\n 2. It collects all yaz tasks and plugins. When WHITE_LIST\n is a non-empty list, only the tasks and plugins located\n therein will be considered\n\n 3. It will parse arguments from ARGV, or the command line\n when ARGV is not given, resulting in a yaz task or a parser\n help message.\n\n 4. When a suitable task is found, this task is executed. In\n case of a task which is part of a plugin, i.e. class, then\n this plugin is initialized, possibly resulting in other\n plugins to also be initialized if there are marked as\n `@yaz.dependency`."}
{"_id": "q_17575", "text": "Returns a tree of Task instances\n\n The tree is comprised of dictionaries containing strings for\n keys and either dictionaries or Task instances for values.\n\n When WHITE_LIST is given, only the tasks and plugins in this\n list will become part of the task tree. The WHITE_LIST may\n contain either strings, corresponding to the task of plugin\n __qualname__, or, preferable, the WHITE_LIST contains\n links to the task function or plugin class instead."}
{"_id": "q_17576", "text": "Declare a function or method to be a Yaz task\n\n @yaz.task\n def talk(message: str = \"Hello World!\"):\n return message\n\n Or... group multiple tasks together\n\n class Tools(yaz.Plugin):\n @yaz.task\n def say(self, message: str = \"Hello World!\"):\n return message\n\n @yaz.task(option__choices=[\"A\", \"B\", \"C\"])\n def choose(self, option: str = \"A\"):\n return option"}
{"_id": "q_17577", "text": "Returns a list of parameters"}
{"_id": "q_17578", "text": "Returns an instance of a fully initialized plugin class\n\n Every plugin class is kept in a plugin cache, effectively making\n every plugin into a singleton object.\n\n When a plugin has a yaz.dependency decorator, it will be called\n as well, before the instance is returned."}
{"_id": "q_17579", "text": "Given an lxml Element of a GML geometry, returns a dict in GeoJSON format."}
{"_id": "q_17580", "text": "Returns a masked array with anything outside of values masked.\n The minv and maxv parameters take precedence over any dict values.\n The valid_range attribute takes precedence over the valid_min and\n valid_max attributes."}
{"_id": "q_17581", "text": "If input object is an ndarray it will be converted into a dict\n holding dtype, shape and the data, base64 encoded."}
{"_id": "q_17582", "text": "Get lines sampled across all threads, in order\n from most to least sampled."}
{"_id": "q_17583", "text": "leftSibling\n previousSibling\n leftSib\n prevSib\n lsib\n psib\n \n have the same parent, and on the left"}
{"_id": "q_17584", "text": "rightSibling\n nextSibling\n rightSib\n nextSib\n rsib\n nsib\n \n have the same parent, and on the right"}
{"_id": "q_17585", "text": "rightCousin\n nextCousin\n rightCin\n nextCin\n rcin\n ncin\n \n parents are neighbors, and on the right"}
{"_id": "q_17586", "text": "_creat_child_desc\n update depth,parent_breadth_path,parent_path,sib_seq,path,lsib_path,rsib_path,lcin_path,rcin_path"}
{"_id": "q_17587", "text": "_upgrade_breadth_info\n update breadth, breadth_path, and add desc to desc_level"}
{"_id": "q_17588", "text": "Parse command content from the LaTeX source.\n\n Parameters\n ----------\n source : `str`\n The full source of the tex document.\n\n Yields\n ------\n parsed_command : `ParsedCommand`\n Yields parsed command instances for each occurrence of the command\n in the source."}
{"_id": "q_17589", "text": "r\"\"\"Attempt to parse a single token on the first line of this source.\n\n This method is used for parsing whitespace-delimited arguments, like\n ``\\input file``. The source should ideally contain `` file`` along\n with a newline character.\n\n >>> source = 'Line 1\\n' r'\\input test.tex' '\\nLine 2'\n >>> LatexCommand._parse_whitespace_argument(source, 'input')\n 'test.tex'\n\n Bracket delimited arguments (``\\input{test.tex}``) are handled in\n the normal logic of `_parse_command`."}
{"_id": "q_17590", "text": "Returns a list of TMDDEventConverter elements.\n\n doc is an XML Element containing one or more <FEU> events"}
{"_id": "q_17591", "text": "Returns a Pandas DataFrame of the data.\n This always returns positive down depths"}
{"_id": "q_17592", "text": "Load a pre-made query.\n\n These queries are distributed with lsstprojectmeta. See\n :file:`lsstprojectmeta/data/githubv4/README.rst` inside the\n package repository for details on available queries.\n\n Parameters\n ----------\n query_name : `str`\n Name of the query, such as ``'technote_repo'``.\n\n Returns\n -------\n github_query : `GitHubQuery`\n A GitHub query or mutation object that you can pass to\n `github_request` to execute the request itself."}
{"_id": "q_17593", "text": "Detect if the upload should be skipped based on the\n ``TRAVIS_EVENT_TYPE`` environment variable.\n\n Returns\n -------\n should_skip : `bool`\n True if the upload should be skipped based on the combination of\n ``TRAVIS_EVENT_TYPE`` and user settings."}
{"_id": "q_17594", "text": "Get the datetime for the most recent commit to a project that\n affected certain types of content.\n\n Parameters\n ----------\n extensions : sequence of `str`\n Extensions of files to consider in getting the most recent commit\n date. For example, ``('rst', 'svg', 'png')`` are content extensions\n for a Sphinx project. **Extension comparison is case sensitive.** Add\n uppercase variants to match uppercase extensions.\n acceptance_callback : callable\n Callable function whose sole argument is a file path, and returns\n `True` or `False` depending on whether the file's commit date should\n be considered or not. This callback is only run on files that are\n included by ``extensions``. Thus this callback is a way to exclude\n specific files that would otherwise be included by their extension.\n root_dir : `str`, optional\n Only content contained within this root directory is considered.\n This directory must be, or be contained by, a Git repository. This is\n the current working directory by default.\n\n Returns\n -------\n commit_date : `datetime.datetime`\n Datetime of the most recent content commit.\n\n Raises\n ------\n RuntimeError\n Raised if no content files are found."}
{"_id": "q_17595", "text": "Register a new build for a product on LSST the Docs.\n\n Wraps ``POST /products/{product}/builds/``.\n\n Parameters\n ----------\n host : `str`\n Hostname of LTD Keeper API server.\n keeper_token : `str`\n Auth token (`ltdconveyor.keeper.get_keeper_token`).\n product : `str`\n Name of the product in the LTD Keeper service.\n git_refs : `list` of `str`\n List of Git refs that correspond to the version of the build. Git refs\n can be tags or branches.\n\n Returns\n -------\n build_info : `dict`\n LTD Keeper build resource.\n\n Raises\n ------\n ltdconveyor.keeper.KeeperError\n Raised if there is an error communicating with the LTD Keeper API."}
{"_id": "q_17596", "text": "Deeply updates a dictionary. List values are concatenated.\n\n Args:\n d (dict): First dictionary which will be updated\n u (dict): Second dictionary used to extend the first one\n\n Returns:\n dict: The merged dictionary"}
{"_id": "q_17597", "text": "Returns variables that match specific conditions.\n\n * Can pass in key=value parameters and variables are returned that\n contain all of the matches. For example,\n\n >>> # Get variables with x-axis attribute.\n >>> vs = nc.get_variables_by_attributes(axis='X')\n >>> # Get variables with matching \"standard_name\" attribute.\n >>> nc.get_variables_by_attributes(standard_name='northward_sea_water_velocity')\n\n * Can pass in key=callable parameter and variables are returned if the\n callable returns True. The callable should accept a single parameter,\n the attribute value. None is given as the attribute value when the\n attribute does not exist on the variable. For example,\n\n >>> # Get Axis variables.\n >>> vs = nc.get_variables_by_attributes(axis=lambda v: v in ['X', 'Y', 'Z', 'T'])\n >>> # Get variables that don't have an \"axis\" attribute.\n >>> vs = nc.get_variables_by_attributes(axis=lambda v: v is None)\n >>> # Get variables that have a \"grid_mapping\" attribute.\n >>> vs = nc.get_variables_by_attributes(grid_mapping=lambda v: v is not None)"}
{"_id": "q_17598", "text": "vfuncs can be any callable that accepts a single argument, the\n Variable object, and returns a dictionary of new attributes to\n set. These will overwrite existing attributes"}
{"_id": "q_17599", "text": "Decorate a function that uses pypandoc to ensure that pandoc is\n installed if necessary."}
{"_id": "q_17600", "text": "ltd is a command-line client for LSST the Docs.\n\n Use ltd to upload new site builds, and to work with the LTD Keeper API."}
{"_id": "q_17601", "text": "Edit a part from an OOXML Package without unzipping it"}
{"_id": "q_17602", "text": "List the contents of a subdirectory of a zipfile"}
{"_id": "q_17603", "text": "recursively call os.path.split until we have all of the components\n\tof a pathname suitable for passing back to os.path.join."}
{"_id": "q_17604", "text": "Given a path to a part in a zip file, return a path to the file and\n\tthe path to the part.\n\n\tAssuming /foo.zipx exists as a file,\n\n\t>>> find_file('/foo.zipx/dir/part') # doctest: +SKIP\n\t('/foo.zipx', '/dir/part')\n\n\t>>> find_file('/foo.zipx') # doctest: +SKIP\n\t('/foo.zipx', '')"}
{"_id": "q_17605", "text": "Give preference to an XML_EDITOR or EDITOR defined in the\n\t\tenvironment. Otherwise use notepad on Windows and edit on other\n\t\tplatforms."}
{"_id": "q_17606", "text": "Decode a JSON-LD dataset, including decoding datetime\n strings into `datetime.datetime` objects.\n\n Parameters\n ----------\n encoded_dataset : `str`\n The JSON-LD dataset encoded as a string.\n\n Returns\n -------\n jsonld_dataset : `dict`\n A JSON-LD dataset.\n\n Examples\n --------\n\n >>> doc = '{\"dt\": \"2018-01-01T12:00:00Z\"}'\n >>> decode_jsonld(doc)\n {'dt': datetime.datetime(2018, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)}"}
{"_id": "q_17607", "text": "Encode values as JSON strings.\n\n This method overrides the default implementation from\n `json.JSONEncoder`."}
{"_id": "q_17608", "text": "Process the astroid node stream."}
{"_id": "q_17609", "text": "Generates an html chart from either a pandas dataframe, a dictionary,\n a list or an Altair Data object and optionally writes it to a file"}
{"_id": "q_17610", "text": "Generate html from an Altair chart object and optionally write it to a file"}
{"_id": "q_17611", "text": "Serialize to an Altair chart object from either a pandas dataframe, a dictionary,\n a list or an Altair Data object"}
{"_id": "q_17612", "text": "Writes a chart's html to a file"}
{"_id": "q_17613", "text": "Get the right chart class from a string"}
{"_id": "q_17614", "text": "Encode the fields in Altair format"}
{"_id": "q_17615", "text": "Get all git repositories within this environment"}
{"_id": "q_17616", "text": "Install a python package using pip"}
{"_id": "q_17617", "text": "Link to a GitHub user.\n\n Returns 2 part tuple containing list of nodes to insert into the\n document and a list of system messages. Both are allowed to be\n empty.\n\n :param name: The role name used in the document.\n :param rawtext: The entire markup snippet, with role.\n :param text: The text marked with the role.\n :param lineno: The line number where rawtext appears in the input.\n :param inliner: The inliner instance that called us.\n :param options: Directive options for customization.\n :param content: The directive content for customization."}
{"_id": "q_17618", "text": "Returns the nb quantiles for data in a dataframe"}
{"_id": "q_17619", "text": "Returns the root mean square error between a and b"}
{"_id": "q_17620", "text": "Returns the mean fractionalized bias error"}
{"_id": "q_17621", "text": "Computes the correlation between a and b, that is, Pearson's correlation\n coefficient R"}
{"_id": "q_17622", "text": "Geometric mean bias"}
{"_id": "q_17623", "text": "Geometric mean variance"}
{"_id": "q_17624", "text": "Figure of merit in time"}
{"_id": "q_17625", "text": "Path to the environment's site-packages"}
{"_id": "q_17626", "text": "Prior to activating, store everything necessary to deactivate this\n environment."}
{"_id": "q_17627", "text": "Remove this environment"}
{"_id": "q_17628", "text": "Command used to launch this application module"}
{"_id": "q_17629", "text": "Returns the tarball URL inferred from an app.json, if present."}
{"_id": "q_17630", "text": "Brings up a Heroku app."}
{"_id": "q_17631", "text": "Create a virtual environment. You can pass either the name of a new\n environment to create in your CPENV_HOME directory OR specify a full path\n to create an environment outside your CPENV_HOME.\n\n Create an environment in CPENV_HOME::\n\n >>> cpenv.create('myenv')\n\n Create an environment elsewhere::\n\n >>> cpenv.create('~/custom_location/myenv')\n\n :param name_or_path: Name or full path of environment\n :param config: Environment configuration including dependencies etc..."}
{"_id": "q_17632", "text": "Remove an environment or module\n\n :param name_or_path: name or path to environment or module"}
{"_id": "q_17633", "text": "Activates and launches a module\n\n :param module_name: name of module to launch"}
{"_id": "q_17634", "text": "Deactivates an environment by restoring all env vars to a clean state\n stored prior to activating environments"}
{"_id": "q_17635", "text": "Returns a list of available modules."}
{"_id": "q_17636", "text": "Add a module to CPENV_ACTIVE_MODULES environment variable"}
{"_id": "q_17637", "text": "Remove a module from CPENV_ACTIVE_MODULES environment variable"}
{"_id": "q_17638", "text": "Format a list of environments and modules for terminal output"}
{"_id": "q_17639", "text": "Show context info"}
{"_id": "q_17640", "text": "Add an environment to the cache. Allows you to activate the environment\n by name instead of by full path"}
{"_id": "q_17641", "text": "Remove a cached environment. Removed paths will no longer be able to\n be activated by name"}
{"_id": "q_17642", "text": "Create a new template module.\n\n You can also specify a filesystem path like \"./modules/new_module\""}
{"_id": "q_17643", "text": "Add a module to an environment. PATH can be a git repository path or\n a filesystem path."}
{"_id": "q_17644", "text": "Resolves VirtualEnvironments with a relative or absolute path"}
{"_id": "q_17645", "text": "Resolves VirtualEnvironments in EnvironmentCache"}
{"_id": "q_17646", "text": "Resolves module in previously resolved environment."}
{"_id": "q_17647", "text": "Resolves modules in currently active environment."}
{"_id": "q_17648", "text": "Decorator implementing the Iterator interface in a nicer manner.\n\n Example\n -------\n\n @iter_attribute('my_attr')\n class DecoratedClass:\n ...\n\n Warning:\n ========\n\n When using PyCharm or MYPY you'll probably see issues with the decorated class not being recognized as an Iterator.\n That's an issue which I could not overcome yet; it's probably due to the fact that interpretation of the object\n is done statically rather than dynamically. MYPY checks for definitions of methods in class code, which\n changes at runtime. Since __iter__ and __next__ are added dynamically, MYPY cannot find those\n defined in objects before an object of the class is created. Possible workarounds for this issue are:\n\n 1. Define a ``dummy`` __iter__ method like:\n\n @iter_attribute('attr')\n class Test:\n def __init__(self) -> None:\n self.attr = [1, 2, 3]\n\n def __iter__(self):\n pass\n\n 2. After creating the object, use cast or an assert denoting that the particular instance inherits\n from collections.Iterator:\n\n assert isinstance(my_object, collections.Iterator)\n\n\n :param iterable_name: string representing the attribute name which has to be iterated\n :return: DecoratedClass with implemented '__iter__' and '__next__' methods."}
{"_id": "q_17649", "text": "Returns a view of the array with axes transposed.\n\n For a 1-D array, this has no effect.\n For a 2-D array, this is the usual matrix transpose.\n For an n-D array, if axes are given, their order indicates how the\n axes are permuted\n\n Args:\n a (array_like): Input array.\n axes (list of int, optional): By default, reverse the dimensions,\n otherwise permute the axes according to the values given."}
{"_id": "q_17650", "text": "Roll the specified axis backwards, until it lies in a given position.\n\n Args:\n a (array_like): Input array.\n axis (int): The axis to roll backwards. The positions of the other axes \n do not change relative to one another.\n start (int, optional): The axis is rolled until it lies before this \n position. The default, 0, results in a \"complete\" roll.\n\n Returns:\n res (ndarray)"}
{"_id": "q_17651", "text": "Insert a new axis, corresponding to a given position in the array shape\n\n Args:\n a (array_like): Input array.\n axis (int): Position (amongst axes) where new axis is to be inserted."}
{"_id": "q_17652", "text": "Join a sequence of arrays together. \n Will aim to join `ndarray`, `RemoteArray`, and `DistArray` without moving \n their data, if they happen to be on different engines.\n\n Args:\n tup (sequence of array_like): Arrays to be concatenated. They must have\n the same shape, except in the dimension corresponding to `axis`.\n axis (int, optional): The axis along which the arrays will be joined.\n\n Returns: \n res: `ndarray`, if inputs were all local\n `RemoteArray`, if inputs were all on the same remote engine\n `DistArray`, if inputs were already scattered on different engines"}
{"_id": "q_17653", "text": "Compute the arithmetic mean along the specified axis.\n\n Returns the average of the array elements. The average is taken over\n the flattened array by default, otherwise over the specified axis.\n `float64` intermediate and return values are used for integer inputs.\n\n Parameters\n ----------\n a : array_like\n Array containing numbers whose mean is desired. If `a` is not an\n array, a conversion is attempted.\n axis : None or int or tuple of ints, optional\n Axis or axes along which the means are computed. The default is to\n compute the mean of the flattened array.\n If this is a tuple of ints, a mean is performed over multiple axes,\n instead of a single axis or all the axes as before.\n dtype : data-type, optional\n Type to use in computing the mean. For integer inputs, the default\n is `float64`; for floating point inputs, it is the same as the\n input dtype.\n out : ndarray, optional\n Alternate output array in which to place the result. The default\n is ``None``; if provided, it must have the same shape as the\n expected output, but the type will be cast if necessary.\n See `doc.ufuncs` for details.\n keepdims : bool, optional\n If this is set to True, the axes which are reduced are left\n in the result as dimensions with size one. With this option,\n the result will broadcast correctly against the original `arr`.\n\n Returns\n -------\n m : ndarray, see dtype parameter above\n\n Notes\n -----\n np.mean fails to pass the keepdims parameter to ndarray subclasses.\n That is the main reason we implement this function."}
{"_id": "q_17654", "text": "`ax` is a valid candidate for a distributed axis if the given\n subarray shapes are all the same when ignoring axis `ax`"}
{"_id": "q_17655", "text": "Returns True if successful, False if failure"}
{"_id": "q_17656", "text": "Return a command to launch a subshell"}
{"_id": "q_17657", "text": "Generate a prompt with a given prefix\n\n linux/osx: [prefix] user@host cwd $\n win: [prefix] cwd:"}
{"_id": "q_17658", "text": "Append a file to the file repository.\n\n For file monitoring, the monitor instance needs a file.\n Please pass the name of the file in the `file` argument.\n\n :param file: the name of the file you want to monitor."}
{"_id": "q_17659", "text": "Append files to the file repository.\n\n ModificationMonitor can append files to the repository using this.\n Please pass the list of file names in the `filelist` argument.\n\n :param filelist: the list of file names"}
{"_id": "q_17660", "text": "Run the file modification monitor.\n\n The monitor catches file modifications using the timestamp and file body.\n It stores the timestamp data and file body data before entering the while\n loop. In the while loop, the monitor gets a new timestamp and file body,\n then compares the new timestamp to the original timestamp. If the new\n timestamp or file body differs from the original, the monitor regards\n these changes as a `modification`. It then creates instances of\n FileModificationObjectManager and FileModificationObject, inserts the\n FileModificationObject into the FileModificationObjectManager, and\n yields this object.\n\n :param sleep: How long to sleep in the while loop."}
{"_id": "q_17661", "text": "Decorator that invokes `add_status_job`.\n\n ::\n\n @app.status_job\n def postgresql():\n # query/ping postgres\n\n @app.status_job(name=\"Active Directory\")\n def active_directory():\n # query active directory\n\n @app.status_job(timeout=5)\n def paypal():\n # query paypal, timeout after 5 seconds"}
{"_id": "q_17662", "text": "Get a random date between two dates"}
{"_id": "q_17663", "text": "Page through text by feeding it to another program. Invoking a\n pager through this might support colors."}
{"_id": "q_17664", "text": "Returns a prepared ``Session`` instance."}
{"_id": "q_17665", "text": "Sends an API request to Heroku.\n\n :param method: HTTP method.\n :param endpoint: API endpoint, e.g. ``/apps``.\n :param data: A dict sent as JSON in the body of the request.\n :returns: A dict representation of the JSON response."}
{"_id": "q_17666", "text": "Creates an app-setups build. Returns response data as a dict.\n\n :param tarball_url: URL of a tarball containing an ``app.json``.\n :param env: Dict containing environment variable overrides.\n :param app_name: Name of the Heroku app to create.\n :returns: Response data as a ``dict``."}
{"_id": "q_17667", "text": "Checks the status of an app-setups build.\n\n :param build_id: ID of the build to check.\n :returns: ``True`` if succeeded, ``False`` if pending."}
{"_id": "q_17668", "text": "generator that returns a unique string\n\n :param prefix: prefix of string\n :param cache: cache used to store the last used number\n\n >>> next(sequence('abc'))\n 'abc-0'\n >>> next(sequence('abc'))\n 'abc-1'"}
{"_id": "q_17669", "text": "Decorator that stores function results in a dictionary to be reused the\n next time the same arguments are supplied."}
{"_id": "q_17670", "text": "wraps a function so that it produces unique results\n\n :param func:\n :param num_args:\n\n >>> import random\n >>> choices = [1,2]\n >>> a = unique(random.choice, 1)\n >>> a,b = a(choices), a(choices)\n >>> a == b\n False"}
{"_id": "q_17671", "text": "Add any sub commands to the argument parser.\n\n :param parser: The argument parser object"}
{"_id": "q_17672", "text": "Gets the description of the command. If it's not supplied, the first sentence of the doc string is used."}
{"_id": "q_17673", "text": "Gets the help text for the command. If it's not supplied, the doc string is used."}
{"_id": "q_17674", "text": "Runs the command passing in the parsed arguments.\n\n :param args: The arguments to run the command with. If ``None`` the arguments\n are gathered from the argument parser. This is automatically set when calling\n sub commands and in most cases should not be set for the root command.\n\n :return: The status code of the action (0 on success)"}
{"_id": "q_17675", "text": "Encode wrapper for a dataset with maximum value\n\n Datasets can be one or two dimensional\n Strings are ignored as ordinal encoding"}
{"_id": "q_17676", "text": "Get all available athletes\n This method is cached to prevent unnecessary calls to GC."}
{"_id": "q_17677", "text": "Get all activity data for the last activity\n\n Keyword arguments:"}
{"_id": "q_17678", "text": "Actually do the request for activity list\n This call is slow and therefore this method is memory cached.\n\n Keyword arguments:\n athlete -- Full name of athlete"}
{"_id": "q_17679", "text": "Construct athlete endpoint from host and athlete name\n\n Keyword arguments:\n athlete -- Full athlete name"}
{"_id": "q_17680", "text": "Do actual GET request to GC REST API\n Also validates responses.\n\n Keyword arguments:\n endpoint -- full endpoint for GET request"}
{"_id": "q_17681", "text": "Creates a Heroku app-setup build.\n\n :param tarball_url: URL of a tarball containing an ``app.json``.\n :param env: (optional) Dict containing environment variable overrides.\n :param app_name: (optional) Name of the Heroku app to create.\n :returns: A tuple with ``(build_id, app_name)``."}
{"_id": "q_17682", "text": "If view is string based, it must be a full path"}
{"_id": "q_17683", "text": "Render the axes data into the dict data"}
{"_id": "q_17684", "text": "Renders the chart context and axes into the dict data"}
{"_id": "q_17685", "text": "Shows the chart URL in a webbrowser\n\n Other arguments passed to webbrowser.open"}
{"_id": "q_17686", "text": "Download the chart from the URL into a filename as a PNG\n\n The filename defaults to the chart title (chtt) if any"}
{"_id": "q_17687", "text": "return a random floating number\n\n :param min: minimum value\n :param max: maximum value\n :param decimal_places: decimal places\n :return:"}
{"_id": "q_17688", "text": "Assign an entity name based on the class immediately inheriting from Base.\n\n This is needed because we don't want\n entity names to come from any class that simply inherits our classes,\n just the ones in our module.\n\n For example, if you create a class Project2 that exists outside of\n kalibro_client and inherits from Project, its entity name should still\n be Project."}
{"_id": "q_17689", "text": "Validate all the entries in the environment cache."}
{"_id": "q_17690", "text": "Load the environment cache from disk."}
{"_id": "q_17691", "text": "This function takes a text and shows it via an environment specific\n pager on stdout.\n\n .. versionchanged:: 3.0\n Added the `color` flag.\n\n :param text: the text to page.\n :param color: controls if the pager supports ANSI colors or not. The\n default is autodetection."}
{"_id": "q_17692", "text": "Retrieve objects that have been distributed, making them local again"}
{"_id": "q_17693", "text": "Apply a function in parallel to each element of the input"}
{"_id": "q_17694", "text": "Returns True if path is a git repository."}
{"_id": "q_17695", "text": "Returns True if path is in CPENV_HOME"}
{"_id": "q_17696", "text": "Get environment path from redirect file"}
{"_id": "q_17697", "text": "Walk down a directory tree. Same as os.walk but allows for a depth limit\n via depth argument"}
{"_id": "q_17698", "text": "Add a sequence value to env dict"}
{"_id": "q_17699", "text": "Returns an unused random filepath."}
{"_id": "q_17700", "text": "Encode current environment as yaml and store in path or a temporary\n file. Return the path to the stored environment."}
{"_id": "q_17701", "text": "Returns the URL to the upstream data source for the given URI based on configuration"}
{"_id": "q_17702", "text": "Return request object for calling the upstream"}
{"_id": "q_17703", "text": "Returns time to live in seconds. 0 means no caching.\n\n Criteria:\n - response code 200\n - read-only method (GET, HEAD, OPTIONS)\n Plus http headers:\n - cache-control: option1, option2, ...\n where options are:\n private | public\n no-cache\n no-store\n max-age: seconds\n s-maxage: seconds\n must-revalidate\n proxy-revalidate\n - expires: Thu, 01 Dec 1983 20:00:00 GMT\n - pragma: no-cache (=cache-control: no-cache)\n\n See http://www.mobify.com/blog/beginners-guide-to-http-cache-headers/\n\n TODO: tests"}
{"_id": "q_17704", "text": "Build a JWKS from the signing keys belonging to the self signer\n\n :return: Dictionary"}
{"_id": "q_17705", "text": "Starting with a signed JWT or a JSON document unpack and verify all\n the separate metadata statements.\n\n :param ms_dict: Metadata statement as a dictionary\n :param jwt_ms: Metadata statement as JWT\n :param keyjar: Keys that should be used to verify the signature of the\n document\n :param cls: What type (Class) of metadata statement this is\n :param liss: list of FO identifiers that matters. The rest will be \n ignored\n :return: A ParseInfo instance"}
{"_id": "q_17706", "text": "Remove MS paths that are marked to be used for another usage\n\n :param metadata: Metadata statement as dictionary\n :param federation_usage: In which context this is expected to used.\n :return: Filtered Metadata statement."}
{"_id": "q_17707", "text": "Add signed metadata statements to a request\n\n :param req: The request \n :param sms_dict: A dictionary with FO IDs as keys and signed metadata\n statements (sms) or uris pointing to sms as values.\n :return: The updated request"}
{"_id": "q_17708", "text": "Guarantee the existence of a basic MANIFEST.in.\n\n manifest doc: http://docs.python.org/distutils/sourcedist.html#manifest\n\n `options.paved.dist.manifest.include`: set of files (or globs) to include with the `include` directive.\n\n `options.paved.dist.manifest.recursive_include`: set of files (or globs) to include with the `recursive-include` directive.\n\n `options.paved.dist.manifest.prune`: set of files (or globs) to exclude with the `prune` directive.\n\n `options.paved.dist.manifest.include_sphinx_docroot`: True -> sphinx docroot is added as `graft`\n\n `options.paved.dist.manifest.include_sphinx_docroot`: True -> sphinx builddir is added as `prune`"}
{"_id": "q_17709", "text": "Add logging option to an ArgumentParser."}
{"_id": "q_17710", "text": "Apply logging options produced by LogLevelAction and LogFileAction.\n\n More often then not this function is not needed, the actions have already\n been taken during the parse, but it can be used in the case they need to be\n applied again (e.g. when command line opts take precedence but were\n overridded by a fileConfig, etc.)."}
{"_id": "q_17711", "text": "Format a UUID string\n\n :param str uuid: UUID to format\n :param int max_length: Maximum length of result string (> 3)\n :return: Formatted UUID\n :rtype: str\n :raises ValueError: If *max_length* is not larger than 3\n\n This function formats a UUID so it is not longer than *max_length*\n characters. The resulting string is returned. It does so by replacing\n characters at the end of the *uuid* with three dots, if necessary.\n The idea is that the start of the *uuid* is the most important part\n to be able to identify the related entity.\n\n The default *max_length* is 10, which will result in a string\n containing the first 7 characters of the *uuid* passed in. Most of\n the time, such a string is still unique within a collection of UUIDs."}
{"_id": "q_17712", "text": "Finds anagrams in word.\n\n Args:\n word: the string to base our search off of\n sowpods: boolean to declare TWL or SOWPODS words file\n start: a string of starting characters to find anagrams based on\n end: a string of ending characters to find anagrams based on\n\n Yields:\n a tuple of (word, score) that can be made with the input_word"}
{"_id": "q_17713", "text": "attempts to get next and previous on updates"}
{"_id": "q_17714", "text": "Notify the client of the result of handling a request\n\n The payload contains two elements:\n\n - client_id\n - result\n\n The *client_id* is the id of the client to notify. It is assumed\n that the notifier service is able to identify the client by this id\n and that it can pass the *result* to it.\n\n The *result* always contains a *status_code* element. In case the\n message passed in is not None, it will also contain a *message*\n element.\n\n In case the notifier service does not exist or returns an error,\n an error message will be logged to *stderr*."}
{"_id": "q_17715", "text": "Returns the exception's name in an AMP Command friendly format.\n\n For example, given a class named ``ExampleExceptionClass``, returns\n ``\"EXAMPLE_EXCEPTION_CLASS\"``."}
{"_id": "q_17716", "text": "Retrieves the setting value whose name is indicated by name_hyphen.\n\n Values starting with $ are assumed to reference environment variables,\n and the value stored in environment variables is retrieved. It's an\n error if the corresponding environment variable is not set."}
{"_id": "q_17717", "text": "This method does the work of updating settings. Can be passed with\n enforce_helpstring = False which you may want if allowing end users to\n add arbitrary metadata via the settings system.\n\n Preferable to use update_settings (without leading _) in code to do the\n right thing and always have docstrings."}
{"_id": "q_17718", "text": "Detect if we get a class or a name, convert a name to a class."}
{"_id": "q_17719", "text": "Asserts that the class has a docstring, returning it if successful."}
{"_id": "q_17720", "text": "Transforms a Go Metrics API metric result into a list of\n values for a given window period.\n\n start and end are expected to be Unix timestamps in microseconds."}
{"_id": "q_17721", "text": "Validate the given 1-based page number."}
{"_id": "q_17722", "text": "Get absolute path to resource, works for dev and for PyInstaller"}
{"_id": "q_17723", "text": "Add new block of logbook selection windows. Only 5 allowed."}
{"_id": "q_17724", "text": "Return selected log books by type."}
{"_id": "q_17725", "text": "Verify the entered user name is on the accepted MCC logbook list."}
{"_id": "q_17726", "text": "Parse xml elements for pretty printing"}
{"_id": "q_17727", "text": "Convert supplied QPixmap object to image file."}
{"_id": "q_17728", "text": "Process log information and push to selected logbooks."}
{"_id": "q_17729", "text": "Create graphical objects for menus."}
{"_id": "q_17730", "text": "Display menus and connect event signals."}
{"_id": "q_17731", "text": "Populate log program list to correspond with log type selection."}
{"_id": "q_17732", "text": "Add menus to parent gui."}
{"_id": "q_17733", "text": "Iteratively remove graphical objects from layout."}
{"_id": "q_17734", "text": "Adds labels to a plot."}
{"_id": "q_17735", "text": "Update the database with model schema. Shorthand for `paver manage syncdb`."}
{"_id": "q_17736", "text": "Run the dev server.\n\n Uses `django_extensions <http://pypi.python.org/pypi/django-extensions/0.5>`, if\n available, to provide `runserver_plus`.\n\n Set the command to use with `options.paved.django.runserver`\n Set the port to use with `options.paved.django.runserver_port`"}
{"_id": "q_17737", "text": "Run South's schemamigration command."}
{"_id": "q_17738", "text": "Given configuration initiate an InternalSigningService instance\n\n :param config: The signing service configuration\n :param entity_id: The entity identifier\n :return: A InternalSigningService instance"}
{"_id": "q_17739", "text": "Creates a signed JWT\n\n :param req: Original metadata statement as a\n :py:class:`MetadataStatement` instance\n :param receiver: The intended audience for the JWS\n :param iss: Issuer or the JWT\n :param lifetime: Lifetime of the signature\n :param sign_alg: Which signature algorithm to use\n :param aud: The audience, a list of receivers.\n :return: A signed JWT"}
{"_id": "q_17740", "text": "Uses POST to send a first metadata statement signing request to\n a signing service.\n\n :param req: The metadata statement that the entity wants signed\n :return: returns a dictionary with 'sms' and 'loc' as keys."}
{"_id": "q_17741", "text": "Uses PUT to update an earlier accepted and signed metadata statement.\n\n :param location: A URL to which the update request is sent\n :param req: The diff between what is registered with the signing\n service and what it should be.\n :return: returns a dictionary with 'sms' and 'loc' as keys."}
{"_id": "q_17742", "text": "Uses GET to get a newly signed metadata statement.\n\n :param location: A URL to which the request is sent\n :return: returns a dictionary with 'sms' and 'loc' as keys."}
{"_id": "q_17743", "text": "Yield bundle contents from the given dict.\n\n Each item yielded will be either a string representing a file path\n or a bundle."}
{"_id": "q_17744", "text": "Return a bundle initialised by the given dict."}
{"_id": "q_17745", "text": "Return all html tags for all asset_type"}
{"_id": "q_17746", "text": "This static method validates a BioMapMapper definition.\n It returns None on success and throws an exception otherwise."}
{"_id": "q_17747", "text": "Returns all data entries for a particular key. Default is the main key.\n\n Args:\n\n key (str): key whose values to return (default: main key)\n\n Returns:\n\n List of all data entries for the key"}
{"_id": "q_17748", "text": "Get modules by project_abspath and packages_scan.\n\n Traverse all files under the packages_scan folder, which is set by the\n customer, and get all module names."}
{"_id": "q_17749", "text": "Import customer's service module."}
{"_id": "q_17750", "text": "This function takes a date string in various formats\n and converts it to a normalized and validated date range. A list\n with two elements is returned, lower and upper date boundary.\n\n Valid inputs are, for example:\n 2012 => Jan 1 2012 - Dec 31 2012 (whole year)\n 201201 => Jan 1 2012 - Jan 31 2012 (whole month)\n 20120101 => Jan 1 2012 - Jan 1 2012 (whole day)\n 2011-2011 => same as \"2011\", which means whole year 2011\n 2011-2012 => Jan 1 2011 - Dec 31 2012 (two years)\n 201104-2012 => Apr 1 2011 - Dec 31 2012\n 201104-201203 => Apr 1 2011 - March 31 2012\n 20110408-2011 => Apr 8 2011 - Dec 31 2011\n 20110408-201105 => Apr 8 2011 - May 31 2011\n 20110408-20110507 => Apr 8 2011 - May 07 2011\n 2011- => Jan 1 2011 - Dec 31 9999 (unlimited)\n 201104- => Apr 1 2011 - Dec 31 9999 (unlimited)\n 20110408- => Apr 8 2011 - Dec 31 9999 (unlimited)\n -2011 => Jan 1 0000 - Dec 31 2011\n -201104 => Jan 1 0000 - Apr 30, 2011\n -20110408 => Jan 1 0000 - Apr 8, 2011"}
{"_id": "q_17751", "text": "Get Existing Message\n\n http://dev.wheniwork.com/#get-existing-message"}
{"_id": "q_17752", "text": "Returns a list of sites.\n\n http://dev.wheniwork.com/#listing-sites"}
{"_id": "q_17753", "text": "For all the datetime fields in \"datemap\", find that key in doc and map the datetime object to\n a strftime string, so that pprint and others will print out readable datetimes."}
{"_id": "q_17754", "text": "Output a cursor to a filename or stdout if filename is \"-\".\n fmt defines whether we output CSV or JSON."}
{"_id": "q_17755", "text": "Output all fields using the fieldNames list. For fields in the datemap list, the field must\n be a date."}
{"_id": "q_17756", "text": "Given a list of tasks to perform and a dependency graph, return the tasks\n that must be performed, in the correct order"}
{"_id": "q_17757", "text": "Add or create the default departments for the given project\n\n :param project: the project that needs default departments\n :type project: :class:`muke.models.Project`\n :returns: None\n :rtype: None\n :raises: None"}
{"_id": "q_17758", "text": "Add or create the default sequences for the given project\n\n :param project: the project that needs default sequences\n :type project: :class:`muke.models.Project`\n :returns: None\n :rtype: None\n :raises: None"}
{"_id": "q_17759", "text": "Add a rnd shot for every user in the project\n\n :param project: the project that needs its rnd shots updated\n :type project: :class:`muke.models.Project`\n :returns: None\n :rtype: None\n :raises: None"}
{"_id": "q_17760", "text": "Post save receiver for when a Project is saved.\n\n Creates a rnd shot for every user.\n\n On creation does:\n\n 1. create all default departments\n 2. create all default assettypes\n 3. create all default sequences\n\n :param sender: the project class\n :type sender: :class:`muke.models.Project`\n :returns: None\n :raises: None"}
{"_id": "q_17761", "text": "Returns a link to a view that moves the passed in object down in rank.\n\n :param obj:\n Object to move\n :param link_text:\n Text to display in the link. Defaults to \"down\"\n :returns:\n HTML link code to view for moving the object"}
{"_id": "q_17762", "text": "Sends a packet to a peer."}
{"_id": "q_17763", "text": "Shows a figure with a typical orientation so that x and y axes are set up as expected."}
{"_id": "q_17764", "text": "Shifts indices as needed to account for one based indexing\n\n Positive indices need to be reduced by one to match with zero based\n indexing.\n\n Zero is not a valid input, and as such will throw a value error.\n\n Arguments:\n index - index to shift"}
{"_id": "q_17765", "text": "Returns selected positions from cut input source in desired\n arrangement.\n\n Argument:\n line - input to cut"}
{"_id": "q_17766", "text": "Processes positions to account for ranges\n\n Arguments:\n positions - list of positions and/or ranges to process"}
{"_id": "q_17767", "text": "Performs cut for range from start position to end\n\n Arguments:\n line - input to cut\n start - start of range\n current_position - current position in main cut function"}
{"_id": "q_17768", "text": "Creates list of values in a range with output delimiters.\n\n Arguments:\n start - range start\n end - range end"}
{"_id": "q_17769", "text": "Read customer's config value by section and key.\n\n :param section: config file's section. i.e [default]\n :param key: config file's key under section. i.e packages_scan\n :param return_type: return value type, str | int | bool."}
{"_id": "q_17770", "text": "Locks the file by writing a '.lock' file.\n Returns True when the file is locked and\n False when the file was locked already"}
{"_id": "q_17771", "text": "Unlocks the file by removing a '.lock' file.\n Returns True when the file is unlocked and\n False when the file was unlocked already"}
{"_id": "q_17772", "text": "Initiate the local catalog and push it to the cloud"}
{"_id": "q_17773", "text": "Initiate the local catalog by downloading the cloud catalog"}
{"_id": "q_17774", "text": "Return nodes in the path between 'a' and 'b' going from\n parent to child NOT including 'a'"}
{"_id": "q_17775", "text": "Cinder annotation for adding function to process cinder notification.\n\n if event_type includes a wildcard, {pattern: function} will be put into the process_wildcard dict\n else {event_type: function} will be put into the process dict\n\n :param arg: event_type of notification"}
{"_id": "q_17776", "text": "Neutron annotation for adding function to process neutron notification.\n\n if event_type includes a wildcard, {pattern: function} will be put into the process_wildcard dict\n else {event_type: function} will be put into the process dict\n\n :param arg: event_type of notification"}
{"_id": "q_17777", "text": "Glance annotation for adding function to process glance notification.\n\n if event_type includes a wildcard, {pattern: function} will be put into the process_wildcard dict\n else {event_type: function} will be put into the process dict\n\n :param arg: event_type of notification"}
{"_id": "q_17778", "text": "Swift annotation for adding function to process swift notification.\n\n if event_type includes a wildcard, {pattern: function} will be put into the process_wildcard dict\n else {event_type: function} will be put into the process dict\n\n :param arg: event_type of notification"}
{"_id": "q_17779", "text": "Heat annotation for adding function to process heat notification.\n\n if event_type includes a wildcard, {pattern: function} will be put into the process_wildcard dict\n else {event_type: function} will be put into the process dict\n\n :param arg: event_type of notification"}
{"_id": "q_17780", "text": "Create and save an admin user.\n\n :param username:\n Admin account's username. Defaults to 'admin'\n :param email:\n Admin account's email address. Defaults to 'admin@admin.com'\n :param password:\n Admin account's password. Defaults to 'admin'\n :returns:\n Django user with staff and superuser privileges"}
{"_id": "q_17781", "text": "Returns a list of the messages from the django MessageMiddleware\n package contained within the given response. This is to be used during\n unit testing when trying to see if a message was set properly in a view.\n\n :param response: HttpResponse object, likely obtained through a\n test client.get() or client.post() call\n\n :returns: a list of tuples (message_string, message_level), one for each\n message in the response context"}
{"_id": "q_17782", "text": "Authenticates the superuser account via the web login."}
{"_id": "q_17783", "text": "Does a django test client ``get`` against the given url after\n logging in the admin first.\n\n :param url:\n URL to fetch\n :param response_code:\n Expected response code from the URL fetch. This value is\n asserted. Defaults to 200\n :param headers:\n Optional dictionary of headers to send in the request\n :param follow:\n When True, the get call will follow any redirect requests.\n Defaults to False.\n :returns:\n Django testing ``Response`` object"}
{"_id": "q_17784", "text": "Does a django test client ``post`` against the given url after\n logging in the admin first.\n\n :param url:\n URL to fetch\n :param data:\n Dictionary to form contents to post\n :param response_code:\n Expected response code from the URL fetch. This value is\n asserted. Defaults to 200\n :param headers:\n Optional dictionary of headers to send in with the request\n :returns:\n Django testing ``Response`` object"}
{"_id": "q_17785", "text": "Adds a factory.\n\n After calling this method, remote clients will be able to\n connect to it.\n\n This will call ``factory.doStart``."}
{"_id": "q_17786", "text": "Attempts to connect using a given factory.\n\n This will find the requested factory and use it to build a\n protocol as if the AMP protocol's peer was making the\n connection. It will create a transport for the protocol and\n connect it immediately. It will then store the protocol under\n a unique identifier, and return that identifier."}
{"_id": "q_17787", "text": "Receives some data for the given protocol."}
{"_id": "q_17788", "text": "Disconnects the given protocol."}
{"_id": "q_17789", "text": "Shorthand for ``callRemote``.\n\n This uses the factory's connection to the AMP peer."}
{"_id": "q_17790", "text": "Stores a reference to the connection, registers this protocol on\n the factory as one related to a multiplexed AMP connection,\n and sends currently buffered data. Gets rid of the buffer\n afterwards."}
{"_id": "q_17791", "text": "Received some data from the local side.\n\n If we have set up the multiplexed connection, sends the data\n over the multiplexed connection. Otherwise, buffers."}
{"_id": "q_17792", "text": "Actually sends data over the wire."}
{"_id": "q_17793", "text": "If we already have an AMP connection registered on the factory,\n get rid of it."}
{"_id": "q_17794", "text": "Some data was received from the remote end. Find the matching\n protocol and replay it."}
{"_id": "q_17795", "text": "The other side has asked us to disconnect."}
{"_id": "q_17796", "text": "Highest value of input image."}
{"_id": "q_17797", "text": "spawns a greenlet that does not print exceptions to the screen.\n if you use this function you MUST use this module's join or joinall otherwise the exception will be lost"}
{"_id": "q_17798", "text": "Setup argparser to process arguments and generate help"}
{"_id": "q_17799", "text": "Takes a string, centres it, and pads it on both sides"}
{"_id": "q_17800", "text": "Takes a string, and prints it with the time right aligned"}
{"_id": "q_17801", "text": "Takes the parts of a semantic version number, and returns a nicely\n formatted string."}
{"_id": "q_17802", "text": "Check that a value has physical type consistent with user-specified units\n\n Note that this does not convert the value, only check that the units have\n the right physical dimensionality.\n\n Parameters\n ----------\n name : str\n The name of the value to check (used for error messages).\n value : `numpy.ndarray` or instance of `numpy.ndarray` subclass\n The value to check.\n target_unit : unit\n The unit that the value should be convertible to.\n unit_framework : str\n The unit framework to use"}
{"_id": "q_17803", "text": "Apply standard padding.\n\n :Parameters:\n data_to_pad : byte string\n The data that needs to be padded.\n block_size : integer\n The block boundary to use for padding. The output length is guaranteed\n to be a multiple of ``block_size``.\n style : string\n Padding algorithm. It can be *'pkcs7'* (default), *'iso7816'* or *'x923'*.\n :Return:\n The original data with the appropriate padding added at the end."}
{"_id": "q_17804", "text": "Opens connection to S3 returning bucket and key"}
{"_id": "q_17805", "text": "Upload a local file to S3."}
{"_id": "q_17806", "text": "Creates an ical .ics file for an event using python-card-me."}
{"_id": "q_17807", "text": "Returns a list view of updates for a given event.\n If the event is over, it will be in chronological order.\n If the event is upcoming or still going,\n it will be in reverse chronological order."}
{"_id": "q_17808", "text": "Displays list of videos for given event."}
{"_id": "q_17809", "text": "Sign the extended request.\n\n :param req: Request, a :py:class:`fedoidcmsg.MetadataStatement` instance\n :param receiver: The intended user of this metadata statement\n :param aud: The audience, a list of receivers.\n :return: An augmented set of request arguments"}
{"_id": "q_17810", "text": "Only gathers metadata statements and returns them.\n\n :param fos: Signed metadata statements from these Federation Operators\n should be added.\n :param context: context of the metadata exchange\n :return: Dictionary with signed Metadata Statements as values"}
{"_id": "q_17811", "text": "Prints the anagram results sorted by score to stdout.\n\n Args:\n input_word: the base word we searched on\n anagrams: generator of (word, score) from anagrams_in_word\n by_length: a boolean to declare printing by length instead of score"}
{"_id": "q_17812", "text": "Argparse logic, command line options.\n\n Args:\n args: sys.argv[1:], everything passed to the program after its name\n\n Returns:\n A tuple of:\n a list of words/letters to search\n a boolean to declare if we want to use the sowpods words file\n a boolean to declare if we want to output anagrams by length\n a string of starting characters to find anagrams based on\n a string of ending characters to find anagrams based on\n\n Raises:\n SystemExit if the user passes invalid arguments, --version or --help"}
{"_id": "q_17813", "text": "Main command line entry point."}
{"_id": "q_17814", "text": "Invoked if a packet with an unregistered type was received.\n \n Default behaviour is to log and close the connection."}
{"_id": "q_17815", "text": "Called from remote to ask if a call made to here is still in progress."}
{"_id": "q_17816", "text": "Inserts Interpreter Library of imports into sketch in a very non-consensual way"}
{"_id": "q_17817", "text": "Defers to `amp.AmpList`, then gets the element from the list."}
{"_id": "q_17818", "text": "Parse simple JWKS or signed JWKS from the HTTP response.\n\n :param response: HTTP response from the 'jwks_uri' or 'signed_jwks_uri'\n endpoint\n :return: response parsed as JSON or None"}
{"_id": "q_17819", "text": "Sets the beam moments indirectly using Courant-Snyder parameters.\n\n Parameters\n ----------\n beta : float\n Courant-Snyder parameter :math:`\\\\beta`.\n alpha : float\n Courant-Snyder parameter :math:`\\\\alpha`.\n emit : float\n Beam emittance :math:`\\\\epsilon`.\n emit_n : float\n Normalized beam emittance :math:`\\\\gamma \\\\epsilon`."}
{"_id": "q_17820", "text": "Given a slice object, return appropriate values for use in the range function\n\n :param slice_obj: The slice object or integer provided in the `[]` notation\n :param length: For negative indexing we need to know the max length of the object."}
{"_id": "q_17821", "text": "Performs a pg_dump backup.\n\n It runs with the current systemuser's privileges, unless you specify\n username and password.\n\n By default pg_dump connects to the value given in the PGHOST environment\n variable.\n You can either specify \"hostname\" and \"port\" or a socket path.\n\n pg_dump expects the pg_dump-utility to be on $PATH.\n Should that not be the case you are allowed to specify a custom location with\n \"pg_dump_path\"\n\n Format is p (plain / default), c = custom, d = directory, t = tar\n\n returns statuscode and shelloutput"}
{"_id": "q_17822", "text": "Helper to add error to messages field. It fills placeholder with extra call parameters\n or values from message_value map.\n\n :param error_code: Error code to use\n :type error_code: str\n :param value: Value checked\n :param kwargs: Map of values to use in placeholders"}
{"_id": "q_17823", "text": "Returns a dictionary of all the files under a path."}
{"_id": "q_17824", "text": "Syncs a local directory with an S3 bucket.\n \n Currently does not delete files from S3 that are not in the local directory.\n\n path: The path to the directory to sync to S3\n bucket: The name of the bucket on S3"}
{"_id": "q_17825", "text": "Ensure the user has the necessary tokens for the specified services"}
{"_id": "q_17826", "text": "File copy that supports compression and decompression of zip files"}
{"_id": "q_17827", "text": "Apply to the 'catalog' the changesets in the metafile list 'changesets'"}
{"_id": "q_17828", "text": "When entering the context, spawns a greenlet that sleeps for `interval` seconds between `callback` executions.\n When leaving the context stops the greenlet.\n The yielded object is the `GeventLoop` object so the loop can be stopped from within the context.\n\n For example:\n ```\n with loop_in_background(60.0, purge_cache) as purge_cache_job:\n ...\n ...\n if should_stop_cache():\n purge_cache_job.stop()\n ```"}
{"_id": "q_17829", "text": "Main loop - used internally."}
{"_id": "q_17830", "text": "Starts the loop. Calling a running loop is an error."}
{"_id": "q_17831", "text": "Return an already closed read-only instance of Fridge.\n Arguments are the same as for the constructor."}
{"_id": "q_17832", "text": "Create a signed JWT containing a JWKS. The JWT is signed by one of the\n keys in the JWKS.\n\n :param keyjar: A KeyJar instance with at least one private signing key\n :param iss: issuer of the JWT, should be the owner of the keys\n :param kid: A key ID if a special key should be used otherwise one\n is picked at random.\n :param lifetime: The lifetime of the signed JWT\n :return: A signed JWT"}
{"_id": "q_17833", "text": "Fix common spacing errors caused by LaTeX's habit\n of using an inter-sentence space after any full stop."}
{"_id": "q_17834", "text": "Transform hyphens to various kinds of dashes"}
{"_id": "q_17835", "text": "Regex substitute target with replacement"}
{"_id": "q_17836", "text": "Upload the docs to a remote location via rsync.\n\n `options.paved.docs.rsync_location`: the target location to rsync files to.\n\n `options.paved.docs.path`: the path to the Sphinx folder (where the Makefile resides).\n\n `options.paved.docs.build_rel`: the path of the documentation\n build folder, relative to `options.paved.docs.path`."}
{"_id": "q_17837", "text": "Push Sphinx docs to github_ gh-pages branch.\n\n 1. Create file .nojekyll\n 2. Push the branch to origin/gh-pages\n after committing using ghp-import_\n\n Requirements:\n - easy_install ghp-import\n\n Options:\n - `options.paved.docs.*` is not used\n - `options.sphinx.docroot` is used (default=docs)\n - `options.sphinx.builddir` is used (default=.build)\n\n .. warning::\n This will DESTROY your gh-pages branch.\n If you love it, you'll want to take backups\n before playing with this. This script assumes\n that gh-pages is 100% derivative. You should\n never edit files in your gh-pages branch by hand\n if you're using this script because you will\n lose your work.\n\n .. _github: https://github.com\n .. _ghp-import: https://github.com/davisp/ghp-import"}
{"_id": "q_17838", "text": "Open your web browser and display the generated html documentation."}
{"_id": "q_17839", "text": "Tries to minimize the length of CSS code passed as parameter. Returns string."}
{"_id": "q_17840", "text": "A decorator for providing a unittest with a library and having it called only\n once."}
{"_id": "q_17841", "text": "Discover and load greencard tests."}
{"_id": "q_17842", "text": "Returns the Scrabble score of a letter.\n\n Args:\n letter: a single character string\n\n Raises:\n TypeError if a non-Scrabble character is supplied"}
{"_id": "q_17843", "text": "Opens the word list file.\n\n Args:\n sowpods: a boolean to declare using the sowpods list or TWL (default)\n start: a string of starting characters to find anagrams based on\n end: a string of ending characters to find anagrams based on\n\n Yields:\n a word at a time out of 178691 words for TWL, 267751 for sowpods. Much\n less if either start or end are used (filtering is applied here)"}
{"_id": "q_17844", "text": "Checks if the input word could be played with a full bag of tiles.\n\n Returns:\n True or false"}
{"_id": "q_17845", "text": "docstring for main"}
{"_id": "q_17846", "text": "docstring for argparse"}
{"_id": "q_17847", "text": "Create the tasks on the server"}
{"_id": "q_17848", "text": "Update existing tasks on the server"}
{"_id": "q_17849", "text": "Prompts the user for yes or no."}
{"_id": "q_17850", "text": "Prompts the user with custom options."}
{"_id": "q_17851", "text": "Reads the configuration file and adds non-existing attributes to 'args'"}
{"_id": "q_17852", "text": "Returns a copy of this object"}
{"_id": "q_17853", "text": "Returns a Tag with a given revision"}
{"_id": "q_17854", "text": "Tiles open figures."}
{"_id": "q_17855", "text": "When a Comment is added, updates the Update to set \"last_updated\" time"}
{"_id": "q_17856", "text": "Handle a JSON AMP dialect request.\n\n First, the JSON is parsed. Then, all JSON dialect specific\n values in the request are turned into the correct objects.\n Then, finds the correct responder function, calls it, and\n serializes the result (or error)."}
{"_id": "q_17857", "text": "Gets the command class and matching responder function for the\n given command name."}
{"_id": "q_17858", "text": "Parses all the values in the request that are in a form specific\n to the JSON AMP dialect."}
{"_id": "q_17859", "text": "Serializes the response to JSON, and writes it to the transport."}
{"_id": "q_17860", "text": "Tells the box receiver to stop receiving boxes."}
{"_id": "q_17861", "text": "Adds useful global items to the context for use in templates.\n\n * *request*: the request object\n * *HOST*: host name of server\n * *IN_ADMIN*: True if you are in the django admin area"}
{"_id": "q_17862", "text": "Create the challenge on the server"}
{"_id": "q_17863", "text": "Update existing challenge on the server"}
{"_id": "q_17864", "text": "Check if a challenge exists on the server"}
{"_id": "q_17865", "text": "Convert a JWKS to a KeyJar instance.\n\n :param jwks: String representation of a JWKS\n :return: A :py:class:`oidcmsg.key_jar.KeyJar` instance"}
{"_id": "q_17866", "text": "Returns position data.\n\n http://dev.wheniwork.com/#get-existing-position"}
{"_id": "q_17867", "text": "Returns a list of positions.\n\n http://dev.wheniwork.com/#listing-positions"}
{"_id": "q_17868", "text": "Handle HTTP exception\n\n :param werkzeug.exceptions.HTTPException exception: Raised exception\n\n A response is returned, as formatted by the :py:func:`response` function."}
{"_id": "q_17869", "text": "Returns True if the value given is a valid CSS colour, i.e. matches one\n of the regular expressions in the module or is in the list of\n values predefined by the browser."}
{"_id": "q_17870", "text": "Reynolds number utility function that returns the Reynolds number for a vehicle at specific length and speed.\n Optionally, it can also take account of the temperature effect of sea water.\n\n Kinematic viscosity from: http://web.mit.edu/seawater/2017_MIT_Seawater_Property_Tables_r2.pdf\n\n :param length: metres length of the vehicle\n :param speed: m/s speed of the vehicle\n :param temperature: degree C\n :return: Reynolds number of the vehicle (dimensionless)"}
{"_id": "q_17871", "text": "Froude number utility function that returns the Froude number for a vehicle at specific length and speed.\n\n :param speed: m/s speed of the vehicle\n :param length: metres length of the vehicle\n :return: Froude number of the vehicle (dimensionless)"}
{"_id": "q_17872", "text": "Residual resistance coefficient estimation from slenderness function, prismatic coefficient and Froude number.\n\n :param slenderness: Slenderness coefficient dimensionless :math:`L/(\u2207^{1/3})` where L is length of ship, \u2207 is displacement\n :param prismatic_coef: Prismatic coefficient dimensionless :math:`\u2207/(L\\cdot A_m)` where L is length of ship, \u2207 is displacement Am is midsection area of the ship\n :param froude_number: Froude number of the ship dimensionless \n :return: Residual resistance of the ship"}
{"_id": "q_17873", "text": "Assign values for the main dimension of a ship.\n\n :param length: metres length of the vehicle\n :param draught: metres draught of the vehicle\n :param beam: metres beam of the vehicle\n :param speed: m/s speed of the vehicle\n :param slenderness_coefficient: Slenderness coefficient dimensionless :math:`L/(\u2207^{1/3})` where L is length of ship,\n \u2207 is displacement\n :param prismatic_coefficient: Prismatic coefficient dimensionless :math:`\u2207/(L\\cdot A_m)` where L is length of ship,\n \u2207 is displacement Am is midsection area of the ship"}
{"_id": "q_17874", "text": "Return the maximum deck area of the ship\n\n :param water_plane_coef: optional water plane coefficient\n :return: Area of the deck"}
{"_id": "q_17875", "text": "Total propulsion power of the ship.\n\n :param propulsion_eff: Shaft efficiency of the ship\n :param sea_margin: Sea margin takes account of interaction between ship and the sea, e.g. waves\n :return: Watts shaft propulsion power of the ship"}
{"_id": "q_17876", "text": "Configure the api to use given url and token or to get them from the\n Config."}
{"_id": "q_17877", "text": "Ensures that the request url is valid.\n Sometimes we have URLs that the server gives that are preformatted,\n sometimes we need to form our own."}
{"_id": "q_17878", "text": "Extract json from a response.\n Assumes response is valid otherwise.\n Internal use only."}
{"_id": "q_17879", "text": "This function deals with the cinder notification.\n\n First, find a process from customer_process that does not include a wildcard.\n If not found in customer_process, then find a process from customer_process_wildcard.\n If not found in customer_process_wildcard, then use the ternya default process.\n :param body: dict of openstack notification.\n :param message: kombu Message class\n :return:"}
{"_id": "q_17880", "text": "This function deals with the glance notification.\n\n First, find a process from customer_process that does not include a wildcard.\n If not found in customer_process, then find a process from customer_process_wildcard.\n If not found in customer_process_wildcard, then use the ternya default process.\n :param body: dict of openstack notification.\n :param message: kombu Message class\n :return:"}
{"_id": "q_17881", "text": "This function deals with the swift notification.\n\n First, find a process from customer_process that does not include a wildcard.\n If not found in customer_process, then find a process from customer_process_wildcard.\n If not found in customer_process_wildcard, then use the ternya default process.\n :param body: dict of openstack notification.\n :param message: kombu Message class\n :return:"}
{"_id": "q_17882", "text": "Wrapper for gevent.joinall; if the greenlet that waits for the joins is killed, it kills all the greenlets it\n joins for."}
{"_id": "q_17883", "text": "Creates an error Acknowledgement message.\n The message's code and message are taken from this exception.\n\n :return: the message representing this exception"}
{"_id": "q_17884", "text": "Serve app using wsgiref or provided server.\n\n Args:\n - server (callable): A callable"}
{"_id": "q_17885", "text": "\\\n Parses a binary protobuf message into a Message object."}
{"_id": "q_17886", "text": "Print 'msg' to stdout, and optionally log it at info level."}
{"_id": "q_17887", "text": "Print 'msg' to stderr, and optionally log it at info level."}
{"_id": "q_17888", "text": "Returns a list of the ancestors of this node."}
{"_id": "q_17889", "text": "Returns a list of descendents of this node."}
{"_id": "q_17890", "text": "Returns a list of nodes that would be removed if prune were called\n on this element."}
{"_id": "q_17891", "text": "A class decorator for Command classes to register in the default set."}
{"_id": "q_17892", "text": "A class decorator for Command classes to register."}
{"_id": "q_17893", "text": "If all of the constraints are satisfied with the given value, defers\n to the composed AMP argument's ``toString`` method."}
{"_id": "q_17894", "text": "Update existing task on the server"}
{"_id": "q_17895", "text": "Retrieve a task from the server"}
{"_id": "q_17896", "text": "Merges ``cdict`` into ``completers``. In the event that a key\n in cdict already exists in the completers dict a ValueError is raised\n iff ``regex`` is falsy. If a regex str is provided it and the duplicate\n key are updated to be unique, and the updated regex is returned."}
{"_id": "q_17897", "text": "Init connection and consumer with openstack mq."}
{"_id": "q_17898", "text": "Init openstack nova mq\n\n 1. Check if listening to nova notification is enabled\n 2. Create consumer\n\n :param mq: class ternya.mq.MQ"}
{"_id": "q_17899", "text": "Init openstack cinder mq\n\n 1. Check if listening to cinder notification is enabled\n 2. Create consumer\n\n :param mq: class ternya.mq.MQ"}
{"_id": "q_17900", "text": "Init openstack heat mq\n\n 1. Check if listening to heat notification is enabled\n 2. Create consumer\n\n :param mq: class ternya.mq.MQ"}
{"_id": "q_17901", "text": "Check if the customer enabled notification for an openstack component.\n\n :param openstack_component: Openstack component type."}
{"_id": "q_17902", "text": "Get music info from baidu music api"}
{"_id": "q_17903", "text": "Process for downloading music with multiple threads"}
{"_id": "q_17904", "text": "Returns user profile data.\n\n http://dev.wheniwork.com/#get-existing-user"}
{"_id": "q_17905", "text": "Returns a list of users.\n\n http://dev.wheniwork.com/#listing-users"}
{"_id": "q_17906", "text": "Recursively update the destination dict-like object with the source dict-like object.\n\n Useful for merging options and Bunches together!\n\n Based on:\n http://code.activestate.com/recipes/499335-recursively-update-a-dictionary-without-hitting-py/#c1"}
{"_id": "q_17907", "text": "When I Work GET method. Return representation of the requested\n resource."}
{"_id": "q_17908", "text": "When I Work POST method."}
{"_id": "q_17909", "text": "When I Work DELETE method."}
{"_id": "q_17910", "text": "Execute a code object\n \n The inputs and behavior of this function should match those of\n eval_ and exec_.\n\n .. _eval: https://docs.python.org/3/library/functions.html?highlight=eval#eval\n .. _exec: https://docs.python.org/3/library/functions.html?highlight=exec#exec\n\n .. note:: Need to figure out how the internals of this function must change for\n ``eval`` or ``exec``.\n\n :param code: a python code object\n :param globals_: optional globals dictionary\n :param _locals: optional locals dictionary"}
{"_id": "q_17911", "text": "Implement the CALL_FUNCTION_ operation.\n\n .. _CALL_FUNCTION: https://docs.python.org/3/library/dis.html#opcode-CALL_FUNCTION"}
{"_id": "q_17912", "text": "Delete existing shifts.\n\n http://dev.wheniwork.com/#delete-shift"}
{"_id": "q_17913", "text": "Gets images and videos to populate top assets.\n\n Map is built separately."}
{"_id": "q_17914", "text": "Overridden method that handles that re-ranking of objects and the\n integrity of the ``rank`` field.\n\n :param rerank:\n Added parameter, if True will rerank other objects based on the\n change in this save. Defaults to True."}
{"_id": "q_17915", "text": "Removes any blank ranks in the order."}
{"_id": "q_17916", "text": "Returns the field names of a Django model object.\n\n :param obj: the Django model class or object instance to get the fields\n from\n :param ignore_auto: ignore any fields of type AutoField. Defaults to True\n :param ignore_relations: ignore any fields that involve relations such as\n the ForeignKey or ManyToManyField\n :param exclude: exclude anything in this list from the results\n\n :returns: generator of found field names"}
{"_id": "q_17917", "text": "Register all HTTP error code error handlers\n\n Currently, errors are handled by the JSON error handler."}
{"_id": "q_17918", "text": "Plots but automatically resizes x axis.\n\n .. versionadded:: 1.4\n\n Parameters\n ----------\n args\n Passed on to :meth:`matplotlib.axis.Axis.plot`.\n ax : :class:`matplotlib.axis.Axis`, optional\n The axis to plot to.\n kwargs\n Passed on to :meth:`matplotlib.axis.Axis.plot`."}
{"_id": "q_17919", "text": "Passes the selected course as the first argument to func."}
{"_id": "q_17920", "text": "Passes the selected exercise as the first argument to func."}
{"_id": "q_17921", "text": "If func returns False the program exits immediately."}
{"_id": "q_17922", "text": "Configure tmc.py to use your account."}
{"_id": "q_17923", "text": "Download the exercises from the server."}
{"_id": "q_17924", "text": "Spawns a process with `command path-of-exercise`"}
{"_id": "q_17925", "text": "Select a course or an exercise."}
{"_id": "q_17926", "text": "Sends the selected exercise to the TMC pastebin."}
{"_id": "q_17927", "text": "Update the data of courses and or exercises from server."}
{"_id": "q_17928", "text": "Determine the type of x"}
{"_id": "q_17929", "text": "Apply the types on the elements of the line"}
{"_id": "q_17930", "text": "Convert a file to a .csv file"}
{"_id": "q_17931", "text": "Returns a link to the django admin change list with a filter set to\n only the object given.\n\n :param obj:\n Object to create the admin change list display link for\n :param display:\n Text to display in the link. Defaults to string call of the object\n :returns:\n Text containing HTML for a link"}
{"_id": "q_17932", "text": "Returns string representation of an object, either the default or based\n on the display template passed in."}
{"_id": "q_17933", "text": "Adds a ``list_display`` attribute that appears as a link to the\n django admin change page for the type of object being shown. Supports\n double underscore attribute name dereferencing.\n\n :param attr:\n Name of the attribute to dereference from the corresponding\n object, i.e. what will be linked to. This name supports double\n underscore object link referencing for ``models.ForeignKey``\n members.\n\n :param title:\n Title for the column of the django admin table. If not given it\n defaults to a capitalized version of ``attr``\n\n :param display:\n What to display as the text for the link being shown. If not\n given it defaults to the string representation of the object for\n the row: ``str(obj)``. This parameter supports django\n templating, the context for which contains a dictionary key named\n \"obj\" with the value being the object for the row.\n\n Example usage:\n\n .. code-block:: python\n\n # ---- admin.py file ----\n\n base = fancy_modeladmin('id')\n base.add_link('author', 'Our Authors',\n '{{obj.name}} (id={{obj.id}})')\n\n @admin.register(Book)\n class BookAdmin(base):\n pass\n\n The django admin change page for the Book class would have a column\n for \"id\" and another titled \"Our Authors\". The \"Our Authors\" column\n would have a link for each Author object referenced by \"book.author\".\n The link would go to the Author django admin change listing. The\n display of the link would be the name of the author with the id in\n brackets, e.g. \"Douglas Adams (id=42)\""}
{"_id": "q_17934", "text": "Adds a ``list_display`` attribute showing an object. Supports\n double underscore attribute name dereferencing.\n\n :param attr:\n Name of the attribute to dereference from the corresponding\n object, i.e. what will be linked to. This name supports double\n underscore object link referencing for ``models.ForeignKey``\n members.\n\n :param title:\n Title for the column of the django admin table. If not given it\n defaults to a capitalized version of ``attr``\n\n :param display:\n What to display as the text for the link being shown. If not\n given it defaults to the string representation of the object for\n the row: ``str(obj)``. This parameter supports django templating,\n the context for which contains a dictionary key named \"obj\" with\n the value being the object for the row."}
{"_id": "q_17935", "text": "Adds a ``list_display`` attribute showing a field in the object\n using a python %formatted string.\n\n :param field:\n Name of the field in the object.\n\n :param format_string:\n An old-style (to remain python 2.x compatible) % string formatter\n with a single variable reference. The named ``field`` attribute\n will be passed to the formatter using the \"%\" operator.\n\n :param title:\n Title for the column of the django admin table. If not given it\n defaults to a capitalized version of ``field``"}
{"_id": "q_17936", "text": "View decorator that enforces that the method was called using POST and\n contains a field containing a JSON dictionary. This method should\n only be used to wrap views and assumes the first argument of the method\n being wrapped is a ``request`` object.\n\n .. code-block:: python\n\n @json_post_required('data', 'json_data')\n def some_view(request):\n username = request.json_data['username']\n\n :param field:\n The name of the POST field that contains a JSON dictionary\n :param request_name:\n [optional] Name of the parameter on the request to put the\n deserialized JSON data. If not given the field name is used"}
{"_id": "q_17937", "text": "Performs a mysqldump backup.\n Creates a database dump for the given database.\n Returns status code and shell output."}
{"_id": "q_17938", "text": "Divergence of matched beam"}
{"_id": "q_17939", "text": "Performs some environment checks prior to the program's execution"}
{"_id": "q_17940", "text": "Prints latest tag's information"}
{"_id": "q_17941", "text": "Prompts user before proceeding"}
{"_id": "q_17942", "text": "Render ditaa code into a PNG output file."}
{"_id": "q_17943", "text": "Invoked in the 'finally' block of Application.run."}
{"_id": "q_17944", "text": "Add one tick to progress bar"}
{"_id": "q_17945", "text": "If called in the context of an exception, calls post_mortem; otherwise\n set_trace.\n ``ipdb`` is preferred over ``pdb`` if installed."}
{"_id": "q_17946", "text": "Push k to the top of the list\n\n >>> l = DLL()\n >>> l.push(1)\n >>> l\n [1]\n >>> l.push(2)\n >>> l\n [2, 1]\n >>> l.push(3)\n >>> l\n [3, 2, 1]"}
{"_id": "q_17947", "text": "Find the time this file was last modified.\n\n :param fname: File name\n :return: The last time the file was modified."}
{"_id": "q_17948", "text": "Find out if this item has been modified since last\n\n :param item: A key\n :return: True/False"}
{"_id": "q_17949", "text": "Goes through the directory and builds a local cache based on\n the content of the directory."}
{"_id": "q_17950", "text": "Completely resets the database. This means that all information in\n the local cache and on disc will be erased."}
{"_id": "q_17951", "text": "Print a loading message on screen\n\n .. note::\n\n the loading message is only written to `sys.stdout`\n\n\n :param int wait: seconds to wait\n :param str message: message to print\n :return: None"}
{"_id": "q_17952", "text": "A built-in wrapper to make dry-run easier.\n You should use this instead of `os.system`.\n\n .. note::\n\n to use it, you need to add a '--dry-run' option to\n your argparser options\n\n\n :param str cmd: command to execute\n :param bool fake_code: only display the command\n when True, default is False\n :return:"}
{"_id": "q_17953", "text": "Download the image and return the\n local path to the image file."}
{"_id": "q_17954", "text": "Returns a Corrected URL to be used for a Request\n as per the REST API."}
{"_id": "q_17955", "text": "Returns a template.Node subclass."}
{"_id": "q_17956", "text": "Find the stack frame of the caller so that we can note the source\n file name, line number and function name."}
{"_id": "q_17957", "text": "Determine if a PE_PE is contained within a EP_PKG or a C_C."}
{"_id": "q_17958", "text": "Convert a BridgePoint data type to a pyxtuml meta model type."}
{"_id": "q_17959", "text": "The two lists of attributes which relate two classes in an association."}
{"_id": "q_17960", "text": "Create a named tuple from a BridgePoint enumeration."}
{"_id": "q_17961", "text": "Create a python function from a BridgePoint bridge."}
{"_id": "q_17962", "text": "Create a python value from a BridgePoint constant."}
{"_id": "q_17963", "text": "Create a python property that interprets the action of a BridgePoint derived\n attribute."}
{"_id": "q_17964", "text": "Create a pyxtuml class from a BridgePoint class."}
{"_id": "q_17965", "text": "Create a pyxtuml association from a simple association in BridgePoint."}
{"_id": "q_17966", "text": "Create pyxtuml associations from a linked association in BridgePoint."}
{"_id": "q_17967", "text": "Create a pyxtuml meta model from a BridgePoint model. \n Optionally, restrict to classes and associations contained in the\n component c_c."}
{"_id": "q_17968", "text": "Sends REJECT reply."}
{"_id": "q_17969", "text": "Sends RAISE reply."}
{"_id": "q_17970", "text": "Dispatches the reply to the proper queue."}
{"_id": "q_17971", "text": "Main method.\n\n This method holds what you want to execute when\n the script is run on command line."}
{"_id": "q_17972", "text": "Guess the type name of a serialized value."}
{"_id": "q_17973", "text": "Deserialize a value of some type"}
{"_id": "q_17974", "text": "r'\\("}
{"_id": "q_17975", "text": "Retrieve a feature collection.\n\n If a feature collection with the given id does not\n exist, then ``None`` is returned.\n\n :param str content_id: Content identifier.\n :param [str] feature_names:\n A list of feature names to retrieve. When ``None``, all\n features are retrieved. Wildcards are allowed.\n :rtype: :class:`dossier.fc.FeatureCollection` or ``None``"}
{"_id": "q_17976", "text": "Returns an iterable of feature collections.\n\n This efficiently retrieves multiple FCs corresponding to the\n list of ids given. Tuples of identifier and feature collection\n are yielded. If the feature collection for a given id does not\n exist, then ``None`` is returned as the second element of the\n tuple.\n\n :param [str] content_ids: List of content ids.\n :param [str] feature_names:\n A list of feature names to retrieve. When ``None``, all\n features are retrieved. Wildcards are allowed.\n :rtype: Iterable of ``(content_id, FC)``"}
{"_id": "q_17977", "text": "Adds feature collections to the store.\n\n This efficiently adds multiple FCs to the store. The iterable\n of ``items`` given should yield tuples of ``(content_id, FC)``.\n\n :param items: Iterable of ``(content_id, FC)``.\n :param [str] feature_names:\n A list of feature names to retrieve. When ``None``, all\n features are retrieved. Wildcards are allowed."}
{"_id": "q_17978", "text": "Deletes the corresponding feature collection.\n\n If the FC does not exist, then this is a no-op."}
{"_id": "q_17979", "text": "Deletes the underlying ES index.\n\n Only use this if you know what you're doing. This destroys\n the entire underlying ES index, which could be shared by\n multiple distinct ElasticStore instances."}
{"_id": "q_17980", "text": "Scan for ids only in the given id ranges.\n\n :param key_ranges:\n ``key_ranges`` should be a list of pairs of ranges. The first\n value is the lower bound id and the second value is the\n upper bound id. Use ``()`` in either position to leave it\n unbounded. If no ``key_ranges`` are given, then all FCs in\n the store are returned.\n :param [str] feature_names:\n A list of feature names to retrieve. When ``None``, all\n features are retrieved. Wildcards are allowed.\n :rtype: Iterable of ``content_id``"}
{"_id": "q_17981", "text": "Scan for ids with a given prefix.\n\n :param str prefix: Identifier prefix.\n :param [str] feature_names:\n A list of feature names to retrieve. When ``None``, all\n features are retrieved. Wildcards are allowed.\n :rtype: Iterable of ``content_id``"}
{"_id": "q_17982", "text": "Fulltext search.\n\n Yields an iterable of triples (score, identifier, FC)\n corresponding to the search results of the fulltext search\n in ``query``. This will only search text indexed under the\n given feature named ``fname``.\n\n Note that, unless ``preserve_order`` is set to True, the\n ``score`` will always be 0.0, and the results will be\n unordered. ``preserve_order`` set to True will cause the\n results to be scored and be ordered by score, but you should\n expect to see a decrease in performance.\n\n :param str fname:\n The feature to search.\n :param unicode query:\n The query.\n :param [str] feature_names:\n A list of feature names to retrieve. When ``None``, all\n features are retrieved. Wildcards are allowed.\n :rtype: Iterable of ``(score, content_id, FC)``"}
{"_id": "q_17983", "text": "Fulltext search for identifiers.\n\n Yields an iterable of triples (score, identifier)\n corresponding to the search results of the fulltext search\n in ``query``. This will only search text indexed under the\n given feature named ``fname``.\n\n Note that, unless ``preserve_order`` is set to True, the\n ``score`` will always be 0.0, and the results will be\n unordered. ``preserve_order`` set to True will cause the\n results to be scored and be ordered by score, but you should\n expect to see a decrease in performance.\n\n :param str fname:\n The feature to search.\n :param unicode query:\n The query.\n :rtype: Iterable of ``(score, content_id)``"}
{"_id": "q_17984", "text": "Keyword scan for ids.\n\n This performs a keyword scan using the query given. A keyword\n scan searches for FCs with terms in each of the query's indexed\n fields.\n\n At least one of ``query_id`` or ``query_fc`` must be provided.\n If ``query_fc`` is ``None``, then the query is retrieved\n automatically corresponding to ``query_id``.\n\n :param str query_id: Optional query id.\n :param query_fc: Optional query feature collection.\n :type query_fc: :class:`dossier.fc.FeatureCollection`\n :rtype: Iterable of ``content_id``"}
{"_id": "q_17985", "text": "Low-level keyword index scan for ids.\n\n Retrieves identifiers of FCs that have a feature value\n ``val`` in the feature named ``fname``. Note that\n ``fname`` must be indexed.\n\n :param str fname: Feature name.\n :param str val: Feature value.\n :rtype: Iterable of ``content_id``"}
{"_id": "q_17986", "text": "Maps feature names to ES's \"_source\" field."}
{"_id": "q_17987", "text": "Create the index"}
{"_id": "q_17988", "text": "Retrieve the field mappings. Useful for debugging."}
{"_id": "q_17989", "text": "Retrieve the field types. Useful for debugging."}
{"_id": "q_17990", "text": "Take a feature collection in dict form and count its size in bytes."}
{"_id": "q_17991", "text": "Count bytes of all feature collections whose key satisfies one of\n the predicates in ``filter_preds``. The byte counts are binned\n by filter predicate."}
{"_id": "q_17992", "text": "construct a nice looking string for an FC"}
{"_id": "q_17993", "text": "Pickle and compress."}
{"_id": "q_17994", "text": "Escape the error, and wrap it in a span with class ``error-message``"}
{"_id": "q_17995", "text": "Displays the contact form and sends the email"}
{"_id": "q_17996", "text": "Create a human-readable representation of a link on the 'TO'-side"}
{"_id": "q_17997", "text": "Create a human-readable representation a unique identifier."}
{"_id": "q_17998", "text": "Check the model for uniqueness constraint violations."}
{"_id": "q_17999", "text": "Check the model for integrity violations across a subtype association."}
{"_id": "q_18000", "text": "Try to use gitconfig info:\n author, email, etc."}
{"_id": "q_18001", "text": "Returns a index creation function.\n\n Returns a valid index ``create`` function for the feature names\n given. This can be used with the :meth:`Store.define_index`\n method to create indexes on any combination of features in a\n feature collection.\n\n :type feature_names: list(unicode)\n :rtype: ``(val -> index val)\n -> (content_id, FeatureCollection)\n -> generator of [index val]``"}
{"_id": "q_18002", "text": "Add feature collections to the store.\n\n Given an iterable of tuples of the form\n ``(content_id, feature collection)``, add each to the store\n and overwrite any that already exist.\n\n This method optionally accepts a keyword argument `indexes`,\n which by default is set to ``True``. When it is ``True``,\n it will *create* new indexes for each content object for all\n indexes defined on this store.\n\n Note that this will not update existing indexes. (There is\n currently no way to do this without running some sort of\n garbage collection process.)\n\n :param iterable items: iterable of\n ``(content_id, FeatureCollection)``.\n :type fc: :class:`dossier.fc.FeatureCollection`"}
{"_id": "q_18003", "text": "Deletes all storage.\n\n This includes every content object and all index data."}
{"_id": "q_18004", "text": "Retrieve feature collections in a range of ids.\n\n Returns a generator of content objects corresponding to the\n content identifier ranges given. `key_ranges` can be a possibly\n empty list of 2-tuples, where the first element of the tuple\n is the beginning of a range and the second element is the end\n of a range. To specify the beginning or end of the table, use\n an empty tuple `()`.\n\n If the list is empty, then this yields all content objects in\n the storage.\n\n :param key_ranges: as described in\n :meth:`kvlayer._abstract_storage.AbstractStorage`\n :rtype: generator of\n (``content_id``, :class:`dossier.fc.FeatureCollection`)."}
{"_id": "q_18005", "text": "Returns ids that match an indexed value.\n\n Returns a generator of content identifiers that have an entry\n in the index ``idx_name`` with value ``val`` (after index\n transforms are applied).\n\n If the index named by ``idx_name`` is not registered, then a\n :exc:`~exceptions.KeyError` is raised.\n\n :param unicode idx_name: name of index\n :param val: the value to use to search the index\n :type val: unspecified (depends on the index, usually ``unicode``)\n :rtype: generator of ``content_id``\n :raises: :exc:`~exceptions.KeyError`"}
{"_id": "q_18006", "text": "Returns ids that match a prefix of an indexed value.\n\n Returns a generator of content identifiers that have an entry\n in the index ``idx_name`` with prefix ``val_prefix`` (after\n index transforms are applied).\n\n If the index named by ``idx_name`` is not registered, then a\n :exc:`~exceptions.KeyError` is raised.\n\n :param unicode idx_name: name of index\n :param val_prefix: the value to use to search the index\n :type val: unspecified (depends on the index, usually ``unicode``)\n :rtype: generator of ``content_id``\n :raises: :exc:`~exceptions.KeyError`"}
{"_id": "q_18007", "text": "Returns ids that match a prefix of an indexed value, and the\n specific key that matched the search prefix.\n\n Returns a generator of (index key, content identifier) that\n have an entry in the index ``idx_name`` with prefix\n ``val_prefix`` (after index transforms are applied).\n\n If the index named by ``idx_name`` is not registered, then a\n :exc:`~exceptions.KeyError` is raised.\n\n :param unicode idx_name: name of index\n :param val_prefix: the value to use to search the index\n :type val: unspecified (depends on the index, usually ``unicode``)\n :rtype: generator of (``index key``, ``content_id``)\n :raises: :exc:`~exceptions.KeyError`"}
{"_id": "q_18008", "text": "Implementation for index_scan_prefix and\n index_scan_prefix_and_return_key, parameterized on return\n value function.\n\n retfunc gets passed a key tuple from the index:\n (index name, index value, content_id)"}
{"_id": "q_18009", "text": "Add an index to this store instance.\n\n Adds an index transform to the current FC store. Once an index\n with name ``idx_name`` is added, it will be available in all\n ``index_*`` methods. Additionally, the index will be automatically\n updated on calls to :meth:`~dossier.fc.store.Store.put`.\n\n If an index with name ``idx_name`` already exists, then it is\n overwritten.\n\n Note that indexes do *not* persist. They must be re-defined for\n each instance of :class:`Store`.\n\n For example, to add an index on the ``boNAME`` feature, you can\n use the ``feature_index`` helper function:\n\n .. code-block:: python\n\n store.define_index('boNAME',\n feature_index('boNAME'),\n lambda s: s.encode('utf-8'))\n\n Another example for creating an index on names:\n\n .. code-block:: python\n\n store.define_index('NAME',\n feature_index('canonical_name', 'NAME'),\n lambda s: s.lower().encode('utf-8'))\n\n :param idx_name: The name of the index. Must be UTF-8 encodable.\n :type idx_name: unicode\n :param create: A function that accepts the ``transform`` function and\n a pair of ``(content_id, fc)`` and produces a generator\n of index values from the pair given using ``transform``.\n :param transform: A function that accepts an arbitrary value and\n applies a transform to it. This transforms the\n *stored* value to the *index* value. This *must*\n produce a value with type `str` (or `bytes`)."}
{"_id": "q_18010", "text": "Returns a generator of index triples.\n\n Returns a generator of index keys for the ``ids_and_fcs`` pairs\n given. The index keys have the form ``(idx_name, idx_val,\n content_id)``.\n\n :type idx_name: unicode\n :type ids_and_fcs: ``[(content_id, FeatureCollection)]``\n :rtype: generator of ``(str, str, str)``"}
{"_id": "q_18011", "text": "Return a new filename to use as the combined file name for a\n bunch of files, based on the SHA of their contents.\n A precondition is that they all have the same file extension\n\n Given that the list of files can have different paths, we aim to use the\n most common path.\n\n Example:\n /somewhere/else/foo.js\n /somewhere/bar.js\n /somewhere/different/too/foobar.js\n The result will be\n /somewhere/148713695b4a4b9083e506086f061f9c.js\n\n Another thing to note, if the filenames have timestamps in them, combine\n them all and use the highest timestamp."}
{"_id": "q_18012", "text": "Extract the orientation EXIF tag from the image, which should be a PIL Image instance,\n and if there is an orientation tag that would rotate the image, apply that rotation to\n the Image instance given to do an in-place rotation.\n\n :param Image im: Image instance to inspect\n :return: A possibly transposed image instance"}
{"_id": "q_18013", "text": "Start a new piece"}
{"_id": "q_18014", "text": "Start a new site."}
{"_id": "q_18015", "text": "Publish the site"}
{"_id": "q_18016", "text": "Returns a list of the branches"}
{"_id": "q_18017", "text": "Returns the currently active branch"}
{"_id": "q_18018", "text": "Create a patch between tags"}
{"_id": "q_18019", "text": "Create a callable that applies ``func`` to a value in a sequence.\n\n If the value is not a sequence or is an empty sequence then ``None`` is\n returned.\n\n :type func: `callable`\n :param func: Callable to be applied to each result.\n\n :type n: `int`\n :param n: Index of the value to apply ``func`` to."}
{"_id": "q_18020", "text": "Create a callable that applies ``func`` to every value in a sequence.\n\n If the value is not a sequence then an empty list is returned.\n\n :type func: `callable`\n :param func: Callable to be applied to the first result."}
{"_id": "q_18021", "text": "Parse a value as an integer.\n\n :type value: `unicode` or `bytes`\n :param value: Text value to parse\n\n :type base: `unicode` or `bytes`\n :param base: Base to assume ``value`` is specified in.\n\n :type encoding: `bytes`\n :param encoding: Encoding to treat ``bytes`` values as, defaults to\n ``utf-8``.\n\n :rtype: `int`\n :return: Parsed integer or ``None`` if ``value`` could not be parsed as an\n integer."}
{"_id": "q_18022", "text": "Parse a value as a boolean.\n\n :type value: `unicode` or `bytes`\n :param value: Text value to parse.\n\n :type true: `tuple` of `unicode`\n :param true: Values to compare, ignoring case, for ``True`` values.\n\n :type false: `tuple` of `unicode`\n :param false: Values to compare, ignoring case, for ``False`` values.\n\n :type encoding: `bytes`\n :param encoding: Encoding to treat `bytes` values as, defaults to\n ``utf-8``.\n\n :rtype: `bool`\n :return: Parsed boolean or ``None`` if ``value`` did not match ``true`` or\n ``false`` values."}
{"_id": "q_18023", "text": "Parse a value as a delimited list.\n\n :type value: `unicode` or `bytes`\n :param value: Text value to parse.\n\n :type parser: `callable` taking a `unicode` parameter\n :param parser: Callable to map over the delimited text values.\n\n :type delimiter: `unicode`\n :param delimiter: Delimiter text.\n\n :type encoding: `bytes`\n :param encoding: Encoding to treat `bytes` values as, defaults to\n ``utf-8``.\n\n :rtype: `list`\n :return: List of parsed values."}
{"_id": "q_18024", "text": "Parse a value as a POSIX timestamp in seconds.\n\n :type value: `unicode` or `bytes`\n :param value: Text value to parse, which should be the number of seconds\n since the epoch.\n\n :type _divisor: `float`\n :param _divisor: Number to divide the value by.\n\n :type tz: `tzinfo`\n :param tz: Timezone, defaults to UTC.\n\n :type encoding: `bytes`\n :param encoding: Encoding to treat `bytes` values as, defaults to\n ``utf-8``.\n\n :rtype: `datetime.datetime`\n :return: Parsed datetime or ``None`` if ``value`` could not be parsed."}
{"_id": "q_18025", "text": "Parse query parameters.\n\n :type expected: `dict` mapping `bytes` to `callable`\n :param expected: Mapping of query argument names to argument parsing\n callables.\n\n :type query: `dict` mapping `bytes` to `list` of `bytes`\n :param query: Mapping of query argument names to lists of argument values,\n this is the form that Twisted Web's `IRequest.args\n <twisted:twisted.web.iweb.IRequest.args>` value takes.\n\n :rtype: `dict` mapping `bytes` to `object`\n :return: Mapping of query argument names to parsed argument values."}
{"_id": "q_18026", "text": "Put metrics to CloudWatch. Metric should be an instance or list of\n instances of CloudWatchMetric"}
{"_id": "q_18027", "text": "Adapt a result to `IResource`.\n\n Several adaptions are tried they are, in order: ``None``,\n `IRenderable <twisted:twisted.web.iweb.IRenderable>`, `IResource\n <twisted:twisted.web.resource.IResource>`, and `URLPath\n <twisted:twisted.python.urlpath.URLPath>`. Anything else is returned as\n is.\n\n A `URLPath <twisted:twisted.python.urlpath.URLPath>` is treated as\n a redirect."}
{"_id": "q_18028", "text": "Handle the result from `IResource.render`.\n\n If the result is a `Deferred` then return `NOT_DONE_YET` and add\n a callback to write the result to the request when it arrives."}
{"_id": "q_18029", "text": "get the xsd name of a S_DT"}
{"_id": "q_18030", "text": "Get the referred attribute."}
{"_id": "q_18031", "text": "Build an xsd simpleType out of a S_CDT."}
{"_id": "q_18032", "text": "Build an xsd simpleType out of a S_EDT."}
{"_id": "q_18033", "text": "Build an xsd simpleType out of a S_UDT."}
{"_id": "q_18034", "text": "Build a partial xsd tree out of a S_DT and its sub types S_CDT, S_EDT, S_SDT and S_UDT."}
{"_id": "q_18035", "text": "Build an xsd complex element out of a O_OBJ, including its O_ATTR."}
{"_id": "q_18036", "text": "Indent an xml string with four spaces, and add an additional line break after each node."}
{"_id": "q_18037", "text": "Split an HTTP header whose components are separated with commas.\n\n Each component is then split on semicolons and the component arguments\n converted into a `dict`.\n\n @return: `list` of 2-`tuple` of `bytes`, `dict`\n @return: List of header arguments and mapping of component argument names\n to values."}
{"_id": "q_18038", "text": "Extract an encoding from a ``Content-Type`` header.\n\n @type requestHeaders: `twisted.web.http_headers.Headers`\n @param requestHeaders: Request headers.\n\n @type encoding: `bytes`\n @param encoding: Default encoding to assume if the ``Content-Type``\n header is lacking one. Defaults to ``UTF-8``.\n\n @rtype: `bytes`\n @return: Content encoding."}
{"_id": "q_18039", "text": "Create a nil-safe callable decorator.\n\n If the wrapped callable receives ``None`` as its argument, it will return\n ``None`` immediately."}
{"_id": "q_18040", "text": "Gets the full list of bikes from the bikeregister site.\n The data is hidden behind a form post request and so\n we need to extract an xsrf and session token with bs4.\n\n todo add pytest tests\n\n :return: All the currently registered bikes.\n :raise ApiError: When there was an error connecting to the API."}
{"_id": "q_18041", "text": "set positional information on a node"}
{"_id": "q_18042", "text": "r\"\\=\\="}
{"_id": "q_18043", "text": "r\"!\\="}
{"_id": "q_18044", "text": "r\"\\<\\="}
{"_id": "q_18045", "text": "r\"\\>\\="}
{"_id": "q_18046", "text": "r\"\\="}
{"_id": "q_18047", "text": "r\"\\."}
{"_id": "q_18048", "text": "r\"\\["}
{"_id": "q_18049", "text": "r\"\\?"}
{"_id": "q_18050", "text": "r\"\\<"}
{"_id": "q_18051", "text": "r\"\\>"}
{"_id": "q_18052", "text": "r\"\\+"}
{"_id": "q_18053", "text": "Get the version from version module without importing more than\n necessary."}
{"_id": "q_18054", "text": "Check if a tx is confirmed, else resend it.\n\n :param use_open_peers: select random peers from the api/peers endpoint"}
{"_id": "q_18055", "text": "Create message content and properties to create queue with QMFv2\n\n :param name: Name of queue to create\n :type name: str\n :param strict: Whether command should fail when unrecognized properties are provided\n Not used by QMFv2\n Default: True\n :type strict: bool\n :param auto_delete: Whether queue should be auto deleted\n Default: False\n :type auto_delete: bool\n :param auto_delete_timeout: Timeout in seconds for auto deleting queue\n Default: 10\n :type auto_delete_timeout: int\n\n :returns: Tuple containing content and method properties"}
{"_id": "q_18056", "text": "Create message content and properties to list all exchanges with QMFv2\n\n :returns: Tuple containing content and query properties"}
{"_id": "q_18057", "text": "Create message content and properties to purge queue with QMFv2\n\n :param name: Name of queue to purge\n :type name: str\n\n :returns: Tuple containing content and method properties"}
{"_id": "q_18058", "text": "attachments should be a list of paths"}
{"_id": "q_18059", "text": "Returns the text from an image at a given url."}
{"_id": "q_18060", "text": "Get sub-command list\n\n .. note::\n\n Don't use the logger to handle this function's errors,\n\n because the error should be a code error, not a runtime error.\n\n\n :return: `list` of matched sub-parsers"}
{"_id": "q_18061", "text": "Serialize an xtUML metamodel class."}
{"_id": "q_18062", "text": "Convert CamelCase style to underscore\n\n :param name: CamelCase string\n :return: str"}
{"_id": "q_18063", "text": "Function for command line execution"}
{"_id": "q_18064", "text": "Runs the program. Takes a list of postcodes or coordinates and\n returns various information about them. If using the cli, make\n sure to update the bikes database with the -u command.\n\n Locations can be either a specific postcode, or a pair of coordinates.\n Coordinates are passed in the form \"55.948824,-3.196425\".\n\n :param locations: The list of postcodes or coordinates to search.\n :param random: The number of random postcodes to include.\n :param bikes: Includes a list of stolen bikes in that area.\n :param crime: Includes a list of committed crimes in that area.\n :param nearby: Includes a list of wikipedia articles in that area.\n :param json: Returns the data in json format.\n :param update_bikes: Whether to force update bikes.\n :param api_server: If given, the program will instead run a rest api.\n :param cross_origin:\n :param host:\n :param port: Defines the port to run the rest api on.\n :param db_path: The path to the sqlite db to use.\n :param verbose: The verbosity."}
{"_id": "q_18065", "text": "Adds to the context BiDi related variables\n\n LANGUAGE_DIRECTION -- Direction of current language ('ltr' or 'rtl')\n LANGUAGE_START -- Start of language layout ('right' for rtl, 'left'\n for 'ltr')\n LANGUAGE_END -- End of language layout ('left' for rtl, 'right'\n for 'ltr')\n LANGUAGE_MARKER -- Language marker entity ('&rlm;' for rtl, '&lrm;'\n for ltr)"}
{"_id": "q_18066", "text": "Gets all the fuel prices within the specified radius."}
{"_id": "q_18067", "text": "Gets the fuel price trends for the given location and fuel types."}
{"_id": "q_18068", "text": "Fetches API reference data.\n\n :param modified_since: The response will be empty if no\n changes have been made to the reference data since this\n timestamp, otherwise all reference data will be returned."}
{"_id": "q_18069", "text": "Find links that correspond to the given arguments."}
{"_id": "q_18070", "text": "Formalize the association and expose referential attributes\n on instances."}
{"_id": "q_18071", "text": "Obtain the type of an attribute."}
{"_id": "q_18072", "text": "Obtain a sequence of all instances in the metamodel."}
{"_id": "q_18073", "text": "Define a new class in the metamodel, and return its metaclass."}
{"_id": "q_18074", "text": "Receives header, payload, and topics through a ZeroMQ socket.\n\n :param socket: a zmq socket.\n :param flags: zmq flags to receive messages.\n :param capture: a function to capture received messages."}
{"_id": "q_18075", "text": "This also finds code you are working on today!"}
{"_id": "q_18076", "text": "Take a string or list of strings and try to extract all the emails"}
{"_id": "q_18077", "text": "Collects methods which are speced as RPC."}
{"_id": "q_18078", "text": "If there is a postcode in the url it validates and normalizes it."}
{"_id": "q_18079", "text": "Called before template is applied."}
{"_id": "q_18080", "text": "A Component contains packageable elements"}
{"_id": "q_18081", "text": "A Package contains packageable elements"}
{"_id": "q_18082", "text": "Match a route parameter.\n\n `Any` is a synonym for `Text`.\n\n :type name: `bytes`\n :param name: Route parameter name.\n\n :type encoding: `bytes`\n :param encoding: Default encoding to assume if the ``Content-Type``\n header is lacking one.\n\n :return: ``callable`` suitable for use with `route` or `subroute`."}
{"_id": "q_18083", "text": "Match an integer route parameter.\n\n :type name: `bytes`\n :param name: Route parameter name.\n\n :type base: `int`\n :param base: Base to interpret the value in.\n\n :type encoding: `bytes`\n :param encoding: Default encoding to assume if the ``Content-Type``\n header is lacking one.\n\n :return: ``callable`` suitable for use with `route` or `subroute`."}
{"_id": "q_18084", "text": "Match a request path against our path components.\n\n The path components are always matched relative to their parent is in the\n resource hierarchy, in other words it is only possible to match URIs nested\n more deeply than the parent resource.\n\n :type components: ``iterable`` of `bytes` or `callable`\n :param components: Iterable of path components, to match against the\n request, either static strings or dynamic parameters. As a convenience,\n a single `bytes` component containing ``/`` may be given instead of\n manually separating the components. If no components are given the null\n route is matched, this is the case where ``segments`` is empty.\n\n :type segments: ``sequence`` of `bytes`\n :param segments: Sequence of path segments, from the request, to match\n against.\n\n :type partialMatching: `bool`\n :param partialMatching: Allow partial matching against the request path?\n\n :rtype: 2-`tuple` of `dict` keyed on `bytes` and `list` of `bytes`\n :return: Pair of parameter results, mapping parameter names to processed\n values, and a list of the remaining request path segments. If there is\n no route match the result will be ``None`` and the original request path\n segments."}
{"_id": "q_18085", "text": "Decorate a router-producing callable to instead produce a resource.\n\n This simply produces a new callable that invokes the original callable, and\n calls ``resource`` on the ``routerAttribute``.\n\n If the router producer has multiple routers the attribute can be altered to\n choose the appropriate one, for example:\n\n .. code-block:: python\n\n class _ComplexRouter(object):\n router = Router()\n privateRouter = Router()\n\n @router.route('/')\n def publicRoot(self, request, params):\n return SomethingPublic(...)\n\n @privateRouter.route('/')\n def privateRoot(self, request, params):\n return SomethingPrivate(...)\n\n PublicResource = routedResource(_ComplexRouter)\n PrivateResource = routedResource(_ComplexRouter, 'privateRouter')\n\n :type f: ``callable``\n :param f: Callable producing an object with a `Router` attribute, for\n example, a type.\n\n :type routerAttribute: `str`\n :param routerAttribute: Name of the `Router` attribute on the result of\n calling ``f``.\n\n :rtype: `callable`\n :return: Callable producing an `IResource`."}
{"_id": "q_18086", "text": "Create a new `Router` instance, with it's own set of routes, for\n ``obj``."}
{"_id": "q_18087", "text": "Add a route handler and matcher to the collection of possible routes."}
{"_id": "q_18088", "text": "See `txspinneret.route.route`.\n\n This decorator can be stacked with itself to specify multiple routes\n with a single handler."}
{"_id": "q_18089", "text": "See `txspinneret.route.subroute`.\n\n This decorator can be stacked with itself to specify multiple routes\n with a single handler."}
{"_id": "q_18090", "text": "Return the average brightness of the image."}
{"_id": "q_18091", "text": "Open a NamedTemoraryFile handle in a context manager"}
{"_id": "q_18092", "text": "Read entry from JSON file"}
{"_id": "q_18093", "text": "Save entry to JSON file"}
{"_id": "q_18094", "text": "Update entry by UUID in the JSON file"}
{"_id": "q_18095", "text": "Given a valid position in the text document, try to find the\n position of the matching bracket. Returns -1 if unsuccessful."}
{"_id": "q_18096", "text": "Updates the document formatting based on the new cursor position."}
{"_id": "q_18097", "text": "Bottleneck to fix up IronPython string exceptions"}
{"_id": "q_18098", "text": "Decorator for registering a simple path.\n\n Args:\n path (str): Path to be matched.\n method (str, optional): Usually used to define one of GET, POST,\n PUT, DELETE. You may use whatever fits your situation though.\n Defaults to None.\n type_cast (dict, optional): Mapping between the param name and\n one of `int`, `float` or `bool`. The value reflected by the\n provided param name will than be casted to the given type.\n Defaults to None."}
{"_id": "q_18099", "text": "Function for registering a simple path.\n\n Args:\n path (str): Path to be matched.\n function (function): Function to associate with this path.\n method (str, optional): Usually used to define one of GET, POST,\n PUT, DELETE. You may use whatever fits your situation though.\n Defaults to None.\n type_cast (dict, optional): Mapping between the param name and\n one of `int`, `float` or `bool`. The value reflected by the\n provided param name will than be casted to the given type.\n Defaults to None."}
{"_id": "q_18100", "text": "Calls the first function matching the urls pattern and method.\n\n Args:\n url (str): Url for which to call a matching function.\n method (str, optional): The method used while registering a\n function.\n Defaults to None\n args (dict, optional): Additional args to be passed to the\n matching function.\n\n Returns:\n The functions return value or `None` if no function was called."}
{"_id": "q_18101", "text": "Called when the down key is pressed. Returns whether to continue\n processing the event."}
{"_id": "q_18102", "text": "If possible, set the input buffer to a subsequent history item.\n\n Parameters:\n -----------\n substring : str, optional\n If specified, search for an item with this substring.\n as_prefix : bool, optional\n If True, the substring must match at the beginning (default).\n\n Returns:\n --------\n Whether the input buffer was changed."}
{"_id": "q_18103", "text": "Handles replies for code execution, here only session history length"}
{"_id": "q_18104", "text": "Returns whether history movement is locked."}
{"_id": "q_18105", "text": "Replace the current history with a sequence of history items."}
{"_id": "q_18106", "text": "Event handler for the button click."}
{"_id": "q_18107", "text": "Generates a list of Record objects given a DataFrame.\n Each Record instance has a series attribute which is a pandas.Series of the same attributes \n in the DataFrame.\n Optional data can be passed in through kwargs which will be included by the name of each object.\n\n parameters\n ----------\n df : pandas.DataFrame\n kwargs : alternate arguments to be saved by name to the series of each object\n\n Returns\n -------\n collection : list\n list of Record objects where each Record represents one row from a dataframe\n\n Examples\n --------\n This is how we generate a Record Collection from a DataFrame.\n\n >>> import pandas as pd\n >>> import turntable\n >>>\n >>> df = pd.DataFrame({'Artist':\"\"\"Michael Jackson, Pink Floyd, Whitney Houston, Meat Loaf, \n Eagles, Fleetwood Mac, Bee Gees, AC/DC\"\"\".split(', '),\n >>> 'Album' :\"\"\"Thriller, The Dark Side of the Moon, The Bodyguard, Bat Out of Hell, \n Their Greatest Hits (1971-1975), Rumours, Saturday Night Fever, Back in Black\"\"\".split(', ')})\n >>> collection = turntable.press.build_collection(df, my_favorite_record = 'nevermind')\n >>> record = collection[0]\n >>> print record.series"}
{"_id": "q_18108", "text": "Converts a collection back into a pandas DataFrame\n\n parameters\n ----------\n collection : list\n list of Record objects where each Record represents one row from a dataframe\n\n Returns\n -------\n df : pandas.DataFrame\n DataFrame of length=len(collection) where each row represents one Record"}
{"_id": "q_18109", "text": "Initalizes the given argument structure as properties of the class\n to be used by name in specific method execution.\n\n Parameters\n ----------\n kwargs : dictionary\n Dictionary of extra attributes,\n where keys are attributes names and values attributes values."}
{"_id": "q_18110", "text": "Update our SUB socket's subscriptions."}
{"_id": "q_18111", "text": "receive and parse a message, then log it."}
{"_id": "q_18112", "text": "Perform an N-way merge operation on sorted lists.\n\n @param list_of_lists: (really iterable of iterable) of sorted elements\n (either by naturally or by C{key})\n @param key: specify sort key function (like C{sort()}, C{sorted()})\n\n Yields tuples of the form C{(item, iterator)}, where the iterator is the\n built-in list iterator or something you pass in, if you pre-generate the\n iterators.\n\n This is a stable merge; complexity O(N lg N)\n\n Examples::\n\n >>> print list(mergesort([[1,2,3,4],\n ... [2,3.25,3.75,4.5,6,7],\n ... [2.625,3.625,6.625,9]]))\n [1, 2, 2, 2.625, 3, 3.25, 3.625, 3.75, 4, 4.5, 6, 6.625, 7, 9]\n\n # note stability\n >>> print list(mergesort([[1,2,3,4],\n ... [2,3.25,3.75,4.5,6,7],\n ... [2.625,3.625,6.625,9]],\n ... key=int))\n [1, 2, 2, 2.625, 3, 3.25, 3.75, 3.625, 4, 4.5, 6, 6.625, 7, 9]\n\n\n >>> print list(mergesort([[4, 3, 2, 1],\n ... [7, 6, 4.5, 3.75, 3.25, 2],\n ... [9, 6.625, 3.625, 2.625]],\n ... key=lambda x: -x))\n [9, 7, 6.625, 6, 4.5, 4, 3.75, 3.625, 3.25, 3, 2.625, 2, 2, 1]"}
{"_id": "q_18113", "text": "Return an iterator on an object living on a remote engine."}
{"_id": "q_18114", "text": "Return this platform's maximum compatible version.\n\n distutils.util.get_platform() normally reports the minimum version\n of Mac OS X that would be required to *use* extensions produced by\n distutils. But what we want when checking compatibility is to know the\n version of Mac OS X that we are *running*. To allow usage of packages that\n explicitly require a newer version of Mac OS X, we must also know the\n current version of the OS.\n\n If this condition occurs for any other platform with a version in its\n platform strings, this function should be extended accordingly."}
{"_id": "q_18115", "text": "Retrieve a PEP 302 \"importer\" for the given path item\n\n If there is no importer, this returns a wrapper around the builtin import\n machinery. The returned importer is only cached if it was created by a\n path hook."}
{"_id": "q_18116", "text": "Thunk to load the real StringIO on demand"}
{"_id": "q_18117", "text": "Convert a version string to a chronologically-sortable key\n\n This is a rough cross between distutils' StrictVersion and LooseVersion;\n if you give it versions that would work with StrictVersion, then it behaves\n the same; otherwise it acts like a slightly-smarter LooseVersion. It is\n *possible* to create pathological version coding schemes that will fool\n this parser, but they should be very rare in practice.\n\n The returned value will be a tuple of strings. Numeric portions of the\n version are padded to 8 digits so they will compare numerically, but\n without relying on how numbers compare relative to strings. Dots are\n dropped, but dashes are retained. Trailing zeros between alpha segments\n or dashes are suppressed, so that e.g. \"2.4.0\" is considered the same as\n \"2.4\". Alphanumeric parts are lower-cased.\n\n The algorithm assumes that strings like \"-\" and any alpha string that\n alphabetically follows \"final\" represents a \"patch level\". So, \"2.4-1\"\n is assumed to be a branch or patch of \"2.4\", and therefore \"2.4.1\" is\n considered newer than \"2.4-1\", which in turn is newer than \"2.4\".\n\n Strings like \"a\", \"b\", \"c\", \"alpha\", \"beta\", \"candidate\" and so on (that\n come before \"final\" alphabetically) are assumed to be pre-release versions,\n so that the version \"2.4\" is considered newer than \"2.4a1\".\n\n Finally, to handle miscellaneous cases, the strings \"pre\", \"preview\", and\n \"rc\" are treated as if they were \"c\", i.e. as though they were release\n candidates, and therefore are not as new as a version string that does not\n contain them, and \"dev\" is replaced with an '@' so that it sorts lower than\n than any other pre-release tag."}
{"_id": "q_18118", "text": "Return True when distribute wants to override a setuptools dependency.\n\n We want to override when the requirement is setuptools and the version is\n a variant of 0.6."}
{"_id": "q_18119", "text": "Add `dist` to working set, associated with `entry`\n\n If `entry` is unspecified, it defaults to the ``.location`` of `dist`.\n On exit from this routine, `entry` is added to the end of the working\n set's ``.entries`` (if it wasn't already present).\n\n `dist` is only added to the working set if it's for a project that\n doesn't already have a distribution in the set, unless `replace=True`.\n If it's added, any callbacks registered with the ``subscribe()`` method\n will be called."}
{"_id": "q_18120", "text": "Find all activatable distributions in `plugin_env`\n\n Example usage::\n\n distributions, errors = working_set.find_plugins(\n Environment(plugin_dirlist)\n )\n map(working_set.add, distributions) # add plugins+libs to sys.path\n print 'Could not load', errors # display errors\n\n The `plugin_env` should be an ``Environment`` instance that contains\n only distributions that are in the project's \"plugin directory\" or\n directories. The `full_env`, if supplied, should be an ``Environment``\n contains all currently-available distributions. If `full_env` is not\n supplied, one is created automatically from the ``WorkingSet`` this\n method is called on, which will typically mean that every directory on\n ``sys.path`` will be scanned for distributions.\n\n `installer` is a standard installer callback as used by the\n ``resolve()`` method. The `fallback` flag indicates whether we should\n attempt to resolve older versions of a plugin if the newest version\n cannot be resolved.\n\n This method returns a 2-tuple: (`distributions`, `error_info`), where\n `distributions` is a list of the distributions found in `plugin_env`\n that were loadable, along with any other distributions that are needed\n to resolve their dependencies. `error_info` is a dictionary mapping\n unloadable plugin distributions to an exception instance describing the\n error that occurred. Usually this will be a ``DistributionNotFound`` or\n ``VersionConflict`` instance."}
{"_id": "q_18121", "text": "Return absolute location in cache for `archive_name` and `names`\n\n The parent directory of the resulting path will be created if it does\n not already exist. `archive_name` should be the base filename of the\n enclosing egg (which may not be the name of the enclosing zipfile!),\n including its \".egg\" extension. `names`, if provided, should be a\n sequence of path name parts \"under\" the egg's extraction location.\n\n This method should only be called by resource providers that need to\n obtain an extraction location, and only for names they intend to\n extract, as it tracks the generated names for possible cleanup later."}
{"_id": "q_18122", "text": "Parse a single entry point from string `src`\n\n Entry point syntax follows the form::\n\n name = some.module:some.attr [extra1,extra2]\n\n The entry name and module name are required, but the ``:attrs`` and\n ``[extras]`` parts are optional"}
{"_id": "q_18123", "text": "Parse and cache metadata"}
{"_id": "q_18124", "text": "Reimplemented to disconnect signal handlers and event filter."}
{"_id": "q_18125", "text": "Returns a cursor with text between the start position and the\n current position selected."}
{"_id": "q_18126", "text": "Updates the current item based on the current text."}
{"_id": "q_18127", "text": "Return system per-CPU times as a list of named tuples."}
{"_id": "q_18128", "text": "Use the raw Win32 handle of sys.stdin to do non-blocking reads"}
{"_id": "q_18129", "text": "update visibility of the tabBar depending of the number of tab\n\n 0 or 1 tab, tabBar hidden\n 2+ tabs, tabBar visible\n\n send a self.close if number of tab ==0\n\n need to be called explicitly, or be connected to tabInserted/tabRemoved"}
{"_id": "q_18130", "text": "insert a tab with a given frontend in the tab bar, and give it a name"}
{"_id": "q_18131", "text": "Add action to menu as well as self\n \n So that when the menu bar is invisible, its actions are still available.\n \n If defer_shortcut is True, set the shortcut context to widget-only,\n where it will avoid conflict with shortcuts already bound to the\n widgets themselves."}
{"_id": "q_18132", "text": "Return a function `fun` that will execute `magic` on active frontend.\n\n Parameters\n ----------\n magic : string\n string that will be executed as is when the returned function is called\n\n Returns\n -------\n fun : function\n function with no parameters, when called will execute `magic` on the\n current active frontend at call time\n\n See Also\n --------\n populate_all_magic_menu : generate the \"All Magics...\" menu\n\n Notes\n -----\n `fun` executes `magic` in active frontend at the moment it is triggered,\n not the active frontend at the moment it was created.\n\n This function is mostly used to create the \"All Magics...\" Menu at run time."}
{"_id": "q_18133", "text": "Clean \"All Magics...\" menu and repopulate it with `listofmagic`\n\n Parameters\n ----------\n listofmagic : string,\n repr() of a list of strings, send back by the kernel\n\n Notes\n -----\n `listofmagic`is a repr() of list because it is fed with the result of\n a 'user_expression'"}
{"_id": "q_18134", "text": "Generate hashed password and salt for use in notebook configuration.\n\n In the notebook configuration, set `c.NotebookApp.password` to\n the generated string.\n\n Parameters\n ----------\n passphrase : str\n Password to hash. If unspecified, the user is asked to input\n and verify a password.\n algorithm : str\n Hashing algorithm to use (e.g, 'sha1' or any argument supported\n by :func:`hashlib.new`).\n\n Returns\n -------\n hashed_passphrase : str\n Hashed password, in the format 'hash_algorithm:salt:passphrase_hash'.\n\n Examples\n --------\n In [1]: passwd('mypassword')\n Out[1]: 'sha1:7cf3:b7d6da294ea9592a9480c8f52e63cd42cfb9dd12'"}
{"_id": "q_18135", "text": "Verify that a given passphrase matches its hashed version.\n\n Parameters\n ----------\n hashed_passphrase : str\n Hashed password, in the format returned by `passwd`.\n passphrase : str\n Passphrase to validate.\n\n Returns\n -------\n valid : bool\n True if the passphrase matches the hash.\n\n Examples\n --------\n In [1]: from IPython.lib.security import passwd_check\n\n In [2]: passwd_check('sha1:0e112c3ddfce:a68df677475c2b47b6e86d0467eec97ac5f4b85a',\n ...: 'mypassword')\n Out[2]: True\n\n In [3]: passwd_check('sha1:0e112c3ddfce:a68df677475c2b47b6e86d0467eec97ac5f4b85a',\n ...: 'anotherpassword')\n Out[3]: False"}
{"_id": "q_18136", "text": "Generate a html snippet for showing a boolean value on the admin page.\n Item is an object, attr is the attribute name we should display. Text\n is an optional explanatory text to be included in the output.\n\n This function will emit code to produce a checkbox input with its state\n corresponding to the item.attr attribute if no override value is passed.\n This input is wired to run a JS ajax updater to toggle the value.\n\n If override is passed in, ignores the attr attribute and returns a\n static image for the override boolean with no user interaction possible\n (useful for \"disabled and you can't change it\" situations)."}
{"_id": "q_18137", "text": "Generate a short title for an object, indent it depending on\n the object's depth in the hierarchy."}
{"_id": "q_18138", "text": "Collect all fields marked as editable booleans. We do not\n want the user to be able to edit arbitrary fields by crafting\n an AJAX request by hand."}
{"_id": "q_18139", "text": "Handle an AJAX toggle_boolean request"}
{"_id": "q_18140", "text": "Add children recursively to a binary tree."}
{"_id": "q_18141", "text": "Submit jobs via client where G describes the time dependencies."}
{"_id": "q_18142", "text": "Set the currently active scheme.\n\n Names are by default compared in a case-insensitive way, but this can\n be changed by setting the parameter case_sensitive to true."}
{"_id": "q_18143", "text": "Return the lib dir under the 'home' installation scheme"}
{"_id": "q_18144", "text": "method to wait for a kernel to be ready"}
{"_id": "q_18145", "text": "Returns a QTextCharFormat for token or None."}
{"_id": "q_18146", "text": "Returns a QTextCharFormat for token by"}
{"_id": "q_18147", "text": "Convert a path to its canonical, case-normalized, absolute version."}
{"_id": "q_18148", "text": "Verify that namespace packages are valid"}
{"_id": "q_18149", "text": "Verify that entry_points map is parseable"}
{"_id": "q_18150", "text": "Determine if the input source ends in a blank.\n\n A blank is either a newline or a line consisting of whitespace.\n\n Parameters\n ----------\n src : string\n A single or multiline string."}
{"_id": "q_18151", "text": "Handle the `files = !ls` syntax."}
{"_id": "q_18152", "text": "Handle the `a = %who` syntax."}
{"_id": "q_18153", "text": "Handle inputs that start with '>>> ' syntax."}
{"_id": "q_18154", "text": "Push one or more lines of input.\n\n This stores the given lines and returns a status code indicating\n whether the code forms a complete Python block or not.\n\n Any exceptions generated in compilation are swallowed, but if an\n exception was produced, the method returns True.\n\n Parameters\n ----------\n lines : string\n One or more lines of Python input.\n \n Returns\n -------\n is_complete : boolean\n True if the current input source (the result of the current input\n plus prior inputs) forms a complete Python execution block. Note that\n this value is also stored as a private attribute (``_is_complete``), so it\n can be queried at any time."}
{"_id": "q_18155", "text": "Return whether a block of interactive input can accept more input.\n\n This method is meant to be used by line-oriented frontends, who need to\n guess whether a block is complete or not based solely on prior and\n current input lines. The InputSplitter considers it has a complete\n interactive block and will not accept more input only when either a\n SyntaxError is raised, or *all* of the following are true:\n\n 1. The input compiles to a complete statement.\n \n 2. The indentation level is flush-left (because if we are indented,\n like inside a function definition or for loop, we need to keep\n reading new input).\n \n 3. There is one extra line consisting only of whitespace.\n\n Because of condition #3, this method should be used only by\n *line-oriented* frontends, since it means that intermediate blank lines\n are not allowed in function definitions (or any other indented block).\n\n If the current input produces a syntax error, this method immediately\n returns False but does *not* raise the syntax error exception, as\n typically clients will want to send invalid syntax to an execution\n backend which might convert the invalid syntax into valid Python via\n one of the dynamic IPython mechanisms."}
{"_id": "q_18156", "text": "Compute the new indentation level for a single line.\n\n Parameters\n ----------\n line : str\n A single new line of non-whitespace, non-comment Python input.\n \n Returns\n -------\n indent_spaces : int\n New value for the indent level (it may be equal to self.indent_spaces\n if indentation doesn't change.\n\n full_dedent : boolean\n Whether the new line causes a full flush-left dedent."}
{"_id": "q_18157", "text": "Store one or more lines of input.\n\n If input lines are not newline-terminated, a newline is automatically\n appended."}
{"_id": "q_18158", "text": "Return input and raw source and perform a full reset."}
{"_id": "q_18159", "text": "Process and translate a cell of input."}
{"_id": "q_18160", "text": "Initialize observer storage"}
{"_id": "q_18161", "text": "Post notification to all registered observers.\n\n The registered callback will be called as::\n\n callback(ntype, sender, *args, **kwargs)\n\n Parameters\n ----------\n ntype : hashable\n The notification type.\n sender : hashable\n The object sending the notification.\n *args : tuple\n The positional arguments to be passed to the callback.\n **kwargs : dict\n The keyword argument to be passed to the callback.\n\n Notes\n -----\n * If no registered observers, performance is O(1).\n * Notificaiton order is undefined.\n * Notifications are posted synchronously."}
{"_id": "q_18162", "text": "Find all registered observers that should recieve notification"}
{"_id": "q_18163", "text": "Add a new background job and start it in a separate thread.\n\n There are two types of jobs which can be created:\n\n 1. Jobs based on expressions which can be passed to an eval() call.\n The expression must be given as a string. For example:\n\n job_manager.new('myfunc(x,y,z=1)'[,glob[,loc]])\n\n The given expression is passed to eval(), along with the optional\n global/local dicts provided. If no dicts are given, they are\n extracted automatically from the caller's frame.\n \n A Python statement is NOT a valid eval() expression. Basically, you\n can only use as an eval() argument something which can go on the right\n of an '=' sign and be assigned to a variable.\n\n For example,\"print 'hello'\" is not valid, but '2+3' is.\n\n 2. Jobs given a function object, optionally passing additional\n positional arguments:\n\n job_manager.new(myfunc, x, y)\n\n The function is called with the given arguments.\n\n If you need to pass keyword arguments to your function, you must\n supply them as a dict named kw:\n\n job_manager.new(myfunc, x, y, kw=dict(z=1))\n\n The reason for this assymmetry is that the new() method needs to\n maintain access to its own keywords, and this prevents name collisions\n between arguments to new() and arguments to your own functions.\n\n In both cases, the result is stored in the job.result field of the\n background job object.\n\n You can set `daemon` attribute of the thread by giving the keyword\n argument `daemon`.\n\n Notes and caveats:\n\n 1. All threads running share the same standard output. Thus, if your\n background jobs generate output, it will come out on top of whatever\n you are currently writing. For this reason, background jobs are best\n used with silent functions which simply return their output.\n\n 2. Threads also all work within the same global namespace, and this\n system does not lock interactive variables. So if you send job to the\n background which operates on a mutable object for a long time, and\n start modifying that same mutable object interactively (or in another\n backgrounded job), all sorts of bizarre behaviour will occur.\n\n 3. If a background job is spending a lot of time inside a C extension\n module which does not release the Python Global Interpreter Lock\n (GIL), this will block the IPython prompt. This is simply because the\n Python interpreter can only switch between threads at Python\n bytecodes. While the execution is inside C code, the interpreter must\n simply wait unless the extension module releases the GIL.\n\n 4. There is no way, due to limitations in the Python threads library,\n to kill a thread once it has started."}
{"_id": "q_18164", "text": "Flush a given job group\n\n Return True if the group had any elements."}
{"_id": "q_18165", "text": "Execute a shell command."}
{"_id": "q_18166", "text": "Simple program that creates an temp S3 link."}
{"_id": "q_18167", "text": "Inserts a value in the ``ListVariable`` at an appropriate index.\n\n :param idx: The index before which to insert the new value.\n :param value: The value to insert."}
{"_id": "q_18168", "text": "Retrieve a copy of the Environment. Note that this is a shallow\n copy."}
{"_id": "q_18169", "text": "Declare an environment variable as a list-like special variable.\n This can be used even if the environment variable is not\n present.\n\n :param name: The name of the environment variable that should\n be considered list-like.\n :param sep: The separator to be used. Defaults to the value\n of ``os.pathsep``."}
{"_id": "q_18170", "text": "Change the working directory that processes should be executed in.\n\n :param value: The new path to change to. If relative, will be\n interpreted relative to the current working\n directory."}
{"_id": "q_18171", "text": "Swaps two cities in the route.\n\n :type state: TSPState"}
{"_id": "q_18172", "text": "Poll ``self.stdout`` and return True if it is readable.\n\n :param float timeout: seconds to wait I/O\n :return: True if readable, else False\n :rtype: boolean"}
{"_id": "q_18173", "text": "create an empty record"}
{"_id": "q_18174", "text": "Ensure that an incorrect table doesn't exist\n\n If a bad (old) table does exist, return False"}
{"_id": "q_18175", "text": "Read a config_file, check the validity with a JSON Schema as specs\n and get default values from default_file if asked.\n\n All parameters are optionnal.\n\n If there is no config_file defined, read the venv base\n dir and try to get config/app.yml.\n\n If no specs, don't validate anything.\n\n If no default_file, don't merge with default values."}
{"_id": "q_18176", "text": "Output a link tag."}
{"_id": "q_18177", "text": "Output a script tag to a js file."}
{"_id": "q_18178", "text": "Image tag helper."}
{"_id": "q_18179", "text": "Multiply the arg with the value."}
{"_id": "q_18180", "text": "Divide the arg by the value."}
{"_id": "q_18181", "text": "Return the verbose name of a model.\n The obj argument can be either a Model instance, or a ModelForm instance.\n This allows to retrieve the verbose name of the model of a ModelForm\n easily, without adding extra context vars."}
{"_id": "q_18182", "text": "Split user input into initial whitespace, escape character, function part\n and the rest."}
{"_id": "q_18183", "text": "Remove an added builtin and re-set the original."}
{"_id": "q_18184", "text": "Retrieve a list of characters and escape codes where each escape\n code uses only one index. The indexes will not match up with the\n indexes in the original string."}
{"_id": "q_18185", "text": "Yields all links with the given relations"}
{"_id": "q_18186", "text": "Turn a command-line argument into a list."}
{"_id": "q_18187", "text": "The main entry point to Coverage.\n\n This is installed as the script entry point."}
{"_id": "q_18188", "text": "The bulk of the command line interface to Coverage.\n\n `argv` is the argument list to process.\n\n Returns 0 if all is well, 1 if something went wrong."}
{"_id": "q_18189", "text": "Display an error message, or the named topic."}
{"_id": "q_18190", "text": "Deal with help requests.\n\n Return True if it handled the request, False if not."}
{"_id": "q_18191", "text": "Implementation of 'coverage debug'."}
{"_id": "q_18192", "text": "Add directory or directories list to bundle\n\n :param exclusions: List of excluded paths\n\n :type path: str|unicode\n :type exclusions: list"}
{"_id": "q_18193", "text": "Add custom path objects\n\n :type: path_object: static_bundle.paths.AbstractPath"}
{"_id": "q_18194", "text": "Add prepare handler to bundle\n\n :type: prepare_handler: static_bundle.handlers.AbstractPrepareHandler"}
{"_id": "q_18195", "text": "Called when builder run collect files in builder group\n\n :rtype: list[static_bundle.files.StaticFileResult]"}
{"_id": "q_18196", "text": "Set the hook."}
{"_id": "q_18197", "text": "decorator to log unhandled exceptions raised in a method.\n \n For use wrapping on_recv callbacks, so that exceptions\n do not cause the stream to be closed."}
{"_id": "q_18198", "text": "boolean check for whether a string is a zmq url"}
{"_id": "q_18199", "text": "validate a url for zeromq"}
{"_id": "q_18200", "text": "validate a potentially nested collection of urls."}
{"_id": "q_18201", "text": "Selects and return n random ports that are available."}
{"_id": "q_18202", "text": "Get the number of files in the folder."}
{"_id": "q_18203", "text": "Register the contents as JSON"}
{"_id": "q_18204", "text": "Translate the data with the translation table"}
{"_id": "q_18205", "text": "Turn a function into a parallel remote function.\n\n This method can be used for map:\n\n In [1]: @parallel(view, block=True)\n ...: def func(a):\n ...: pass"}
{"_id": "q_18206", "text": "call a function on each element of a sequence remotely.\n This should behave very much like the builtin map, but return an AsyncMapResult\n if self.block is False."}
{"_id": "q_18207", "text": "Issues a POST request against the API, allows for multipart data uploads\n\n :param url: a string, the url you are requesting\n :param params: a dict, the key-value of all the parameters needed\n in the request\n :param files: a list, the list of tuples of files\n\n :returns: a dict parsed of the JSON response"}
{"_id": "q_18208", "text": "Get the last n items in readline history."}
{"_id": "q_18209", "text": "Initialize logging in case it was requested at the command line."}
{"_id": "q_18210", "text": "Restore the state of the sys module."}
{"_id": "q_18211", "text": "Register a function for calling after code execution."}
{"_id": "q_18212", "text": "Return a new 'main' module object for user code execution."}
{"_id": "q_18213", "text": "Cache a main module's namespace.\n\n When scripts are executed via %run, we must keep a reference to the\n namespace of their __main__ module (a FakeModule instance) around so\n that Python doesn't clear it, rendering objects defined therein\n useless.\n\n This method keeps said reference in a private dict, keyed by the\n absolute path of the module object (which corresponds to the script\n path). This way, for multiple executions of the same script we only\n keep one copy of the namespace (the last one), thus preventing memory\n leaks from old references while allowing the objects from the last\n execution to be accessible.\n\n Note: we can not allow the actual FakeModule instances to be deleted,\n because of how Python tears down modules (it hard-sets all their\n references to None without regard for reference counts). This method\n must therefore make a *copy* of the given namespace, to allow the\n original module's __dict__ to be cleared and reused.\n\n\n Parameters\n ----------\n ns : a namespace (a dict, typically)\n\n fname : str\n Filename associated with the namespace.\n\n Examples\n --------\n\n In [10]: import IPython\n\n In [11]: _ip.cache_main_mod(IPython.__dict__,IPython.__file__)\n\n In [12]: IPython.__file__ in _ip._main_ns_cache\n Out[12]: True"}
{"_id": "q_18214", "text": "Initialize all user-visible namespaces to their minimum defaults.\n\n Certain history lists are also initialized here, as they effectively\n act as user namespaces.\n\n Notes\n -----\n All data structures here are only filled in, they are NOT reset by this\n method. If they were not empty before, data will simply be added to\n therm."}
{"_id": "q_18215", "text": "Get a list of references to all the namespace dictionaries in which\n IPython might store a user-created object.\n \n Note that this does not include the displayhook, which also caches\n objects from the output."}
{"_id": "q_18216", "text": "Clear all internal namespaces, and attempt to release references to\n user objects.\n\n If new_session is True, a new history session will be opened."}
{"_id": "q_18217", "text": "Clear selective variables from internal namespaces based on a\n specified regular expression.\n\n Parameters\n ----------\n regex : string or compiled pattern, optional\n A regular expression pattern that will be used in searching\n variable names in the users namespaces."}
{"_id": "q_18218", "text": "Inject a group of variables into the IPython user namespace.\n\n Parameters\n ----------\n variables : dict, str or list/tuple of str\n The variables to inject into the user's namespace. If a dict, a\n simple update is done. If a str, the string is assumed to have\n variable names separated by spaces. A list/tuple of str can also\n be used to give the variable names. If just the variable names are\n give (list/tuple/str) then the variable values looked up in the\n callers frame.\n interactive : bool\n If True (default), the variables will be listed with the ``who``\n magic."}
{"_id": "q_18219", "text": "Find an object in the available namespaces.\n\n self._ofind(oname) -> dict with keys: found,obj,ospace,ismagic\n\n Has special code to detect magic functions."}
{"_id": "q_18220", "text": "Second part of object finding, to look for property details."}
{"_id": "q_18221", "text": "Sets up the command history, and starts regular autosaves."}
{"_id": "q_18222", "text": "One more defense for GUI apps that call sys.excepthook.\n\n GUI frameworks like wxPython trap exceptions and call\n sys.excepthook themselves. I guess this is a feature that\n enables them to keep running after exceptions that would\n otherwise kill their mainloop. This is a bother for IPython\n which excepts to catch all of the program exceptions with a try:\n except: statement.\n\n Normally, IPython sets sys.excepthook to a CrashHandler instance, so if\n any app directly invokes sys.excepthook, it will look to the user like\n IPython crashed. In order to work around this, we can disable the\n CrashHandler and replace it with this excepthook instead, which prints a\n regular traceback using our InteractiveTB. In this fashion, apps which\n call sys.excepthook will generate a regular-looking exception from\n IPython, and the CrashHandler will only be triggered by real IPython\n crashes.\n\n This hook should be used sparingly, only in places which are not likely\n to be true IPython errors."}
{"_id": "q_18223", "text": "Actually show a traceback.\n\n Subclasses may override this method to put the traceback on a different\n place, like a side channel."}
{"_id": "q_18224", "text": "readline hook to be used at the start of each line.\n\n Currently it handles auto-indent only."}
{"_id": "q_18225", "text": "Execute the given line magic.\n\n Parameters\n ----------\n magic_name : str\n Name of the desired magic function, without '%' prefix.\n\n line : str\n The rest of the input line as a single string."}
{"_id": "q_18226", "text": "Define a new macro\n\n Parameters\n ----------\n name : str\n The name of the macro.\n themacro : str or Macro\n The action to do upon invoking the macro. If a string, a new\n Macro object is created by passing the string to it."}
{"_id": "q_18227", "text": "Call the given cmd in a subprocess using os.system\n\n Parameters\n ----------\n cmd : str\n Command to execute."}
{"_id": "q_18228", "text": "Evaluate a dict of expressions in the user's namespace.\n\n Parameters\n ----------\n expressions : dict\n A dict with string keys and string values. The expression values\n should be valid Python expressions, each of which will be evaluated\n in the user namespace.\n\n Returns\n -------\n A dict, keyed like the input expressions dict, with the repr() of each\n value."}
{"_id": "q_18229", "text": "Evaluate python expression expr in user namespace.\n\n Returns the result of evaluation"}
{"_id": "q_18230", "text": "Special method to call a cell magic with the data stored in self."}
{"_id": "q_18231", "text": "Run a complete IPython cell.\n\n Parameters\n ----------\n raw_cell : str\n The code (including IPython code such as %magic functions) to run.\n store_history : bool\n If True, the raw and translated cell will be stored in IPython's\n history. For user code calling back into IPython's machinery, this\n should be set to False.\n silent : bool\n If True, avoid side-effets, such as implicit displayhooks, history,\n and logging. silent=True forces store_history=False."}
{"_id": "q_18232", "text": "Activate pylab support at runtime.\n\n This turns on support for matplotlib, preloads into the interactive\n namespace all of numpy and pylab, and configures IPython to correctly\n interact with the GUI event loop. The GUI backend to be used can be\n optionally selected with the optional :param:`gui` argument.\n\n Parameters\n ----------\n gui : optional, string\n\n If given, dictates the choice of matplotlib GUI backend to use\n (should be one of IPython's supported backends, 'qt', 'osx', 'tk',\n 'gtk', 'wx' or 'inline'), otherwise we use the default chosen by\n matplotlib (as dictated by the matplotlib build-time options plus the\n user's matplotlibrc configuration file). Note that not all backends\n make sense in all contexts, for example a terminal ipython can't\n display figures inline."}
{"_id": "q_18233", "text": "Make a new tempfile and return its filename.\n\n This makes a call to tempfile.mktemp, but it registers the created\n filename internally so ipython cleans it up at exit time.\n\n Optional inputs:\n\n - data(None): if data is given, it gets written out to the temp file\n immediately, and the file is closed again."}
{"_id": "q_18234", "text": "Get a code string from history, file, url, or a string or macro.\n\n This is mainly used by magic functions.\n\n Parameters\n ----------\n\n target : str\n\n A string specifying code to retrieve. This will be tried respectively\n as: ranges of input history (see %history for syntax), url,\n correspnding .py file, filename, or an expression evaluating to a\n string or Macro in the user namespace.\n\n raw : bool\n If true (default), retrieve raw history. Has no effect on the other\n retrieval mechanisms.\n\n py_only : bool (default False)\n Only try to fetch python code, do not try alternative methods to decode file\n if unicode fails.\n\n Returns\n -------\n A string of code.\n\n ValueError is raised if nothing is found, and TypeError if it evaluates\n to an object of another type. In each case, .args[0] is a printable\n message."}
{"_id": "q_18235", "text": "This will be executed at the time of exit.\n\n Cleanup operations and saving of persistent data that is done\n unconditionally by IPython should be performed here.\n\n For things that may depend on startup flags or platform specifics (such\n as having readline or not), register a separate atexit function in the\n code that has the appropriate information, rather than trying to\n clutter"}
{"_id": "q_18236", "text": "Create a temporary directory with input data for the test.\n The directory contents is copied from a directory with the same name as the module located in the same directory of\n the test module."}
{"_id": "q_18237", "text": "Compare two files contents. If the files differ, show the diff and write a nice HTML\n diff file into the data directory.\n\n Searches for the filenames both inside and outside the data directory (in that order).\n\n :param unicode obtained_fn: basename to obtained file into the data directory, or full path.\n\n :param unicode expected_fn: basename to expected file into the data directory, or full path.\n\n :param bool binary:\n Thread both files as binary files.\n\n :param unicode encoding:\n File's encoding. If not None, contents obtained from file will be decoded using this\n `encoding`.\n\n :param callable fix_callback:\n A callback to \"fix\" the contents of the obtained (first) file.\n This callback receives a list of strings (lines) and must also return a list of lines,\n changed as needed.\n The resulting lines will be used to compare with the contents of expected_fn.\n\n :param bool binary:\n .. seealso:: zerotk.easyfs.GetFileContents"}
{"_id": "q_18238", "text": "Returns a nice side-by-side diff of the given files, as a string."}
{"_id": "q_18239", "text": "Add a peer or multiple peers to the PEERS variable, takes a single string or a list.\n\n :param peer(list or string)"}
{"_id": "q_18240", "text": "remove one or multiple peers from PEERS variable\n\n :param peer(list or string):"}
{"_id": "q_18241", "text": "check the status of the network and the peers\n\n :return: network_height, peer_status"}
{"_id": "q_18242", "text": "broadcasts a transaction to the peerslist using ark-js library"}
{"_id": "q_18243", "text": "broadcast a message from one engine to all others."}
{"_id": "q_18244", "text": "send a message from one to one-or-more engines."}
{"_id": "q_18245", "text": "Make function raise SkipTest exception if a given condition is true.\n\n If the condition is a callable, it is used at runtime to dynamically\n make the decision. This is useful for tests that may require costly\n imports, to delay the cost until the test suite is actually executed.\n\n Parameters\n ----------\n skip_condition : bool or callable\n Flag to determine whether to skip the decorated test.\n msg : str, optional\n Message to give on raising a SkipTest exception. Default is None.\n\n Returns\n -------\n decorator : function\n Decorator which, when applied to a function, causes SkipTest\n to be raised when `skip_condition` is True, and the function\n to be called normally otherwise.\n\n Notes\n -----\n The decorator itself is decorated with the ``nose.tools.make_decorator``\n function in order to transmit function name, and various other metadata."}
{"_id": "q_18246", "text": "Make function raise KnownFailureTest exception if given condition is true.\n\n If the condition is a callable, it is used at runtime to dynamically\n make the decision. This is useful for tests that may require costly\n imports, to delay the cost until the test suite is actually executed.\n\n Parameters\n ----------\n fail_condition : bool or callable\n Flag to determine whether to mark the decorated test as a known\n failure (if True) or not (if False).\n msg : str, optional\n Message to give on raising a KnownFailureTest exception.\n Default is None.\n\n Returns\n -------\n decorator : function\n Decorator, which, when applied to a function, causes SkipTest\n to be raised when `skip_condition` is True, and the function\n to be called normally otherwise.\n\n Notes\n -----\n The decorator itself is decorated with the ``nose.tools.make_decorator``\n function in order to transmit function name, and various other metadata."}
{"_id": "q_18247", "text": "Filter deprecation warnings while running the test suite.\n\n This decorator can be used to filter DeprecationWarning's, to avoid\n printing them during the test suite run, while checking that the test\n actually raises a DeprecationWarning.\n\n Parameters\n ----------\n conditional : bool or callable, optional\n Flag to determine whether to mark test as deprecated or not. If the\n condition is a callable, it is used at runtime to dynamically make the\n decision. Default is True.\n\n Returns\n -------\n decorator : function\n The `deprecated` decorator itself.\n\n Notes\n -----\n .. versionadded:: 1.4.0"}
{"_id": "q_18248", "text": "Find a distribution matching requirement `req`\n\n If there is an active distribution for the requested project, this\n returns it as long as it meets the version requirement specified by\n `req`. But, if there is an active distribution for the project and it\n does *not* meet the `req` requirement, ``VersionConflict`` is raised.\n If there is no active distribution for the requested project, ``None``\n is returned."}
{"_id": "q_18249", "text": "Exposes a given service to this API."}
{"_id": "q_18250", "text": "This function runs the given command; waits for it to finish; then\n returns all output as a string. STDERR is included in output. If the full\n path to the command is not given then the path is searched.\n\n Note that lines are terminated by CR/LF (\\\\r\\\\n) combination even on\n UNIX-like systems because this is the standard for pseudo ttys. If you set\n 'withexitstatus' to true, then run will return a tuple of (command_output,\n exitstatus). If 'withexitstatus' is false then this returns just\n command_output.\n\n The run() function can often be used instead of creating a spawn instance.\n For example, the following code uses spawn::\n\n from pexpect import *\n child = spawn('scp foo myname@host.example.com:.')\n child.expect ('(?i)password')\n child.sendline (mypassword)\n\n The previous code can be replace with the following::\n\n from pexpect import *\n run ('scp foo myname@host.example.com:.', events={'(?i)password': mypassword})\n\n Examples\n ========\n\n Start the apache daemon on the local machine::\n\n from pexpect import *\n run (\"/usr/local/apache/bin/apachectl start\")\n\n Check in a file using SVN::\n\n from pexpect import *\n run (\"svn ci -m 'automatic commit' my_file.py\")\n\n Run a command and capture exit status::\n\n from pexpect import *\n (command_output, exitstatus) = run ('ls -l /bin', withexitstatus=1)\n\n Tricky Examples\n ===============\n\n The following will run SSH and execute 'ls -l' on the remote machine. The\n password 'secret' will be sent if the '(?i)password' pattern is ever seen::\n\n run (\"ssh username@machine.example.com 'ls -l'\", events={'(?i)password':'secret\\\\n'})\n\n This will start mencoder to rip a video from DVD. This will also display\n progress ticks every 5 seconds as it runs. For example::\n\n from pexpect import *\n def print_ticks(d):\n print d['event_count'],\n run (\"mencoder dvd://1 -o video.avi -oac copy -ovc copy\", events={TIMEOUT:print_ticks}, timeout=5)\n\n The 'events' argument should be a dictionary of patterns and responses.\n Whenever one of the patterns is seen in the command out run() will send the\n associated response string. Note that you should put newlines in your\n string if Enter is necessary. The responses may also contain callback\n functions. Any callback is function that takes a dictionary as an argument.\n The dictionary contains all the locals from the run() function, so you can\n access the child spawn object or any other variable defined in run()\n (event_count, child, and extra_args are the most useful). A callback may\n return True to stop the current run process otherwise run() continues until\n the next event. A callback may also return a string which will be sent to\n the child. 'extra_args' is not used by directly run(). It provides a way to\n pass data to a callback function through run() through the locals\n dictionary passed to a callback."}
{"_id": "q_18251", "text": "This takes a given filename; tries to find it in the environment path;\n then checks if it is executable. This returns the full path to the filename\n if found and executable. Otherwise this returns None."}
{"_id": "q_18252", "text": "This sends a string to the child process. This returns the number of\n bytes written. If a log file was set then the data is also written to\n the log."}
{"_id": "q_18253", "text": "This sends a SIGINT to the child. It does not require\n the SIGINT to be the first character on a line."}
{"_id": "q_18254", "text": "Recompile unicode regexes as bytes regexes. Overridden in subclass."}
{"_id": "q_18255", "text": "This seeks through the stream until a pattern is matched. The\n pattern is overloaded and may take several types. The pattern can be a\n StringType, EOF, a compiled re, or a list of any of those types.\n Strings will be compiled to re types. This returns the index into the\n pattern list. If the pattern was not a list this returns index 0 on a\n successful match. This may raise exceptions for EOF or TIMEOUT. To\n avoid the EOF or TIMEOUT exceptions add EOF or TIMEOUT to the pattern\n list. That will cause expect to match an EOF or TIMEOUT condition\n instead of raising an exception.\n\n If you pass a list of patterns and more than one matches, the first match\n in the stream is chosen. If more than one pattern matches at that point,\n the leftmost in the pattern list is chosen. For example::\n\n # the input is 'foobar'\n index = p.expect (['bar', 'foo', 'foobar'])\n # returns 1 ('foo') even though 'foobar' is a \"better\" match\n\n Please note, however, that buffering can affect this behavior, since\n input arrives in unpredictable chunks. For example::\n\n # the input is 'foobar'\n index = p.expect (['foobar', 'foo'])\n # returns 0 ('foobar') if all input is available at once,\n # but returs 1 ('foo') if parts of the final 'bar' arrive late\n\n After a match is found the instance attributes 'before', 'after' and\n 'match' will be set. You can see all the data read before the match in\n 'before'. You can see the data that was matched in 'after'. The\n re.MatchObject used in the re match will be in 'match'. If an error\n occurred then 'before' will be set to all the data read so far and\n 'after' and 'match' will be None.\n\n If timeout is -1 then timeout will be set to the self.timeout value.\n\n A list entry may be EOF or TIMEOUT instead of a string. This will\n catch these exceptions and return the index of the list entry instead\n of raising the exception. The attribute 'after' will be set to the\n exception type. The attribute 'match' will be None. This allows you to\n write code like this::\n\n index = p.expect (['good', 'bad', pexpect.EOF, pexpect.TIMEOUT])\n if index == 0:\n do_something()\n elif index == 1:\n do_something_else()\n elif index == 2:\n do_some_other_thing()\n elif index == 3:\n do_something_completely_different()\n\n instead of code like this::\n\n try:\n index = p.expect (['good', 'bad'])\n if index == 0:\n do_something()\n elif index == 1:\n do_something_else()\n except EOF:\n do_some_other_thing()\n except TIMEOUT:\n do_something_completely_different()\n\n These two forms are equivalent. It all depends on what you want. You\n can also just expect the EOF if you are waiting for all output of a\n child to finish. For example::\n\n p = pexpect.spawn('/bin/ls')\n p.expect (pexpect.EOF)\n print p.before\n\n If you are trying to optimize for speed then see expect_list()."}
{"_id": "q_18256", "text": "This is the common loop used inside expect. The 'searcher' should be\n an instance of searcher_re or searcher_string, which describes how and what\n to search for in the input.\n\n See expect() for other arguments, return value and exceptions."}
{"_id": "q_18257", "text": "Recompile bytes regexes as unicode regexes."}
{"_id": "q_18258", "text": "This searches 'buffer' for the first occurence of one of the search\n strings. 'freshlen' must indicate the number of bytes at the end of\n 'buffer' which have not been searched before. It helps to avoid\n searching the same, possibly big, buffer over and over again.\n\n See class spawn for the 'searchwindowsize' argument.\n\n If there is a match this returns the index of that string, and sets\n 'start', 'end' and 'match'. Otherwise, this returns -1."}
{"_id": "q_18259", "text": "This searches 'buffer' for the first occurence of one of the regular\n expressions. 'freshlen' must indicate the number of bytes at the end of\n 'buffer' which have not been searched before.\n\n See class spawn for the 'searchwindowsize' argument.\n\n If there is a match this returns the index of that string, and sets\n 'start', 'end' and 'match'. Otherwise, returns -1."}
{"_id": "q_18260", "text": "Progress Monitor listener that logs all updates to the given logger"}
{"_id": "q_18261", "text": "Emit a message to the user.\n\n :param msg: The message to emit. If ``debug`` is ``True``,\n the message will be emitted to ``stderr`` only if\n the ``debug`` attribute is ``True``. If ``debug``\n is ``False``, the message will be emitted to\n ``stdout`` under the control of the ``verbose``\n attribute.\n :param level: Ignored if ``debug`` is ``True``. The message\n will only be emitted if the ``verbose``\n attribute is greater than or equal to the value\n of this parameter. Defaults to 1.\n :param debug: If ``True``, marks the message as a debugging\n message. The message will only be emitted if\n the ``debug`` attribute is ``True``."}
{"_id": "q_18262", "text": "Wrapper for subprocess.check_output."}
{"_id": "q_18263", "text": "Main entry point, expects doctopt arg dict as argd."}
{"_id": "q_18264", "text": "Print a message only if DEBUG is truthy."}
{"_id": "q_18265", "text": "If `s` is a file name, read the file and return it's content.\n Otherwise, return the original string.\n Returns None if the file was opened, but errored during reading."}
{"_id": "q_18266", "text": "Find the source for `filename`.\n\n Returns two values: the actual filename, and the source.\n\n The source returned depends on which of these cases holds:\n\n * The filename seems to be a non-source file: returns None\n\n * The filename is a source file, and actually exists: returns None.\n\n * The filename is a source file, and is in a zip file or egg:\n returns the source.\n\n * The filename is a source file, but couldn't be found: raises\n `NoSource`."}
{"_id": "q_18267", "text": "Returns a sorted list of the arcs actually executed in the code."}
{"_id": "q_18268", "text": "Returns a sorted list of the arcs in the code not executed."}
{"_id": "q_18269", "text": "Returns a list of line numbers that have more than one exit."}
{"_id": "q_18270", "text": "Return arcs that weren't executed from branch lines.\n\n Returns {l1:[l2a,l2b,...], ...}"}
{"_id": "q_18271", "text": "Get stats about branches.\n\n Returns a dict mapping line numbers to a tuple:\n (total_exits, taken_exits)."}
{"_id": "q_18272", "text": "Set the number of decimal places used to report percentages."}
{"_id": "q_18273", "text": "Returns the percent covered, as a string, without a percent sign.\n\n Note that \"0\" is only returned when the value is truly zero, and \"100\"\n is only returned when the value is truly 100. Rounding can never\n result in either \"0\" or \"100\"."}
{"_id": "q_18274", "text": "Applies cls_name to all needles found in haystack."}
{"_id": "q_18275", "text": "Given an list of words, this function highlights the matched words in the given string."}
{"_id": "q_18276", "text": "Run 'func' under os sandboxing"}
{"_id": "q_18277", "text": "Indent a string a given number of spaces or tabstops.\n\n indent(str,nspaces=4,ntabs=0) -> indent str by ntabs+nspaces.\n\n Parameters\n ----------\n\n instr : basestring\n The string to be indented.\n nspaces : int (default: 4)\n The number of spaces to be indented.\n ntabs : int (default: 0)\n The number of tabs to be indented.\n flatten : bool (default: False)\n Whether to scrub existing indentation. If True, all lines will be\n aligned to the same indentation. If False, existing indentation will\n be strictly increased.\n\n Returns\n -------\n\n str|unicode : string indented by ntabs and nspaces."}
{"_id": "q_18278", "text": "Format a string for screen printing.\n\n This removes some latex-type format codes."}
{"_id": "q_18279", "text": "Equivalent of textwrap.dedent that ignores unindented first line.\n\n This means it will still dedent strings like:\n '''foo\n is a bar\n '''\n\n For use in wrap_paragraphs."}
{"_id": "q_18280", "text": "Wrap multiple paragraphs to fit a specified width.\n\n This is equivalent to textwrap.wrap, but with support for multiple\n paragraphs, as separated by empty lines.\n\n Returns\n -------\n\n list of complete paragraphs, wrapped to fill `ncols` columns."}
{"_id": "q_18281", "text": "Calculate optimal info to columnize a list of string"}
{"_id": "q_18282", "text": "Returns a nested list, and info to columnize items\n\n Parameters :\n ------------\n\n items :\n list of strings to columize\n empty : (default None)\n default value to fill list if needed\n separator_size : int (default=2)\n How much caracters will be used as a separation between each columns.\n displaywidth : int (default=80)\n The width of the area onto wich the columns should enter\n\n Returns :\n ---------\n\n Returns a tuple of (strings_matrix, dict_info)\n\n strings_matrix :\n\n nested list of string, the outer most list contains as many list as\n rows, the innermost lists have each as many element as colums. If the\n total number of elements in `items` does not equal the product of\n rows*columns, the last element of some lists are filled with `None`.\n\n dict_info :\n some info to make columnize easier:\n\n columns_numbers : number of columns\n rows_numbers : number of rows\n columns_width : list of with of each columns\n optimal_separator_width : best separator width between columns\n\n Exemple :\n ---------\n\n In [1]: l = ['aaa','b','cc','d','eeeee','f','g','h','i','j','k','l']\n ...: compute_item_matrix(l,displaywidth=12)\n Out[1]:\n ([['aaa', 'f', 'k'],\n ['b', 'g', 'l'],\n ['cc', 'h', None],\n ['d', 'i', None],\n ['eeeee', 'j', None]],\n {'columns_numbers': 3,\n 'columns_width': [5, 1, 1],\n 'optimal_separator_width': 2,\n 'rows_numbers': 5})"}
{"_id": "q_18283", "text": "Collect whitespace-separated fields from string list\n\n Allows quick awk-like usage of string lists.\n\n Example data (in var a, created by 'a = !ls -l')::\n -rwxrwxrwx 1 ville None 18 Dec 14 2006 ChangeLog\n drwxrwxrwx+ 6 ville None 0 Oct 24 18:05 IPython\n\n a.fields(0) is ['-rwxrwxrwx', 'drwxrwxrwx+']\n a.fields(1,0) is ['1 -rwxrwxrwx', '6 drwxrwxrwx+']\n (note the joining by space).\n a.fields(-1) is ['ChangeLog', 'IPython']\n\n IndexErrors are ignored.\n\n Without args, fields() just split()'s the strings."}
{"_id": "q_18284", "text": "Wait for response until timeout.\n If timeout is specified to None, ``self.timeout`` is used.\n\n :param float timeout: seconds to wait I/O"}
{"_id": "q_18285", "text": "build argv to be passed to kernel subprocess"}
{"_id": "q_18286", "text": "If the file-object is not seekable, return ArchiveTemp of the fileobject,\n otherwise return the file-object itself"}
{"_id": "q_18287", "text": "Pretty print the object's representation."}
{"_id": "q_18288", "text": "Get a reasonable method resolution order of a class and its superclasses\n for both old-style and new-style classes."}
{"_id": "q_18289", "text": "The default print function. Used if an object does not provide one and\n it's none of the builtin objects."}
{"_id": "q_18290", "text": "Factory that returns a pprint function useful for sequences. Used by\n the default pprint for tuples, dicts, lists, sets and frozensets."}
{"_id": "q_18291", "text": "The pprint for the super type."}
{"_id": "q_18292", "text": "The pprint function for regular expression patterns."}
{"_id": "q_18293", "text": "The pprint for classes and types."}
{"_id": "q_18294", "text": "Base pprint for all functions and builtin functions."}
{"_id": "q_18295", "text": "Add a pretty printer for a given type."}
{"_id": "q_18296", "text": "Add a pretty printer for a type specified by the module and name of a type\n rather than the type object itself."}
{"_id": "q_18297", "text": "Add literal text to the output."}
{"_id": "q_18298", "text": "Add a breakable separator to the output. This does not mean that it\n will automatically break here. If no breaking on this position takes\n place the `sep` is inserted which default to one space."}
{"_id": "q_18299", "text": "Flush data that is left in the buffer."}
{"_id": "q_18300", "text": "Return a color table with fields for exception reporting.\n\n The table is an instance of ColorSchemeTable with schemes added for\n 'Linux', 'LightBG' and 'NoColor' and fields for exception handling filled\n in.\n\n Examples:\n\n >>> ec = exception_colors()\n >>> ec.active_scheme_name\n ''\n >>> print ec.active_colors\n None\n\n Now we activate a color scheme:\n >>> ec.set_active_scheme('NoColor')\n >>> ec.active_scheme_name\n 'NoColor'\n >>> sorted(ec.active_colors.keys())\n ['Normal', 'caret', 'em', 'excName', 'filename', 'filenameEm', 'line',\n 'lineno', 'linenoEm', 'name', 'nameEm', 'normalEm', 'topline', 'vName',\n 'val', 'valEm']"}
{"_id": "q_18301", "text": "Setup before_request, after_request handlers for tracing."}
{"_id": "q_18302", "text": "Records the starting time of this reqeust."}
{"_id": "q_18303", "text": "Calculates the request duration, and adds a transaction\n ID to the header."}
{"_id": "q_18304", "text": "Get the clipboard's text using Tkinter.\n\n This is the default on systems that are not Windows or OS X. It may\n interfere with other UI toolkits and should be replaced with an\n implementation that uses that toolkit."}
{"_id": "q_18305", "text": "Returns a safe build_prefix"}
{"_id": "q_18306", "text": "Rekey a dict that has been forced to use str keys where there should be\n ints by json."}
{"_id": "q_18307", "text": "extract ISO8601 dates from unpacked JSON"}
{"_id": "q_18308", "text": "squash datetime objects into ISO8601 strings"}
{"_id": "q_18309", "text": "default function for packing datetime objects in JSON."}
{"_id": "q_18310", "text": "Clean an object to ensure it's safe to encode in JSON.\n \n Atomic, immutable objects are returned unmodified. Sets and tuples are\n converted to lists, lists are copied and dicts are also copied.\n\n Note: dicts whose keys could cause collisions upon encoding (such as a dict\n with both the number 1 and the string '1' as keys) will cause a ValueError\n to be raised.\n\n Parameters\n ----------\n obj : any python object\n\n Returns\n -------\n out : object\n \n A version of the input which will not cause an encoding error when\n encoded as JSON. Note that this function does not *encode* its inputs,\n it simply sanitizes it so that there will be no encoding errors later.\n\n Examples\n --------\n >>> json_clean(4)\n 4\n >>> json_clean(range(10))\n [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n >>> sorted(json_clean(dict(x=1, y=2)).items())\n [('x', 1), ('y', 2)]\n >>> sorted(json_clean(dict(x=1, y=2, z=[1,2,3])).items())\n [('x', 1), ('y', 2), ('z', [1, 2, 3])]\n >>> json_clean(True)\n True"}
{"_id": "q_18311", "text": "Verify that self.install_dir is .pth-capable dir, if needed"}
{"_id": "q_18312", "text": "Write an executable file to the scripts directory"}
{"_id": "q_18313", "text": "simple function that takes args, prints a short message, sleeps for a time, and returns the same args"}
{"_id": "q_18314", "text": "convert .pyx extensions to .c"}
{"_id": "q_18315", "text": "watch iopub channel, and print messages"}
{"_id": "q_18316", "text": "Create a package finder appropriate to this install command.\n This method is meant to be overridden by subclasses, not\n called directly."}
{"_id": "q_18317", "text": "Adjust the log level when log_level is set."}
{"_id": "q_18318", "text": "Start logging for this application.\n\n The default is to log to stdout using a StreaHandler. The log level\n starts at loggin.WARN, but this can be adjusted by setting the\n ``log_level`` attribute."}
{"_id": "q_18319", "text": "Print the alias part of the help."}
{"_id": "q_18320", "text": "Print the subcommand part of the help."}
{"_id": "q_18321", "text": "Print the help for each Configurable class in self.classes.\n\n If classes=False (the default), only flags and aliases are printed."}
{"_id": "q_18322", "text": "Print usage and examples.\n\n This usage string goes at the end of the command line help string\n and should contain examples of the application's usage."}
{"_id": "q_18323", "text": "Fire the traits events when the config is updated."}
{"_id": "q_18324", "text": "Initialize a subcommand with argv."}
{"_id": "q_18325", "text": "flatten flags and aliases, so cl-args override as expected.\n \n This prevents issues such as an alias pointing to InteractiveShell,\n but a config file setting the same trait in TerminalInteractiveShell\n getting inappropriate priority over the command-line arg.\n\n Only aliases with exactly one descendant in the class list\n will be promoted."}
{"_id": "q_18326", "text": "Parse the command line arguments."}
{"_id": "q_18327", "text": "generate default config file from Configurables"}
{"_id": "q_18328", "text": "Insert spaces between words until it is wide enough for `width`."}
{"_id": "q_18329", "text": "Prepend or append text to lines. Yields each line."}
{"_id": "q_18330", "text": "Format block by wrapping on spaces."}
{"_id": "q_18331", "text": "Remove spaces in between words until it is small enough for\n `width`.\n This will always leave at least one space between words,\n so it may not be able to get below `width` characters."}
{"_id": "q_18332", "text": "Choose k random elements of array."}
{"_id": "q_18333", "text": "Produce a sequence of formatted lines from info.\n\n `info` is a sequence of pairs (label, data). The produced lines are\n nicely formatted, ready to print."}
{"_id": "q_18334", "text": "Write a line of debug output."}
{"_id": "q_18335", "text": "Update all the class traits having ``config=True`` as metadata.\n\n For any class trait with a ``config`` metadata attribute that is\n ``True``, we update the trait with the value of the corresponding\n config entry."}
{"_id": "q_18336", "text": "Get the help string for a single trait.\n \n If `inst` is given, its current trait values will be used in place of\n the class default."}
{"_id": "q_18337", "text": "Get the config class config section"}
{"_id": "q_18338", "text": "unset _instance for this class and singleton parents."}
{"_id": "q_18339", "text": "Returns a global instance of this class.\n\n This method creates a new instance if none has previously been created\n and returns a previously created instance if one already exists.\n\n The arguments and keyword arguments passed to this method are passed\n on to the :meth:`__init__` method of the class upon instantiation.\n\n Examples\n --------\n\n Create a singleton class using instance, and retrieve it::\n\n >>> from IPython.config.configurable import SingletonConfigurable\n >>> class Foo(SingletonConfigurable): pass\n >>> foo = Foo.instance()\n >>> foo == Foo.instance()\n True\n\n Create a subclass that is retrieved using the base class instance::\n\n >>> class Bar(SingletonConfigurable): pass\n >>> class Bam(Bar): pass\n >>> bam = Bam.instance()\n >>> bam == Bar.instance()\n True"}
{"_id": "q_18340", "text": "Check IP through the httpBL API\n\n :param ip: ipv4 ip address\n :return: httpBL results or None if an error occurred"}
{"_id": "q_18341", "text": "Check if IP is a threat\n\n :param result: httpBL results; if None, then results from last check_ip() used (optional)\n :param harmless_age: harmless age for check if httpBL age is older (optional)\n :param threat_score: threat score for check if httpBL threat is lower (optional)\n :param threat_type: threat type, if not equal httpBL score type, then return False (optional)\n :return: True or False"}
{"_id": "q_18342", "text": "Check if IP is suspicious\n\n :param result: httpBL results; if None, then results from last check_ip() used (optional)\n :return: True or False"}
{"_id": "q_18343", "text": "Invalidate httpBL cache for IP address\n\n :param ip: ipv4 IP address"}
{"_id": "q_18344", "text": "Reimplemented to ensure that signals are dispatched immediately."}
{"_id": "q_18345", "text": "Reimplemented to emit signal."}
{"_id": "q_18346", "text": "Upload the next batch of items, return whether successful."}
{"_id": "q_18347", "text": "Get a single item from the queue."}
{"_id": "q_18348", "text": "Attempt to upload the batch and retry before raising an error"}
{"_id": "q_18349", "text": "Read from a pipe ignoring EINTR errors.\n\n This is necessary because when reading from pipes with GUI event loops\n running in the background, often interrupts are raised that stop the\n command from completing."}
{"_id": "q_18350", "text": "Open a command in a shell subprocess and execute a callback.\n\n This function provides common scaffolding for creating subprocess.Popen()\n calls. It creates a Popen object and then calls the callback with it.\n\n Parameters\n ----------\n cmd : str\n A string to be executed with the underlying system shell (by calling\n :func:`Popen` with ``shell=True``.\n\n callback : callable\n A one-argument function that will be called with the Popen object.\n\n stderr : file descriptor number, optional\n By default this is set to ``subprocess.PIPE``, but you can also pass the\n value ``subprocess.STDOUT`` to force the subprocess' stderr to go into\n the same file descriptor as its stdout. This is useful to read stdout\n and stderr combined in the order they are generated.\n\n Returns\n -------\n The return value of the provided callback is returned."}
{"_id": "q_18351", "text": "Split a command line's arguments in a shell-like manner.\n\n This is a modified version of the standard library's shlex.split()\n function, but with a default of posix=False for splitting, so that quotes\n in inputs are respected.\n\n if strict=False, then any errors shlex.split would raise will result in the\n unparsed remainder being the last element of the list, rather than raising.\n This is because we sometimes use arg_split to parse things other than\n command-line args."}
{"_id": "q_18352", "text": "Translate camelCase into underscore format.\n\n >>> _camelcase_to_underscore('minutesBetweenSummaries')\n 'minutes_between_summaries'"}
{"_id": "q_18353", "text": "Creates the Trello endpoint tree.\n\n >>> r = {'1': { \\\n 'actions': {'METHODS': {'GET'}}, \\\n 'boards': { \\\n 'members': {'METHODS': {'DELETE'}}}} \\\n }\n >>> r == create_tree([ \\\n 'GET /1/actions/[idAction]', \\\n 'DELETE /1/boards/[board_id]/members/[idMember]'])\n True"}
{"_id": "q_18354", "text": "Compress a directory history into a new one with at most 20 entries.\n\n Return a new list made from the first and last 10 elements of dhist after\n removal of duplicates."}
{"_id": "q_18355", "text": "Class decorator for all subclasses of the main Magics class.\n\n Any class that subclasses Magics *must* also apply this decorator, to\n ensure that all the methods that have been decorated as line/cell magics\n get correctly registered in the class instance. This is necessary because\n when method decorators run, the class does not exist yet, so they\n temporarily store their information into a module global. Application of\n this class decorator copies that global data to the class instance and\n clears the global.\n\n Obviously, this mechanism is not thread-safe, which means that the\n *creation* of subclasses of Magic should only be done in a single-thread\n context. Instantiation of the classes has no restrictions. Given that\n these classes are typically created at IPython startup time and before user\n application code becomes active, in practice this should not pose any\n problems."}
{"_id": "q_18356", "text": "Decorator factory for standalone functions."}
{"_id": "q_18357", "text": "Register one or more instances of Magics.\n\n Take one or more classes or instances of classes that subclass the main \n `core.Magic` class, and register them with IPython to use the magic\n functions they provide. The registration process will then ensure that\n any methods that have been decorated to provide line and/or cell magics will\n be recognized with the `%x`/`%%x` syntax as a line/cell magic\n respectively.\n\n If classes are given, they will be instantiated with the default\n constructor. If your classes need a custom constructor, you should\n instantiate them first and pass the instance.\n\n The provided arguments can be an arbitrary mix of classes and instances.\n\n Parameters\n ----------\n magic_objects : one or more classes or instances"}
{"_id": "q_18358", "text": "Expose a standalone function as magic function for IPython.\n\n This will create an IPython magic (line, cell or both) from a\n standalone function. The functions should have the following\n signatures: \n\n * For line magics: `def f(line)`\n * For cell magics: `def f(line, cell)`\n * For a function that does both: `def f(line, cell=None)`\n\n In the latter case, the function will be called with `cell==None` when\n invoked as `%f`, and with cell as a string when invoked as `%%f`.\n\n Parameters\n ----------\n func : callable\n Function to be registered as a magic.\n\n magic_kind : str\n Kind of magic, one of 'line', 'cell' or 'line_cell'\n\n magic_name : optional str\n If given, the name the magic will have in the IPython namespace. By\n default, the name of the function itself is used."}
{"_id": "q_18359", "text": "Format a string for latex inclusion."}
{"_id": "q_18360", "text": "Show a basic reference about the GUI Console."}
{"_id": "q_18361", "text": "Return task info dictionary from task label. Internal function,\n pretty much only used in migrations since the model methods aren't there."}
{"_id": "q_18362", "text": "Find and return a callable object from a task info dictionary"}
{"_id": "q_18363", "text": "Calculate next run time of this task"}
{"_id": "q_18364", "text": "Internal instance method to submit this task for running immediately.\n Does not handle any iteration, end-date, etc., processing."}
{"_id": "q_18365", "text": "Internal instance method run by worker process to actually run the task callable."}
{"_id": "q_18366", "text": "Instance method to run this task immediately."}
{"_id": "q_18367", "text": "Class method to run a callable with a specified number of iterations"}
{"_id": "q_18368", "text": "Class method to run a one-shot task, immediately."}
{"_id": "q_18369", "text": "Set the url file.\n\n Here we don't try to actually see if it exists or is valid, as that\n is handled by the connection logic."}
{"_id": "q_18370", "text": "Promote engine to listening kernel, accessible to frontends."}
{"_id": "q_18371", "text": "Execute a test described by a YAML file.\n\n :param ctxt: A ``timid.context.Context`` object.\n :param test: The name of a YAML file containing the test\n description. Note that the current working directory\n set up in ``ctxt.environment`` does not affect the\n resolution of this file.\n :param key: An optional key into the test description file. If\n not ``None``, the file named by ``test`` must be a\n YAML dictionary of lists of steps; otherwise, it must\n be a simple list of steps.\n :param check: If ``True``, only performs a syntax check of the\n test steps indicated by ``test`` and ``key``; the\n test itself is not run.\n :param exts: An instance of ``timid.extensions.ExtensionSet``\n describing the extensions to be called while\n processing the test steps."}
{"_id": "q_18372", "text": "Filter a namespace dictionary by name pattern and item type."}
{"_id": "q_18373", "text": "Return dictionary of all objects in a namespace dictionary that match\n type_pattern and filter."}
{"_id": "q_18374", "text": "Write to a local log file"}
{"_id": "q_18375", "text": "Helper method to store username and password"}
{"_id": "q_18376", "text": "Is called after every pylab drawing command"}
{"_id": "q_18377", "text": "Send all figures that changed\n\n This is meant to be called automatically and will call show() if, during\n prior code execution, there had been any calls to draw_if_interactive.\n \n This function is meant to be used as a post_execute callback in IPython,\n so user-caused errors are handled with showtraceback() instead of being\n allowed to raise. If this function is not called from within IPython,\n then these exceptions will raise."}
{"_id": "q_18378", "text": "Load an IPython extension by its module name.\n\n If :func:`load_ipython_extension` returns anything, this function\n will return that object."}
{"_id": "q_18379", "text": "initialize tornado webapp and httpserver"}
{"_id": "q_18380", "text": "SIGINT handler spawns confirmation dialog"}
{"_id": "q_18381", "text": "confirm shutdown on ^C\n \n A second ^C, or answering 'y' within 5s will cause shutdown,\n otherwise original SIGINT handler will be restored.\n \n This doesn't work on Windows."}
{"_id": "q_18382", "text": "shutdown all kernels\n \n The kernels will shutdown themselves when this process no longer exists,\n but explicit shutdown allows the KernelManagers to cleanup the connection files."}
{"_id": "q_18383", "text": "Price European and Asian options using a Monte Carlo method.\n\n Parameters\n ----------\n S : float\n The initial price of the stock.\n K : float\n The strike price of the option.\n sigma : float\n The volatility of the stock.\n r : float\n The risk free interest rate.\n days : int\n The number of days until the option expires.\n paths : int\n The number of Monte Carlo paths used to price the option.\n\n Returns\n -------\n A tuple of (E. call, E. put, A. call, A. put) option prices."}
{"_id": "q_18384", "text": "Set connection parameters. Call set_connection with no arguments to clear."}
{"_id": "q_18385", "text": "Set delegate parameters. Call set_delegate with no arguments to clear."}
{"_id": "q_18386", "text": "returns a list of named tuples, x.timestamp, x.amount including block rewards"}
{"_id": "q_18387", "text": "Launches a localhost kernel, binding to the specified ports.\n\n Parameters\n ----------\n code : str,\n A string of Python code that imports and executes a kernel entry point.\n\n stdin, stdout, stderr : optional (default None)\n Standards streams, as defined in subprocess.Popen.\n\n fname : unicode, optional\n The JSON connector file, containing ip/port/hmac key information.\n\n key : str, optional\n The Session key used for HMAC authentication.\n\n executable : str, optional (default sys.executable)\n The Python executable to use for the kernel process.\n\n independent : bool, optional (default False)\n If set, the kernel process is guaranteed to survive if this process\n dies. If not set, an effort is made to ensure that the kernel is killed\n when this process dies. Note that in this case it is still good practice\n to kill kernels manually before exiting.\n\n extra_arguments : list, optional\n A list of extra arguments to pass when executing the launch code.\n \n cwd : path, optional\n The working dir of the kernel process (default: cwd of this process).\n\n Returns\n -------\n A tuple of form:\n (kernel_process, shell_port, iopub_port, stdin_port, hb_port)\n where kernel_process is a Popen object and the ports are integers."}
{"_id": "q_18388", "text": "This is the actual zest.releaser entry point\n\n Relevant items in the context dict:\n\n name\n Name of the project being released\n\n tagdir\n Directory where the tag checkout is placed (*if* a tag\n checkout has been made)\n\n version\n Version we're releasing\n\n workingdir\n Original working directory"}
{"_id": "q_18389", "text": "Fix the version in metadata.txt\n\n Relevant context dict item for both prerelease and postrelease:\n ``new_version``."}
{"_id": "q_18390", "text": "return whether an object is mappable or not."}
{"_id": "q_18391", "text": "Returns the pth partition of q partitions of seq."}
{"_id": "q_18392", "text": "Massages the 'true' and 'false' strings to bool equivalents.\n\n :param str config_val: The env var value.\n :param EnvironmentVariable evar: The EVar object we are validating\n a value for.\n :rtype: bool\n :return: True or False, depending on the value."}
{"_id": "q_18393", "text": "Convert an evar value into a Python logging level constant.\n\n :param str config_val: The env var value.\n :param EnvironmentVariable evar: The EVar object we are validating\n a value for.\n :return: A validated string.\n :raises: ValueError if the log level is invalid."}
{"_id": "q_18394", "text": "Run the given file interactively.\n\n Inputs:\n\n -fname: name of the file to execute.\n\n See the run_source docstring for the meaning of the optional\n arguments."}
{"_id": "q_18395", "text": "Run the given source code interactively.\n\n Inputs:\n\n - source: a string of code to be executed, or an open file object we\n can iterate over.\n\n Optional inputs:\n\n - interact(False): if true, start to interact with the running\n program at the end of the script. Otherwise, just exit.\n\n - get_output(False): if true, capture the output of the child process\n (filtering the input commands out) and return it as a string.\n\n Returns:\n A string containing the process output, but only if requested."}
{"_id": "q_18396", "text": "Generate a Cobertura-compatible XML report for `morfs`.\n\n `morfs` is a list of modules or filenames.\n\n `outfile` is a file object to write the XML to."}
{"_id": "q_18397", "text": "Add to the XML report for a single file."}
{"_id": "q_18398", "text": "This will download a segment of pi from super-computing.org\n if the file is not already present."}
{"_id": "q_18399", "text": "Add up a list of freq counts to get the total counts."}
{"_id": "q_18400", "text": "Read digits of pi from a file and compute the n digit frequencies."}
{"_id": "q_18401", "text": "Consume digits of pi and compute 1 digit freq. counts."}
{"_id": "q_18402", "text": "Consume digits of pi and compute 2 digits freq. counts."}
{"_id": "q_18403", "text": "Consume digits of pi and compute n digits freq. counts.\n\n This should only be used for 1-6 digits."}
{"_id": "q_18404", "text": "Plot two digits frequency counts using matplotlib."}
{"_id": "q_18405", "text": "Print the value of an expression from the caller's frame.\n\n Takes an expression, evaluates it in the caller's frame and prints both\n the given expression and the resulting value (as well as a debug mark\n indicating the name of the calling function. The input must be of a form\n suitable for eval().\n\n An optional message can be passed, which will be prepended to the printed\n expr->value pair."}
{"_id": "q_18406", "text": "User-friendly reverse. Pass arguments and keyword arguments to Django's `reverse`\n\tas `args` and `kwargs` arguments, respectively.\n\n\tThe special optional keyword argument `query` is a dictionary of query (or GET) parameters\n\tthat can be appended to the `reverse`d URL.\n\n\tExample:\n\n\t\treverse('products:category', categoryId = 5, query = {'page': 2})\n\n\tis equivalent to\n\n\t\tdjango.core.urlresolvers.reverse('products:category', kwargs = {'categoryId': 5}) + '?page=2'"}
{"_id": "q_18407", "text": "A unittest suite for one or more doctest files.\n\n The path to each doctest file is given as a string; the\n interpretation of that string depends on the keyword argument\n \"module_relative\".\n\n A number of options may be provided as keyword arguments:\n\n module_relative\n If \"module_relative\" is True, then the given file paths are\n interpreted as os-independent module-relative paths. By\n default, these paths are relative to the calling module's\n directory; but if the \"package\" argument is specified, then\n they are relative to that package. To ensure os-independence,\n \"filename\" should use \"/\" characters to separate path\n segments, and may not be an absolute path (i.e., it may not\n begin with \"/\").\n\n If \"module_relative\" is False, then the given file paths are\n interpreted as os-specific paths. These paths may be absolute\n or relative (to the current working directory).\n\n package\n A Python package or the name of a Python package whose directory\n should be used as the base directory for module relative paths.\n If \"package\" is not specified, then the calling module's\n directory is used as the base directory for module relative\n filenames. It is an error to specify \"package\" if\n \"module_relative\" is False.\n\n setUp\n A set-up function. This is called before running the\n tests in each file. The setUp function will be passed a DocTest\n object. The setUp function can access the test globals as the\n globs attribute of the test passed.\n\n tearDown\n A tear-down function. This is called after running the\n tests in each file. The tearDown function will be passed a DocTest\n object. The tearDown function can access the test globals as the\n globs attribute of the test passed.\n\n globs\n A dictionary containing initial global variables for the tests.\n\n optionflags\n A set of doctest option flags expressed as an integer.\n\n parser\n A DocTestParser (or subclass) that should be used to extract\n tests from the files."}
{"_id": "q_18408", "text": "Debug a single doctest docstring, in argument `src`"}
{"_id": "q_18409", "text": "Debug a test script. `src` is the script, as a string."}
{"_id": "q_18410", "text": "Debug a single doctest docstring.\n\n Provide the module (or dotted name of the module) containing the\n test to be debugged and the name (within the module) of the object\n with the docstring with tests to be debugged."}
{"_id": "q_18411", "text": "Compress category 'hashroot', so hset is fast again\n\n hget will fail if fast_only is True for compressed items (that were\n hset before hcompress)."}
{"_id": "q_18412", "text": "returns whether this record should be printed"}
{"_id": "q_18413", "text": "return the bool of whether `record` starts with\n any item in `matchers`"}
{"_id": "q_18414", "text": "Call this to embed IPython at the current point in your program.\n\n The first invocation of this will create an :class:`InteractiveShellEmbed`\n instance and then call it. Consecutive calls just call the already\n created instance.\n\n Here is a simple example::\n\n from IPython import embed\n a = 10\n b = 20\n embed('First time')\n c = 30\n d = 40\n embed\n\n Full customization can be done by passing a :class:`Struct` in as the\n config argument."}
{"_id": "q_18415", "text": "Embeds IPython into a running python program.\n\n Input:\n\n - header: An optional header message can be specified.\n\n - local_ns, module: working local namespace (a dict) and module (a\n module or similar object). If given as None, they are automatically\n taken from the scope where the shell was called, so that\n program variables become visible.\n\n - stack_depth: specifies how many levels in the stack to go to\n looking for namespaces (when local_ns or module is None). This\n allows an intermediate caller to make sure that this function gets\n the namespace from the intended level in the stack. By default (0)\n it will get its locals and globals from the immediate caller.\n\n Warning: it's possible to use this in a program which is being run by\n IPython itself (via %run), but some funny things will happen (a few\n globals get overwritten). In the future this will be cleaned up, as\n there is no fundamental reason why it can't work perfectly."}
{"_id": "q_18416", "text": "Prepare new csv writers, write title rows and return them."}
{"_id": "q_18417", "text": "method to subscribe a user to a service"}
{"_id": "q_18418", "text": "function to init option parser"}
{"_id": "q_18419", "text": "Run a python module, as though with ``python -m name args...``.\n\n `modulename` is the name of the module, possibly a dot-separated name.\n `args` is the argument array to present as sys.argv, including the first\n element naming the module being executed."}
{"_id": "q_18420", "text": "return a string for an html table"}
{"_id": "q_18421", "text": "Cancel the completion\n\n should be called when the completer have to be dismissed\n\n This reset internal variable, clearing the temporary buffer\n of the console where the completion are shown."}
{"_id": "q_18422", "text": "Change the selection index, and make sure it stays in the right range\n\n A little more complicated than just doing modulo the number of row columns\n to be sure to cycle through all elements.\n\n horizontally, the elements are mapped like this :\n to r <-- a b c d e f --> to g\n to f <-- g h i j k l --> to m\n to l <-- m n o p q r --> to a\n\n and vertically\n a d g j m p\n b e h k n q\n c f i l o r"}
{"_id": "q_18423", "text": "move cursor up"}
{"_id": "q_18424", "text": "Return a dictionary of words and word counts in a string."}
{"_id": "q_18425", "text": "Write the XML job description to a file."}
{"_id": "q_18426", "text": "Validate the given pin against the schema.\n\n :param dict pin: The pin to validate:\n :raises pypebbleapi.schemas.DocumentError: If the pin is not valid."}
{"_id": "q_18427", "text": "Delete a shared pin.\n\n :param str pin_id: The id of the pin to delete.\n :raises `requests.exceptions.HTTPError`: If an HTTP error occurred."}
{"_id": "q_18428", "text": "Send a user pin.\n\n :param str user_token: The token of the user.\n :param dict pin: The pin.\n :param bool skip_validation: Whether to skip the validation.\n :raises pypebbleapi.schemas.DocumentError: If the validation process failed.\n :raises `requests.exceptions.HTTPError`: If an HTTP error occurred."}
{"_id": "q_18429", "text": "Delete a user pin.\n\n :param str user_token: The token of the user.\n :param str pin_id: The id of the pin to delete.\n :raises `requests.exceptions.HTTPError`: If an HTTP error occurred."}
{"_id": "q_18430", "text": "Get the list of the topics which a user is subscribed to.\n\n :param str user_token: The token of the user.\n :return: The list of the topics.\n :rtype: list\n :raises `requests.exceptions.HTTPError`: If an HTTP error occurred."}
{"_id": "q_18431", "text": "Create a submonitor with the given units"}
{"_id": "q_18432", "text": "Increment the monitor with N units worked and an optional message"}
{"_id": "q_18433", "text": "Create a sub monitor that stands for N units of work in this monitor\n The sub task should call .begin (or use @monitored / with .task) before calling updates"}
{"_id": "q_18434", "text": "Signal that this task is done.\n This is completely optional and will just call .update with the remaining work."}
{"_id": "q_18435", "text": "Print a string, piping through a pager.\n\n This version ignores the screen_lines and pager_cmd arguments and uses\n IPython's payload system instead.\n\n Parameters\n ----------\n strng : str\n Text to page.\n\n start : int\n Starting line at which to place the display.\n \n html : str, optional\n If given, an html string to send as well.\n\n auto_html : bool, optional\n If true, the input string is assumed to be valid reStructuredText and is\n converted to HTML with docutils. Note that if docutils is not found,\n this option is silently ignored.\n\n Note\n ----\n\n Only one of the ``html`` and ``auto_html`` options can be given, not\n both."}
{"_id": "q_18436", "text": "Acquires the correct error for a given response.\n\n :param requests.Response response: HTTP error response\n :returns: the appropriate error for a given response\n :rtype: APIError"}
{"_id": "q_18437", "text": "Load the config from a file and return it as a Struct."}
{"_id": "q_18438", "text": "decode argv if bytes, using stdin.encoding, falling back on the default encoding"}
{"_id": "q_18439", "text": "Parse command line arguments and return as a Config object.\n\n Parameters\n ----------\n\n args : optional, list\n If given, a list with the structure of sys.argv[1:] to parse\n arguments from. If not given, the instance's self.argv attribute\n (given at construction time) is used."}
{"_id": "q_18440", "text": "self.parser->self.parsed_data"}
{"_id": "q_18441", "text": "imp.find_module variant that only returns the path of the module.\n \n `imp.find_module` returns a filehandle that we are not interested in.\n Also we ignore any bytecode files that `imp.find_module` finds.\n\n Parameters\n ----------\n name : str\n name of module to locate\n path : list of str\n list of paths to search for `name`. If path=None then search sys.path\n\n Returns\n -------\n filename : str\n Return full path of module or None if module is missing or does not have\n .py or .pyw extension"}
{"_id": "q_18442", "text": "Register a callback to be called with this Launcher's stop_data\n when the process actually finishes."}
{"_id": "q_18443", "text": "Call this to trigger startup actions.\n\n This logs the process startup and sets the state to 'running'. It is\n a pass-through so it can be used as a callback."}
{"_id": "q_18444", "text": "Send INT, wait a delay and then send KILL."}
{"_id": "q_18445", "text": "Build self.args using all the fields."}
{"_id": "q_18446", "text": "Start n instances of the program using mpiexec."}
{"_id": "q_18447", "text": "send a single file"}
{"_id": "q_18448", "text": "fetch a single file"}
{"_id": "q_18449", "text": "determine engine count from `engines` dict"}
{"_id": "q_18450", "text": "Start engines by profile or profile_dir.\n `n` is ignored, and the `engines` config property is used instead."}
{"_id": "q_18451", "text": "Start n copies of the process using the Win HPC job scheduler."}
{"_id": "q_18452", "text": "load the default context with the default values for the basic keys\n\n because the _trait_changed methods only load the context if they\n are set to something other than the default value."}
{"_id": "q_18453", "text": "Take the output of the submit command and return the job id."}
{"_id": "q_18454", "text": "Instantiate and write the batch script to the work_dir."}
{"_id": "q_18455", "text": "Reimplemented to return a custom context menu for images."}
{"_id": "q_18456", "text": "Append raw JPG data to the widget."}
{"_id": "q_18457", "text": "Append raw PNG data to the widget."}
{"_id": "q_18458", "text": "Append raw SVG data to the widget."}
{"_id": "q_18459", "text": "Copies the ImageResource with 'name' to the clipboard."}
{"_id": "q_18460", "text": "Returns the QImage stored as the ImageResource with 'name'."}
{"_id": "q_18461", "text": "insert a raw image, jpg or png"}
{"_id": "q_18462", "text": "Configure the user's environment."}
{"_id": "q_18463", "text": "Called to show the auto-rewritten input for autocall and friends.\n\n FIXME: this payload is currently not correctly processed by the\n frontend."}
{"_id": "q_18464", "text": "Send the specified text to the frontend to be presented at the next\n input cell."}
{"_id": "q_18465", "text": "Converts the request parameters to Python.\n\n :param request: <pyramid.request.Request> || <dict>\n\n :return: <dict>"}
{"_id": "q_18466", "text": "Extracts ORB context information from the request.\n\n :param request: <pyramid.request.Request>\n :param model: <orb.Model> || None\n\n :return: {<str> key: <variant> value} values, <orb.Context>"}
{"_id": "q_18467", "text": "Read a list of strings.\n\n The value of `section` and `option` is treated as a comma- and newline-\n separated list of strings. Each value is stripped of whitespace.\n\n Returns the list of strings."}
{"_id": "q_18468", "text": "Read a list of full-line strings.\n\n The value of `section` and `option` is treated as a newline-separated\n list of strings. Each value is stripped of whitespace.\n\n Returns the list of strings."}
{"_id": "q_18469", "text": "Read configuration from a .rc file.\n\n `filename` is a file name to read."}
{"_id": "q_18470", "text": "Expand '~'-style usernames in strings.\n\n This is similar to :func:`os.path.expanduser`, but it computes and returns\n extra information that will be useful if the input was being used in\n computing completions, and you wish to return the completions with the\n original '~' instead of its expanded value.\n\n Parameters\n ----------\n path : str\n String to be expanded. If no ~ is present, the output is the same as the\n input.\n\n Returns\n -------\n newpath : str\n Result of ~ expansion in the input path.\n tilde_expand : bool\n Whether any expansion was performed or not.\n tilde_val : str\n The value that ~ was replaced with."}
{"_id": "q_18471", "text": "Compute matches when text contains a dot.\n\n Assuming the text is of the form NAME.NAME....[NAME], and is\n evaluatable in self.namespace or self.global_namespace, it will be\n evaluated and its attributes (as revealed by dir()) are used as\n possible completions. (For class instances, class members are\n also considered.)\n\n WARNING: this can still invoke arbitrary C code, if an object\n with a __getattr__ hook is evaluated."}
{"_id": "q_18472", "text": "update the splitter and readline delims when greedy is changed"}
{"_id": "q_18473", "text": "Match filenames, expanding ~USER type strings.\n\n Most of the seemingly convoluted logic in this completer is an\n attempt to handle filenames with spaces in them. And yet it's not\n quite perfect, because Python's readline doesn't expose all of the\n GNU readline details needed for this to be done correctly.\n\n For a filename with a space in it, the printed completions will be\n only the parts after what's already been typed (instead of the\n full completions, as is normally done). I don't think with the\n current (as of Python 2.3) Python readline it's possible to do\n better."}
{"_id": "q_18474", "text": "Match internal system aliases"}
{"_id": "q_18475", "text": "Match attributes or global python names"}
{"_id": "q_18476", "text": "Return the state-th possible completion for 'text'.\n\n This is called successively with state == 0, 1, 2, ... until it\n returns None. The completion should begin with 'text'.\n\n Parameters\n ----------\n text : string\n Text to perform the completion on.\n\n state : int\n Counter used by readline."}
{"_id": "q_18477", "text": "Check if a specific record matches tests."}
{"_id": "q_18478", "text": "Should we silence the display hook because of ';'?"}
{"_id": "q_18479", "text": "Write the format data dict to the frontend.\n\n This default version of this method simply writes the plain text\n representation of the object to ``io.stdout``. Subclasses should\n override this method to send the entire `format_dict` to the\n frontends.\n\n Parameters\n ----------\n format_dict : dict\n The format dict for the object passed to `sys.displayhook`."}
{"_id": "q_18480", "text": "Log the output."}
{"_id": "q_18481", "text": "Used exclusively as a thread which keeps the WebSocket alive."}
{"_id": "q_18482", "text": "Connects and subscribes to the WebSocket Feed."}
{"_id": "q_18483", "text": "raise `InvalidOperationException` if it is frozen."}
{"_id": "q_18484", "text": "Convert a MySQL TIMESTAMP to a Timestamp object."}
{"_id": "q_18485", "text": "schedule call to eventloop from IOLoop"}
{"_id": "q_18486", "text": "dispatch control requests"}
{"_id": "q_18487", "text": "dispatch shell requests"}
{"_id": "q_18488", "text": "register dispatchers for streams"}
{"_id": "q_18489", "text": "Publish the code request on the pyin stream."}
{"_id": "q_18490", "text": "abort a specific msg by id"}
{"_id": "q_18491", "text": "Clear our namespace."}
{"_id": "q_18492", "text": "prefixed topic for IOPub messages"}
{"_id": "q_18493", "text": "Marks a view function as being exempt from the cached httpbl view protection."}
{"_id": "q_18494", "text": "Return absolute, normalized path to directory, if it exists; None\n otherwise."}
{"_id": "q_18495", "text": "A name is file-like if it is a path that exists, or it has a\n directory part, or it ends in .py, or it isn't a legal python\n identifier."}
{"_id": "q_18496", "text": "Is obj a class? Inspect's isclass is too liberal and returns True\n for objects that can't be subclasses of anything."}
{"_id": "q_18497", "text": "Is this path a package directory?\n\n >>> ispackage('nose')\n True\n >>> ispackage('unit_tests')\n False\n >>> ispackage('nose/plugins')\n True\n >>> ispackage('nose/loader.py')\n False"}
{"_id": "q_18498", "text": "Draw a 70-char-wide divider, with label in the middle.\n\n >>> ln('hello there')\n '---------------------------- hello there -----------------------------'"}
{"_id": "q_18499", "text": "Sort key function factory that puts items that match a\n regular expression last.\n\n >>> from nose.config import Config\n >>> from nose.pyversion import sort_list\n >>> c = Config()\n >>> regex = c.testMatch\n >>> entries = ['.', '..', 'a_test', 'src', 'lib', 'test', 'foo.py']\n >>> sort_list(entries, regex_last_key(regex))\n >>> entries\n ['.', '..', 'foo.py', 'lib', 'src', 'a_test', 'test']"}
{"_id": "q_18500", "text": "Make a function imported from module A appear as if it is located\n in module B.\n\n >>> from pprint import pprint\n >>> pprint.__module__\n 'pprint'\n >>> pp = transplant_func(pprint, __name__)\n >>> pp.__module__\n 'nose.util'\n\n The original function is not modified.\n\n >>> pprint.__module__\n 'pprint'\n\n Calling the transplanted function calls the original.\n\n >>> pp([1, 2])\n [1, 2]\n >>> pprint([1,2])\n [1, 2]"}
{"_id": "q_18501", "text": "Return system CPU times as a namedtuple."}
{"_id": "q_18502", "text": "Return network connections opened by a process as a list of\n namedtuples."}
{"_id": "q_18503", "text": "Check if a user is in a certain group.\n By default, the check is skipped for superusers."}
{"_id": "q_18504", "text": "Load a class by a fully qualified class_path,\n eg. myapp.models.ModelName"}
{"_id": "q_18505", "text": "Hook point for overriding how the CounterPool gets its connection to\n AWS."}
{"_id": "q_18506", "text": "Hook point for overriding how the CounterPool determines the schema\n to be used when creating a missing table."}
{"_id": "q_18507", "text": "Hook point for overriding how the CounterPool creates a new table\n in DynamoDB"}
{"_id": "q_18508", "text": "Hook point for overriding how the CounterPool transforms table_name\n into a boto DynamoDB Table object."}
{"_id": "q_18509", "text": "Hook point for overriding how the CounterPool creates a DynamoDB item\n for a given counter when an existing item can't be found."}
{"_id": "q_18510", "text": "Hook point for overriding how the CounterPool fetches a DynamoDB item\n for a given counter."}
{"_id": "q_18511", "text": "A decorator which can be used to mark functions as deprecated."}
{"_id": "q_18512", "text": "Login into Google Docs with user authentication info."}
{"_id": "q_18513", "text": "Parse GDocs key from Spreadsheet url."}
{"_id": "q_18514", "text": "Make sure temp directory exists and create one if it does not."}
{"_id": "q_18515", "text": "Download csv files from GDocs and convert them into po files structure."}
{"_id": "q_18516", "text": "Clear GDoc Spreadsheet by sending empty csv file."}
{"_id": "q_18517", "text": "start a new qtconsole connected to our kernel"}
{"_id": "q_18518", "text": "Check whether the HTML page contains the content or not and return boolean"}
{"_id": "q_18519", "text": "Visit the URL and return the HTTP response code in 'int'"}
{"_id": "q_18520", "text": "Compare the content type header of url param with content_type param and returns boolean \n @param url -> string e.g. http://127.0.0.1/index\n @param content_type -> string e.g. text/html"}
{"_id": "q_18521", "text": "Compare the response code of url param with code param and returns boolean \n @param url -> string e.g. http://127.0.0.1/index\n @param code -> int e.g. 404, 500, 400 ..etc"}
{"_id": "q_18522", "text": "Use an event to build a many-to-one relationship on a class.\n\n This makes use of the :meth:`.References._reference_table` method\n to generate a full foreign key relationship to the remote table."}
{"_id": "q_18523", "text": "Clear the output of the cell receiving output."}
{"_id": "q_18524", "text": "Djeffify data between tags"}
{"_id": "q_18525", "text": "Create a foreign key reference from the local class to the given remote\n table.\n\n Adds column references to the declarative class and adds a\n ForeignKeyConstraint."}
{"_id": "q_18526", "text": "Find absolute path to executable cmd in a cross platform manner.\n\n This function tries to determine the full path to a command line program\n using `which` on Unix/Linux/OS X and `win32api` on Windows. Most of the\n time it will use the version that is first on the users `PATH`. If\n cmd is `python` return `sys.executable`.\n\n Warning, don't use this to find IPython command line programs as there\n is a risk you will find the wrong one. Instead find those using the\n following code and looking for the application itself::\n\n from IPython.utils.path import get_ipython_module_path\n from IPython.utils.process import pycmd2argv\n argv = pycmd2argv(get_ipython_module_path('IPython.frontend.terminal.ipapp'))\n\n Parameters\n ----------\n cmd : str\n The command line program to look for."}
{"_id": "q_18527", "text": "Path join helper method\n Join paths if list passed\n\n :type path: str|unicode|list\n :rtype: str|unicode"}
{"_id": "q_18528", "text": "Read helper method\n\n :type file_path: str|unicode\n :type encoding: str|unicode\n :rtype: str|unicode"}
{"_id": "q_18529", "text": "Helper method for absolute and relative paths resolution\n Split passed path and return each directory parts\n\n example: \"/usr/share/dir\"\n return: [\"usr\", \"share\", \"dir\"]\n\n @type path: one of (unicode, str)\n @rtype: list"}
{"_id": "q_18530", "text": "A base for a flat filename to correspond to this code unit.\n\n Useful for writing files about the code where you want all the files in\n the same directory, but need to differentiate same-named files from\n different directories.\n\n For example, the file a/b/c.py might return 'a_b_c'"}
{"_id": "q_18531", "text": "Does it seem like this file should contain Python?\n\n This is used to decide if a file reported as part of the execution of\n a program was really likely to have contained Python in the first\n place."}
{"_id": "q_18532", "text": "timedelta.total_seconds was added in 2.7"}
{"_id": "q_18533", "text": "Return the result when it arrives.\n\n If `timeout` is not ``None`` and the result does not arrive within\n `timeout` seconds then ``TimeoutError`` is raised. If the\n remote call raised an exception then that exception will be reraised\n by get() inside a `RemoteError`."}
{"_id": "q_18534", "text": "Wait until the result is available or until `timeout` seconds pass.\n\n This method always returns None."}
{"_id": "q_18535", "text": "Get the results as a dict, keyed by engine_id.\n\n timeout behavior is described in `get()`."}
{"_id": "q_18536", "text": "elapsed time since initial submission"}
{"_id": "q_18537", "text": "interactive wait, printing progress at regular intervals"}
{"_id": "q_18538", "text": "republish individual displaypub content dicts"}
{"_id": "q_18539", "text": "wait for the 'status=idle' message that indicates we have all outputs"}
{"_id": "q_18540", "text": "wait for result to complete."}
{"_id": "q_18541", "text": "Creates fully qualified endpoint URIs.\n\n :param parts: the string parts that form the request URI"}
{"_id": "q_18542", "text": "Makes sure we have proper ISO 8601 time.\n\n :param time: either already ISO 8601 a string or datetime.datetime\n :returns: ISO 8601 time\n :rtype: str"}
{"_id": "q_18543", "text": "Returns the given response or raises an APIError for non-2xx responses.\n\n :param requests.Response response: HTTP response\n :returns: requested data\n :rtype: requests.Response\n :raises APIError: for non-2xx responses"}
{"_id": "q_18544", "text": "Colors text with code and given format"}
{"_id": "q_18545", "text": "Registers the given message type in the local database.\n\n Args:\n message: a message.Message, to be registered.\n\n Returns:\n The provided message."}
{"_id": "q_18546", "text": "Return the absolute normalized form of `filename`."}
{"_id": "q_18547", "text": "Prepare the file patterns for use in a `FnmatchMatcher`.\n\n If a pattern starts with a wildcard, it is used as a pattern\n as-is. If it does not start with a wildcard, then it is made\n absolute with the current directory.\n\n If `patterns` is None, an empty list is returned."}
{"_id": "q_18548", "text": "Find the path separator used in this string, or os.sep if none."}
{"_id": "q_18549", "text": "Return the relative form of `filename`.\n\n The filename will be relative to the current directory when the\n `FileLocator` was constructed."}
{"_id": "q_18550", "text": "Get data from `filename` if it is a zip file path.\n\n Returns the string data read from the zip file, or None if no zip file\n could be found or `filename` isn't in it. The data returned will be\n an empty string if the file is empty."}
{"_id": "q_18551", "text": "Does `fpath` match one of our filename patterns?"}
{"_id": "q_18552", "text": "Map `path` through the aliases.\n\n `path` is checked against all of the patterns. The first pattern to\n match is used to replace the root of the path with the result root.\n Only one pattern is ever used. If no patterns match, `path` is\n returned unchanged.\n\n The separator style in the result is made to match that of the result\n in the alias."}
{"_id": "q_18553", "text": "Start a kernel with wx event loop support."}
{"_id": "q_18554", "text": "Start a kernel with the Tk event loop."}
{"_id": "q_18555", "text": "Start the kernel, coordinating with the Cocoa CFRunLoop event loop\n via the matplotlib MacOSX backend."}
{"_id": "q_18556", "text": "Enable integration with a given GUI"}
{"_id": "q_18557", "text": "Compute the eigvals of mat and then find the center eigval difference."}
{"_id": "q_18558", "text": "Initialize the item. This calls the class constructor with the\n appropriate arguments and returns the initialized object.\n\n :param ctxt: The context object.\n :param step_addr: The address of the step in the test\n configuration."}
{"_id": "q_18559", "text": "Parse a YAML file containing test steps.\n\n :param ctxt: The context object.\n :param fname: The name of the file to parse.\n :param key: An optional dictionary key. If specified, the\n file must be a YAML dictionary, and the referenced\n value will be interpreted as a list of steps. If\n not provided, the file must be a YAML list, which\n will be interpreted as the list of steps.\n :param step_addr: The address of the step in the test\n configuration. This may be used in the case\n of includes, for instance.\n\n :returns: A list of ``Step`` objects."}
{"_id": "q_18560", "text": "Parse runtime path representation to list.\n\n :param string string: runtime path string\n :return: list of runtime paths\n :rtype: list of string"}
{"_id": "q_18561", "text": "Create a crash handler, typically setting sys.excepthook to it."}
{"_id": "q_18562", "text": "Load the config file.\n\n By default, errors in loading config are handled, and a warning\n printed on screen. For testing, the suppress_errors option is set\n to False, so errors will make tests fail."}
{"_id": "q_18563", "text": "initialize the profile dir"}
{"_id": "q_18564", "text": "auto generate default config file, and stage it into the profile."}
{"_id": "q_18565", "text": "Add some bundle to build group\n\n :type bundle: static_bundle.bundles.AbstractBundle\n @rtype: BuildGroup"}
{"_id": "q_18566", "text": "Return collected files links\n\n :rtype: list[static_bundle.files.StaticFileResult]"}
{"_id": "q_18567", "text": "Render all includes in asset by names\n\n :type name: str|unicode\n :rtype: str|unicode"}
{"_id": "q_18568", "text": "Return links without build files"}
{"_id": "q_18569", "text": "Write the collected coverage data to a file.\n\n `suffix` is a suffix to append to the base file name. This can be used\n for multiple or parallel execution, so that many coverage data files\n can exist simultaneously. A dot will be used to join the base name and\n the suffix."}
{"_id": "q_18570", "text": "Erase the data, both in this object, and from its file storage."}
{"_id": "q_18571", "text": "Return the map from filenames to lists of line numbers executed."}
{"_id": "q_18572", "text": "Return the map from filenames to lists of line number pairs."}
{"_id": "q_18573", "text": "Write the coverage data to `filename`."}
{"_id": "q_18574", "text": "Combine a number of data files together.\n\n Treat `self.filename` as a file prefix, and combine the data from all\n of the data files starting with that prefix plus a dot.\n\n If `aliases` is provided, it's a `PathAliases` object that is used to\n re-map paths to match the local machine's."}
{"_id": "q_18575", "text": "Add executed line data.\n\n `line_data` is { filename: { lineno: None, ... }, ...}"}
{"_id": "q_18576", "text": "Return a dict summarizing the coverage data.\n\n Keys are based on the filenames, and values are the number of executed\n lines. If `fullpath` is true, then the keys are the full pathnames of\n the files, otherwise they are the basenames of the files."}
{"_id": "q_18577", "text": "Coerce everything to strings.\n All objects representing time get output according to default_date_fmt."}
{"_id": "q_18578", "text": "Initialize the zlogger.\n\n Sets up a rotating file handler to the specified path and file with\n the given size and backup count limits, sets the default\n application_name, server_hostname, and default/whitelist fields.\n\n :param path: path to write the log file\n :param target: name of the log file\n :param logger_name: name of the logger (defaults to root)\n :param level: log level for this logger (defaults to logging.DEBUG)\n :param maxBytes: size of the file before rotation (default 1MB)\n :param application_name: app name to add to each log entry\n :param server_hostname: hostname to add to each log entry\n :param fields: default/whitelist fields.\n :type path: string\n :type target: string\n :type logger_name: string\n :type level: int\n :type maxBytes: int\n :type backupCount: int\n :type application_name: string\n :type server_hostname: string\n :type fields: dict"}
{"_id": "q_18579", "text": "Yield pasted lines until the user enters the given sentinel value."}
{"_id": "q_18580", "text": "Start the mainloop.\n\n If an optional banner argument is given, it will override the\n internally created default banner."}
{"_id": "q_18581", "text": "Store multiple lines as a single entry in history"}
{"_id": "q_18582", "text": "Write a prompt and read a line.\n\n The returned line does not include the trailing newline.\n When the user enters the EOF key sequence, EOFError is raised.\n\n Optional inputs:\n\n - prompt(''): a string to be printed to prompt the user.\n\n - continue_prompt(False): whether this line is the first one or a\n continuation in a sequence of inputs."}
{"_id": "q_18583", "text": "The bottom half of the syntax error handler called in the main loop.\n\n Loop until syntax error is fixed or user cancels."}
{"_id": "q_18584", "text": "Utility routine for edit_syntax_error"}
{"_id": "q_18585", "text": "Handle interactive exit.\n\n This method calls the ask_exit callback."}
{"_id": "q_18586", "text": "Returns the correct repository URL and revision by parsing the given\n repository URL"}
{"_id": "q_18587", "text": "Create and return new frontend attached to new kernel, launched on localhost."}
{"_id": "q_18588", "text": "return the connection info for this object's sockets."}
{"_id": "q_18589", "text": "Convert an object in R's namespace to one suitable\n for ipython's namespace.\n\n For a data.frame, it tries to return a structured array.\n It first checks for colnames, then names.\n If all are NULL, it returns np.asarray(Robj), else\n it tries to construct a recarray\n\n Parameters\n ----------\n\n Robj: an R object returned from rpy2"}
{"_id": "q_18590", "text": "Toggle between the currently active color scheme and NoColor."}
{"_id": "q_18591", "text": "Return a color formatted string with the traceback info.\n\n Parameters\n ----------\n etype : exception type\n Type of the exception raised.\n\n value : object\n Data stored in the exception\n\n elist : list\n List of frames, see class docstring for details.\n\n tb_offset : int, optional\n Number of frames in the traceback to skip. If not given, the\n instance value is used (set in constructor).\n\n context : int, optional\n Number of lines of context information to print.\n\n Returns\n -------\n String with formatted exception."}
{"_id": "q_18592", "text": "Format a list of traceback entry tuples for printing.\n\n Given a list of tuples as returned by extract_tb() or\n extract_stack(), return a list of strings ready for printing.\n Each string in the resulting list corresponds to the item with the\n same index in the argument list. Each string ends in a newline;\n the strings may contain internal newlines as well, for those items\n whose source text line is not None.\n\n Lifted almost verbatim from traceback.py"}
{"_id": "q_18593", "text": "Format the exception part of a traceback.\n\n The arguments are the exception type and value such as given by\n sys.exc_info()[:2]. The return value is a list of strings, each ending\n in a newline. Normally, the list contains a single string; however,\n for SyntaxError exceptions, it contains several lines that (when\n printed) display detailed information about where the syntax error\n occurred. The message indicating which exception occurred is the\n always last string in the list.\n\n Also lifted nearly verbatim from traceback.py"}
{"_id": "q_18594", "text": "Only print the exception type and message, without a traceback.\n\n Parameters\n ----------\n etype : exception type\n value : exception value"}
{"_id": "q_18595", "text": "Switch to the desired mode.\n\n If mode is not specified, cycles through the available modes."}
{"_id": "q_18596", "text": "View decorator for requiring a user group."}
{"_id": "q_18597", "text": "Initialize the model for a Pyramid app.\n\n Activate this setup using ``config.include('baka_model')``."}
{"_id": "q_18598", "text": "Handle 'from module import a, b, c' imports."}
{"_id": "q_18599", "text": "Add a line of source to the code.\n\n Don't include indentations or newlines."}
{"_id": "q_18600", "text": "Generate a Python expression for `expr`."}
{"_id": "q_18601", "text": "Render this template by applying it to `context`.\n\n `context` is a dictionary of values to use in this rendering."}
{"_id": "q_18602", "text": "Evaluate dotted expressions at runtime."}
{"_id": "q_18603", "text": "A shortcut function to render a partial template with context and return\r\n the output."}
{"_id": "q_18604", "text": "Activate the default formatters."}
{"_id": "q_18605", "text": "Add a format function for a type specified by the full dotted\n module and name of the type, rather than the type of the object.\n\n Parameters\n ----------\n type_module : str\n The full dotted name of the module the type is defined in, like\n ``numpy``.\n type_name : str\n The name of the type (the class name), like ``dtype``\n func : callable\n The callable that will be called to compute the format data. The\n call signature of this function is simple, it must take the\n object to be formatted and return the raw data for the given\n format. Subclasses may use a different call signature for the\n `func` argument."}
{"_id": "q_18606", "text": "Return absolute and relative path for file\n\n :type root_path: str|unicode\n :type file_name: str|unicode\n :type input_dir: str|unicode\n :rtype: tuple"}
{"_id": "q_18607", "text": "Configure logging for nose, or optionally other packages. Any logger\n name may be set with the debug option, and that logger will be set to\n debug level and be assigned the same handler as the nose loggers, unless\n it already has a handler."}
{"_id": "q_18608", "text": "Configure the working directory or directories for the test run."}
{"_id": "q_18609", "text": "Very dumb 'pager' in Python, for when nothing else works.\n\n Only moves forward, same interface as page(), except for pager_cmd and\n mode."}
{"_id": "q_18610", "text": "Page a file, using an optional pager command and starting line."}
{"_id": "q_18611", "text": "Return a pager command.\n\n Makes some attempts at finding an OS-correct one."}
{"_id": "q_18612", "text": "Return the string for paging files with an offset.\n\n This is the '+N' argument which less and more (under Unix) accept."}
{"_id": "q_18613", "text": "A function to pretty print sympy Basic objects."}
{"_id": "q_18614", "text": "A function to display sympy expression using display style LaTeX in PNG."}
{"_id": "q_18615", "text": "Return True if type o can be printed with LaTeX.\n\n If o is a container type, this is True if and only if every element of o\n can be printed with LaTeX."}
{"_id": "q_18616", "text": "A function to generate the latex representation of sympy\n expressions."}
{"_id": "q_18617", "text": "Non-camel-case version of func name for backwards compatibility.\n\n .. warning ::\n\n DEPRECATED: Do not use this method,\n use :meth:`options <nose.plugins.base.IPluginInterface.options>`\n instead."}
{"_id": "q_18618", "text": "Validate that the input is a list of strings.\n\n Raises ValueError if not."}
{"_id": "q_18619", "text": "Validate that the input is a dict with string keys and values.\n\n Raises ValueError if not."}
{"_id": "q_18620", "text": "Run my loop, ignoring EINTR events in the poller"}
{"_id": "q_18621", "text": "callback for stream.on_recv\n \n unpacks message, and calls handlers with it."}
{"_id": "q_18622", "text": "Execute code in the kernel.\n\n Parameters\n ----------\n code : str\n A string of Python code.\n\n silent : bool, optional (default False)\n If set, the kernel will execute the code as quietly as possible.\n\n user_variables : list, optional\n A list of variable names to pull from the user's namespace. They\n will come back as a dict with these names as keys and their\n :func:`repr` as values.\n\n user_expressions : dict, optional\n A dict with string keys and expressions to pull from the user's\n namespace. They will come back as a dict with these names as keys\n and their :func:`repr` as values.\n\n allow_stdin : bool, optional\n Flag for \n\n Returns\n -------\n The msg_id of the message sent."}
{"_id": "q_18623", "text": "Tab complete text in the kernel's namespace.\n\n Parameters\n ----------\n text : str\n The text to complete.\n line : str\n The full line of text that is the surrounding context for the\n text to complete.\n cursor_pos : int\n The position of the cursor in the line where the completion was\n requested.\n block : str, optional\n The full block of code in which the completion is being requested.\n\n Returns\n -------\n The msg_id of the message sent."}
{"_id": "q_18624", "text": "Get metadata information about an object.\n\n Parameters\n ----------\n oname : str\n A string specifying the object name.\n detail_level : int, optional\n The level of detail for the introspection (0-2)\n\n Returns\n -------\n The msg_id of the message sent."}
{"_id": "q_18625", "text": "Request an immediate kernel shutdown.\n\n Upon receipt of the (empty) reply, client code can safely assume that\n the kernel has shut down and it's safe to forcefully terminate it if\n it's still alive.\n\n The kernel will send the reply via a function registered with Python's\n atexit module, ensuring it's truly done as the kernel is done with all\n normal operation."}
{"_id": "q_18626", "text": "Immediately processes all pending messages on the SUB channel.\n\n Callers should use this method to ensure that :method:`call_handlers`\n has been called for all messages that have been received on the\n 0MQ SUB socket of this channel.\n\n This method is thread safe.\n\n Parameters\n ----------\n timeout : float, optional\n The maximum amount of time to spend flushing, in seconds. The\n default is one second."}
{"_id": "q_18627", "text": "Send a string of raw input to the kernel."}
{"_id": "q_18628", "text": "Starts the channels for this kernel.\n\n This will create the channels if they do not exist and then start\n them. If port numbers of 0 are being used (random ports) then you\n must first call :method:`start_kernel`. If the channels have been\n stopped and you call this, :class:`RuntimeError` will be raised."}
{"_id": "q_18629", "text": "Are any of the channels created and running?"}
{"_id": "q_18630", "text": "load connection info from JSON dict in self.connection_file"}
{"_id": "q_18631", "text": "Attempts to the stop the kernel process cleanly. If the kernel\n cannot be stopped, it is killed, if possible."}
{"_id": "q_18632", "text": "Restarts a kernel with the arguments that were used to launch it.\n\n If the old kernel was launched with random ports, the same ports will be\n used for the new kernel.\n\n Parameters\n ----------\n now : bool, optional\n If True, the kernel is forcefully restarted *immediately*, without\n having a chance to do any cleanup action. Otherwise the kernel is\n given 1s to clean up before a forceful restart is issued.\n\n In all cases the kernel is restarted, the only difference is whether\n it is given a chance to perform a clean shutdown or not.\n\n **kw : optional\n Any options specified here will replace those used to launch the\n kernel."}
{"_id": "q_18633", "text": "Kill the running kernel."}
{"_id": "q_18634", "text": "Interrupts the kernel. Unlike ``signal_kernel``, this operation is\n well supported on all platforms."}
{"_id": "q_18635", "text": "Sends a signal to the kernel. Note that since only SIGTERM is\n supported on Windows, this function is only useful on Unix systems."}
{"_id": "q_18636", "text": "Get the REQ socket channel object to make requests of the kernel."}
{"_id": "q_18637", "text": "Get the SUB socket channel object."}
{"_id": "q_18638", "text": "Get the heartbeat socket channel object to check that the\n kernel is alive."}
{"_id": "q_18639", "text": "Bind an Engine's Kernel to be used as a full IPython kernel.\n \n This allows a running Engine to be used simultaneously as a full IPython kernel\n with the QtConsole or other frontends.\n \n This function returns immediately."}
{"_id": "q_18640", "text": "Emit a debugging message depending on the debugging level.\n\n :param level: The debugging level.\n :param message: The message to emit."}
{"_id": "q_18641", "text": "Retrieve the extension classes in priority order.\n\n :returns: A list of extension classes, in proper priority\n order."}
{"_id": "q_18642", "text": "Called prior to executing a step.\n\n :param ctxt: An instance of ``timid.context.Context``.\n :param step: An instance of ``timid.steps.Step`` describing\n the step to be executed.\n :param idx: The index of the step in the list of steps.\n\n :returns: A ``True`` value if the step is to be skipped,\n ``False`` otherwise."}
{"_id": "q_18643", "text": "Called at the end of processing. This call allows extensions to\n emit any additional data, such as timing information, prior to\n ``timid``'s exit. Extensions may also alter the return value.\n\n :param ctxt: An instance of ``timid.context.Context``.\n :param result: The return value of the basic ``timid`` call,\n or an ``Exception`` instance if an exception\n was raised. Without the extension, this would\n be passed directly to ``sys.exit()``.\n\n :returns: The final result."}
{"_id": "q_18644", "text": "Adds an EnumDescriptor to the pool.\n\n This method also registers the FileDescriptor associated with the message.\n\n Args:\n enum_desc: An EnumDescriptor."}
{"_id": "q_18645", "text": "Loads the named enum descriptor from the pool.\n\n Args:\n full_name: The full name of the enum descriptor to load.\n\n Returns:\n The enum descriptor for the named type."}
{"_id": "q_18646", "text": "Make a protobuf EnumDescriptor given an EnumDescriptorProto protobuf.\n\n Args:\n enum_proto: The descriptor_pb2.EnumDescriptorProto protobuf message.\n package: Optional package name for the new message EnumDescriptor.\n file_desc: The file containing the enum descriptor.\n containing_type: The type containing this enum.\n scope: Scope containing available types.\n\n Returns:\n The added descriptor"}
{"_id": "q_18647", "text": "Creates a field descriptor from a FieldDescriptorProto.\n\n For message and enum type fields, this method will do a look up\n in the pool for the appropriate descriptor for that type. If it\n is unavailable, it will fall back to the _source function to\n create it. If this type is still unavailable, construction will\n fail.\n\n Args:\n field_proto: The proto describing the field.\n message_name: The name of the containing message.\n index: Index of the field\n is_extension: Indication that this field is for an extension.\n\n Returns:\n An initialized FieldDescriptor object"}
{"_id": "q_18648", "text": "Check whether module possibly uses unsafe-for-zipfile stuff"}
{"_id": "q_18649", "text": "Create and run the IPython controller"}
{"_id": "q_18650", "text": "save a connection dict to json file."}
{"_id": "q_18651", "text": "load config from existing json connector files."}
{"_id": "q_18652", "text": "secondary config, loading from JSON and setting defaults"}
{"_id": "q_18653", "text": "Get a ``sqlalchemy.orm.Session`` instance backed by a transaction.\n\n This function will hook the session to the transaction manager which\n will take care of committing any changes.\n\n - When using pyramid_tm it will automatically be committed or aborted\n depending on whether an exception is raised.\n\n - When using scripts you should wrap the session in a manager yourself.\n For example::\n\n import transaction\n\n engine = get_engine(settings)\n session_factory = get_session_factory(engine)\n with transaction.manager:\n dbsession = get_tm_session(session_factory, transaction.manager)"}
{"_id": "q_18654", "text": "Enable %autopx mode by saving the original run_cell and installing\n pxrun_cell."}
{"_id": "q_18655", "text": "Disable %autopx by restoring the original InteractiveShell.run_cell."}
{"_id": "q_18656", "text": "drop-in replacement for InteractiveShell.run_cell.\n\n This executes code remotely, instead of in the local namespace.\n\n See InteractiveShell.run_cell for details."}
{"_id": "q_18657", "text": "Patch the protocol's makeConnection and connectionLost methods to make the\n protocol and its transport behave more like what `Agent` expects.\n\n While `Agent` is the driving force behind this, other clients and servers\n will no doubt have similar requirements."}
{"_id": "q_18658", "text": "Patch a method onto an object if it isn't already there."}
{"_id": "q_18659", "text": "Accept a pending connection."}
{"_id": "q_18660", "text": "Reject a pending connection."}
{"_id": "q_18661", "text": "Calls pre and post save hooks."}
{"_id": "q_18662", "text": "Use SaveHookMixin pre_save to set the user."}
{"_id": "q_18663", "text": "Check whether some modules need to be reloaded."}
{"_id": "q_18664", "text": "Open the default editor at the given filename and linenumber.\n\n This is IPython's default editor hook, you can use it as an example to\n write your own modified one. To set your own editor function as the\n new editor hook, call ip.set_hook('editor',yourfunc)."}
{"_id": "q_18665", "text": "Get text from the clipboard."}
{"_id": "q_18666", "text": "Add a func to the cmd chain with given priority"}
{"_id": "q_18667", "text": "Configure which kinds of exceptions trigger plugin."}
{"_id": "q_18668", "text": "Import and return bar given the string foo.bar."}
{"_id": "q_18669", "text": "Forces a flush from the internal queue to the server"}
{"_id": "q_18670", "text": "Try passwordless login with shell ssh command."}
{"_id": "q_18671", "text": "Try passwordless login with paramiko."}
{"_id": "q_18672", "text": "Connect a socket to an address via an ssh tunnel.\n\n This is a wrapper for socket.connect(addr), when addr is not accessible\n from the local machine. It simply creates an ssh tunnel using the remaining args,\n and calls socket.connect('tcp://localhost:lport') where lport is the randomly\n selected local port of the tunnel."}
{"_id": "q_18673", "text": "unwrap exception, and remap engine_id to int."}
{"_id": "q_18674", "text": "Register a new engine, and update our connection info."}
{"_id": "q_18675", "text": "Flush task or queue results waiting in ZMQ queue."}
{"_id": "q_18676", "text": "Flush replies from the iopub channel waiting\n in the ZMQ queue."}
{"_id": "q_18677", "text": "target func for use in spin_thread"}
{"_id": "q_18678", "text": "Flush any registration notifications and execution results\n waiting in the ZMQ queue."}
{"_id": "q_18679", "text": "construct and send an apply message via a socket.\n\n This is the principal method with which all engine execution is performed by views."}
{"_id": "q_18680", "text": "construct and send an execute request via a socket."}
{"_id": "q_18681", "text": "Retrieve a result by msg_id or history index, wrapped in an AsyncResult object.\n\n If the client already has the results, no request to the Hub will be made.\n\n This is a convenient way to construct AsyncResult objects, which are wrappers\n that include metadata about execution, and allow for awaiting results that\n were not submitted by this Client.\n\n It can also be a convenient way to retrieve the metadata associated with\n blocking execution, since it always retrieves\n\n Examples\n --------\n ::\n\n In [10]: r = client.apply()\n\n Parameters\n ----------\n\n indices_or_msg_ids : integer history index, str msg_id, or list of either\n The indices or msg_ids of indices to be retrieved\n\n block : bool\n Whether to wait for the result to be done\n\n Returns\n -------\n\n AsyncResult\n A single AsyncResult object will always be returned.\n\n AsyncHubResult\n A subclass of AsyncResult that retrieves results from the Hub"}
{"_id": "q_18682", "text": "Return a set of opcodes by the names in `names`."}
{"_id": "q_18683", "text": "Parse the source to find the interesting facts about its lines.\n\n A handful of member fields are updated."}
{"_id": "q_18684", "text": "Return the first line number of the statement including `line`."}
{"_id": "q_18685", "text": "Map the line numbers in `lines` to the correct first line of the\n statement.\n\n Skip any line mentioned in any of the sequences in `ignores`.\n\n Returns a set of the first lines."}
{"_id": "q_18686", "text": "Get information about the arcs available in the code.\n\n Returns a sorted list of line number pairs. Line numbers have been\n normalized to the first line of multiline statements."}
{"_id": "q_18687", "text": "Get a mapping from line numbers to count of exits from that line.\n\n Excluded lines are excluded."}
{"_id": "q_18688", "text": "Iterate over all the code objects nested within this one.\n\n The iteration includes `self` as its first value."}
{"_id": "q_18689", "text": "Map byte offsets to line numbers in `code`.\n\n Uses co_lnotab described in Python/compile.c to map byte offsets to\n line numbers. Produces a sequence: (b0, l0), (b1, l1), ...\n\n Only byte offsets that correspond to line numbers are included in the\n results."}
{"_id": "q_18690", "text": "Validate the rule that chunks have a single entrance."}
{"_id": "q_18691", "text": "Returns a list of `Chunk` objects for this code and its children.\n\n See `_split_into_chunks` for details."}
{"_id": "q_18692", "text": "Get the set of all arcs in this code object and its children.\n\n See `_arcs` for details."}
{"_id": "q_18693", "text": "Use all possible decompressors to make the stream"}
{"_id": "q_18694", "text": "Add options to command line."}
{"_id": "q_18695", "text": "If inclusive coverage enabled, return true for all source files\n in wanted packages."}
{"_id": "q_18696", "text": "Generate alternative interpretations of a source distro name\n\n Note: if `location` is a filesystem filename, you should call\n ``pkg_resources.normalize_path()`` on it before passing it to this\n routine!"}
{"_id": "q_18697", "text": "Open a urllib2 request, handling HTTP authentication"}
{"_id": "q_18698", "text": "Obtain a distribution suitable for fulfilling `requirement`\n\n `requirement` must be a ``pkg_resources.Requirement`` instance.\n If necessary, or if the `force_scan` flag is set, the requirement is\n searched for in the (online) package index as well as the locally\n installed packages. If a distribution matching `requirement` is found,\n the returned distribution's ``location`` is the value you would have\n gotten from calling the ``download()`` method with the matching\n distribution's URL or filename. If no matching distribution is found,\n ``None`` is returned.\n\n If the `source` flag is set, only source distributions and source\n checkout links will be considered. Unless the `develop_ok` flag is\n set, development and system eggs (i.e., those using the ``.egg-info``\n format) will be ignored."}
{"_id": "q_18699", "text": "get parent from obj."}
{"_id": "q_18700", "text": "Manage a Marv site"}
{"_id": "q_18701", "text": "this is a property, in case the handler is created\n before the engine gets registered with an id"}
{"_id": "q_18702", "text": "renders context aware template"}
{"_id": "q_18703", "text": "Add a function to the function list, in order."}
{"_id": "q_18704", "text": "Return the mapping of a document according to the function list."}
{"_id": "q_18705", "text": "Re-reduce a set of values, with a list of rereduction functions."}
{"_id": "q_18706", "text": "Validate...this function is undocumented, but still in CouchDB."}
{"_id": "q_18707", "text": "The main function called to handle a request."}
{"_id": "q_18708", "text": "Log an event on the CouchDB server."}
{"_id": "q_18709", "text": "Turn a list into a list of lists"}
{"_id": "q_18710", "text": "Generates a universally unique ID.\n Any arguments only create more randomness."}
{"_id": "q_18711", "text": "Convert a hex color to rgb integer tuple."}
{"_id": "q_18712", "text": "Construct the keys to be used building the base stylesheet\n from a template."}
{"_id": "q_18713", "text": "revoke_token removes the access token from the data_store"}
{"_id": "q_18714", "text": "_validate_request_code - internal method for verifying the given nonce.\n also removes the nonce from the data_store, as they are intended for\n one-time use."}
{"_id": "q_18715", "text": "_generate_token - internal function for generating randomized alphanumeric\n strings of a given length"}
{"_id": "q_18716", "text": "Return a font of the requested family, using fallback as alternative.\n\n If a fallback is provided, it is used in case the requested family isn't\n found. If no fallback is given, no alternative is chosen and Qt's internal\n algorithms may automatically choose a fallback font.\n\n Parameters\n ----------\n family : str\n A font name.\n fallback : str\n A font name.\n\n Returns\n -------\n font : QFont object"}
{"_id": "q_18717", "text": "Implemented to handle history tail replies, which are only supported\n by the IPython kernel."}
{"_id": "q_18718", "text": "Reimplemented for IPython-style \"display hook\"."}
{"_id": "q_18719", "text": "Reimplemented to make a history request and load %guiref."}
{"_id": "q_18720", "text": "Reimplemented for IPython-style traceback formatting."}
{"_id": "q_18721", "text": "Reimplemented to dispatch payloads to handler methods."}
{"_id": "q_18722", "text": "Sets the widget style to the class defaults.\n\n Parameters:\n -----------\n colors : str, optional (default lightbg)\n Whether to use the default IPython light background or dark\n background or B&W style."}
{"_id": "q_18723", "text": "Opens a Python script for editing.\n\n Parameters:\n -----------\n filename : str\n A path to a local system file.\n\n line : int, optional\n A line of interest in the file."}
{"_id": "q_18724", "text": "Set the style sheets of the underlying widgets."}
{"_id": "q_18725", "text": "Set the style for the syntax highlighter."}
{"_id": "q_18726", "text": "Handles the response returned from the CloudStack API. Some CloudStack APIs are implemented asynchronously, which\n means that the API call returns just a job id. The actual API response is postponed and a specific\n asyncJobResults API has to be polled using the job id to get the final result once the API call has been\n processed.\n\n :param response: The response returned by the aiohttp call.\n :type response: aiohttp.client_reqrep.ClientResponse\n :param await_final_result: Specifier that indicates whether the function should poll the asyncJobResult API\n until the asynchronous API call has been processed\n :type await_final_result: bool\n :return: Dictionary containing the JSON response of the API call\n :rtype: dict"}
{"_id": "q_18727", "text": "According to the CloudStack documentation, each request needs to be signed in order to authenticate the user\n account executing the API command. The signature is generated using a combination of the api secret and a SHA-1\n hash of the url parameters including the command string. In order to generate a unique identifier, the url\n parameters have to be transformed to lower case and ordered alphabetically.\n\n :param url_parameters: The url parameters of the API call including the command string\n :type url_parameters: dict\n :return: The url parameters including a new key, which contains the signature\n :rtype: dict"}
{"_id": "q_18728", "text": "Each CloudStack API call returns a nested dictionary structure. The first level contains only one key indicating\n the API that originated the response. This function removes that first level from the data returned to the\n caller.\n\n :param data: Response of the API call\n :type data: dict\n :return: Simplified response without the information about the API that originated the response.\n :rtype: dict"}
{"_id": "q_18729", "text": "System virtual memory as a namedtuple."}
{"_id": "q_18730", "text": "Return system per-CPU times as a named tuple"}
{"_id": "q_18731", "text": "Return real, effective and saved group ids."}
{"_id": "q_18732", "text": "Return the number of threads belonging to the process."}
{"_id": "q_18733", "text": "Return files opened by process as a list of namedtuples."}
{"_id": "q_18734", "text": "Return dict describing the context of this package\n\n Parameters\n ----------\n pkg_path : str\n path containing __init__.py for package\n\n Returns\n -------\n context : dict\n with named parameters of interest"}
{"_id": "q_18735", "text": "Return useful information about IPython and the system, as a string.\n\n Example\n -------\n In [2]: print sys_info()\n {'commit_hash': '144fdae', # random\n 'commit_source': 'repository',\n 'ipython_path': '/home/fperez/usr/lib/python2.6/site-packages/IPython',\n 'ipython_version': '0.11.dev',\n 'os_name': 'posix',\n 'platform': 'Linux-2.6.35-22-generic-i686-with-Ubuntu-10.10-maverick',\n 'sys_executable': '/usr/bin/python',\n 'sys_platform': 'linux2',\n 'sys_version': '2.6.6 (r266:84292, Sep 15 2010, 15:52:39) \\\\n[GCC 4.4.5]'}"}
{"_id": "q_18736", "text": "Return the number of active CPUs on a Darwin system."}
{"_id": "q_18737", "text": "Return the effective number of CPUs in the system as an integer.\n\n This cross-platform function makes an attempt at finding the total number of\n available CPUs in the system, as returned by various underlying system and\n python calls.\n\n If it can't find a sensible answer, it returns 1 (though an error *may* make\n it return a large positive number that's actually incorrect)."}
{"_id": "q_18738", "text": "Fetches a single row from the cursor."}
{"_id": "q_18739", "text": "this function will be called on the engines"}
{"_id": "q_18740", "text": "Read a .py notebook from a string and return the NotebookNode object."}
{"_id": "q_18741", "text": "Read a notebook from a string and return the NotebookNode object.\n\n This function properly handles notebooks of any version. The notebook\n returned will always be in the current version's format.\n\n Parameters\n ----------\n s : unicode\n The raw unicode string to read the notebook from.\n format : (u'json', u'ipynb', u'py')\n The format that the string is in.\n\n Returns\n -------\n nb : NotebookNode\n The notebook that was read."}
{"_id": "q_18742", "text": "Convert to a notebook having notebook metadata."}
{"_id": "q_18743", "text": "Helps us validate the parameters for the request\n\n :param valid_options: a list of strings of valid options for the\n api request\n :param params: a dict, the key-value store which we really only care about\n the key which tells us what the user is using for the\n API request\n\n :returns: None or throws an exception if the validation fails"}
{"_id": "q_18744", "text": "Does the name match my requirements?\n\n To match, a name must match config.testMatch OR config.include\n and it must not match config.exclude"}
{"_id": "q_18745", "text": "Is the class a wanted test class?\n\n A class must be a unittest.TestCase subclass, or match test name\n requirements. Classes that start with _ are always excluded."}
{"_id": "q_18746", "text": "Is the file a wanted test file?\n\n The file must be a python source file and match testMatch or\n include, and not match exclude. Files that match ignore are *never*\n wanted, regardless of plugin, testMatch, include or exclude settings."}
{"_id": "q_18747", "text": "Is the method a test method?"}
{"_id": "q_18748", "text": "Is the module a test module?\n\n The tail of the module name must match test requirements. One exception:\n we always want __main__."}
{"_id": "q_18749", "text": "Get current datetime for every file."}
{"_id": "q_18750", "text": "List command to use if we have a newer pydb installed"}
{"_id": "q_18751", "text": "The printing (as opposed to the parsing part of a 'list'\n command."}
{"_id": "q_18752", "text": "Generates a multiplying factor used to convert two currencies"}
{"_id": "q_18753", "text": "Converts an amount of money from one currency to another on a specified date."}
{"_id": "q_18754", "text": "Return the given stream's encoding or a default.\n\n There are cases where sys.std* might not actually be a stream, so\n check for the encoding attribute prior to returning it, and return\n a default if it doesn't exist or evaluates as False. `default'\n is None if not provided."}
{"_id": "q_18755", "text": "run your main spider here;\n as for the branch spider result data, you can return everything or do whatever you want with it\n in your own code\n\n :return: None"}
{"_id": "q_18756", "text": "Read version info from a file without importing it"}
{"_id": "q_18757", "text": "start the heart beating"}
{"_id": "q_18758", "text": "Redirect input streams and set a display hook."}
{"_id": "q_18759", "text": "Create the Kernel object itself"}
{"_id": "q_18760", "text": "construct connection function, which handles tunnels."}
{"_id": "q_18761", "text": "Returns the root if this is a nested type, or itself if it's the root."}
{"_id": "q_18762", "text": "Searches for the specified method, and returns its descriptor."}
{"_id": "q_18763", "text": "Define the command line options for the plugin."}
{"_id": "q_18764", "text": "Check if directory is eligible for test discovery"}
{"_id": "q_18765", "text": "Converts protobuf message to JSON format.\n\n Args:\n message: The protocol buffers message instance to serialize.\n including_default_value_fields: If True, singular primitive fields,\n repeated fields, and map fields will always be serialized. If\n False, only serialize non-empty fields. Singular message fields\n and oneof fields are not affected by this option.\n\n Returns:\n A string containing the JSON formatted protocol buffer message."}
{"_id": "q_18766", "text": "Parses a JSON representation of a protocol message into a message.\n\n Args:\n text: Message JSON representation.\n message: A protocol buffer message to merge into.\n\n Returns:\n The same message passed as argument.\n\n Raises:\n ParseError: On JSON parsing problems."}
{"_id": "q_18767", "text": "Convert a JSON object into a message.\n\n Args:\n value: A JSON object.\n message: A WKT or regular protocol message to record the data.\n\n Raises:\n ParseError: In case of convert problems."}
{"_id": "q_18768", "text": "Convert a JSON representation into Value message."}
{"_id": "q_18769", "text": "Convert a JSON representation into Struct message."}
{"_id": "q_18770", "text": "Return true if 'ext' links to a dynamic lib in the same package"}
{"_id": "q_18771", "text": "call each func from func list.\n\n return the last func value or None if func list is empty."}
{"_id": "q_18772", "text": "ensure there is only one newline between usage and the first heading\n if there is no description"}
{"_id": "q_18773", "text": "Update config options with the provided dictionary of options."}
{"_id": "q_18774", "text": "initialize the app"}
{"_id": "q_18775", "text": "Get the pid from the pid file.\n\n If the pid file doesn't exist a :exc:`PIDFileError` is raised."}
{"_id": "q_18776", "text": "Construct an argument parser using the function decorations."}
{"_id": "q_18777", "text": "Find the real name of the magic."}
{"_id": "q_18778", "text": "Highlight a block of text. Reimplemented to highlight selectively."}
{"_id": "q_18779", "text": "Reimplemented to temporarily enable highlighting if disabled."}
{"_id": "q_18780", "text": "Reimplemented to highlight selectively."}
{"_id": "q_18781", "text": "Copy the currently selected text to the clipboard, removing prompts."}
{"_id": "q_18782", "text": "Called immediately after a prompt is finished, i.e. when some input\n will be processed and a new prompt displayed."}
{"_id": "q_18783", "text": "Reimplemented for auto-indentation."}
{"_id": "q_18784", "text": "Handle replies for tab completion."}
{"_id": "q_18785", "text": "Silently execute `expr` in the kernel and call `callback` with reply\n\n the `expr` is evaluated silently in the kernel (without output in\n the frontend). Call `callback` with the\n `repr <http://docs.python.org/library/functions.html#repr> `_ as first argument\n\n Parameters\n ----------\n expr : string\n valid string to be executed by the kernel.\n callback : function\n function accepting one argument, as a string. The string will be\n the `repr` of the result of evaluating `expr`\n\n The `callback` is called with the `repr()` of the result of `expr` as\n first argument. To get the object, do `eval()` on the passed value.\n\n See Also\n --------\n _handle_exec_callback : private method, deal with calling callback with reply"}
{"_id": "q_18786", "text": "Execute `callback` corresponding to `msg` reply, after ``_silent_exec_callback``\n\n Parameters\n ----------\n msg : raw message send by the kernel containing an `user_expressions`\n and having a 'silent_exec_callback' kind.\n\n Notes\n -----\n This function will look for a `callback` associated with the\n corresponding message id. Association has been made by\n `_silent_exec_callback`. `callback` is then called with the `repr()`\n of the value of corresponding `user_expressions` as argument.\n `callback` is then removed from the known list so that any message\n coming again with the same id won't trigger it."}
{"_id": "q_18787", "text": "Handles replies for code execution."}
{"_id": "q_18788", "text": "Handle requests for raw_input."}
{"_id": "q_18789", "text": "Handle shutdown signal, only if from other console."}
{"_id": "q_18790", "text": "Attempts to execute file with 'path'. If 'hidden', no output is\n shown."}
{"_id": "q_18791", "text": "Attempts to interrupt the running kernel.\n \n Also unsets _reading flag, to avoid runtime errors\n if raw_input is called again."}
{"_id": "q_18792", "text": "Resets the widget to its initial state if ``clear`` parameter or\n ``clear_on_kernel_restart`` configuration setting is True, otherwise\n prints a visual indication of the fact that the kernel restarted, but\n does not clear the traces from previous usage of the kernel before it\n was restarted. With ``clear=True``, it is similar to ``%clear``, but\n also re-writes the banner and aborts execution if necessary."}
{"_id": "q_18793", "text": "Process a reply for an execution request that resulted in an error."}
{"_id": "q_18794", "text": "Process a reply for a successful execution request."}
{"_id": "q_18795", "text": "Called whenever the document's content changes. Display a call tip\n if appropriate."}
{"_id": "q_18796", "text": "Add plugin to my list of plugins to call, if it has the attribute\n I'm bound to."}
{"_id": "q_18797", "text": "Call all plugins, returning the first non-None result."}
{"_id": "q_18798", "text": "Load plugins by iterating the `nose.plugins` entry point."}
{"_id": "q_18799", "text": "Render LaTeX to HTML with embedded PNG data using data URIs.\n\n Parameters\n ----------\n s : str\n The raw string containing valid inline LaTeX.\n alt : str\n The alt text to use for the HTML."}
{"_id": "q_18800", "text": "Given a math expression, renders it in a closely-clipped bounding\n box to an image file.\n\n *s*\n A math expression. The math portion should be enclosed in\n dollar signs.\n\n *filename_or_obj*\n A filepath or writable file-like object to write the image data\n to.\n\n *prop*\n If provided, a FontProperties() object describing the size and\n style of the text.\n\n *dpi*\n Override the output dpi, otherwise use the default associated\n with the output format.\n\n *format*\n The output format, eg. 'svg', 'pdf', 'ps' or 'png'. If not\n provided, will be deduced from the filename."}
{"_id": "q_18801", "text": "Completes measuring time interval and updates counter."}
{"_id": "q_18802", "text": "Return a generator yielding a Process class instance for all\n running processes on the local machine.\n\n Every new Process instance is only created once and then cached\n into an internal table which is updated every time this is used.\n\n The sorting order in which processes are yielded is based on\n their PIDs."}
{"_id": "q_18803", "text": "Utility method returning process information as a hashable\n dictionary.\n\n If 'attrs' is specified it must be a list of strings reflecting\n available Process class's attribute names (e.g. ['get_cpu_times',\n 'name']) else all public (read only) attributes are assumed.\n\n 'ad_value' is the value which gets assigned to a dict key in case\n AccessDenied exception is raised when retrieving that particular\n process information."}
{"_id": "q_18804", "text": "The process name."}
{"_id": "q_18805", "text": "The process executable path. May also be an empty string."}
{"_id": "q_18806", "text": "Return a float representing the current process CPU\n utilization as a percentage.\n\n When interval is > 0.0 compares process times to system CPU\n times elapsed before and after the interval (blocking).\n\n When interval is 0.0 or None compares process times to system CPU\n times elapsed since last call, returning immediately.\n In this case it is recommended for accuracy that this function be\n called with at least 0.1 seconds between calls."}
{"_id": "q_18807", "text": "Return process's mapped memory regions as a list of namedtuples\n whose fields are variable depending on the platform.\n\n If 'grouped' is True the mapped regions with the same 'path'\n are grouped together and the different memory fields are summed.\n\n If 'grouped' is False every mapped region is shown as a single\n entity and the namedtuple will also include the mapped region's\n address space ('addr') and permission set ('perms')."}
{"_id": "q_18808", "text": "Return whether this process is running."}
{"_id": "q_18809", "text": "Resume process execution."}
{"_id": "q_18810", "text": "Wait for process to terminate and, if the process is a child\n of the current one, also return its exit code, else None."}
{"_id": "q_18811", "text": "Converts Duration to string format.\n\n Returns:\n A string converted from self. The string format will contain\n 3, 6, or 9 fractional digits depending on the precision required to\n represent the exact Duration value. For example: \"1s\", \"1.010s\",\n \"1.000000100s\", \"-3.100s\""}
{"_id": "q_18812", "text": "Initializes the kernel inside GTK.\n \n This is meant to run only once at startup, so it does its job and\n returns False to ensure it doesn't get run again by GTK."}
{"_id": "q_18813", "text": "Hijack a few key functions in GTK for IPython integration.\n\n Modifies pyGTK's main and main_quit with a dummy so user code does not\n block IPython. This allows us to use %run to run arbitrary pygtk\n scripts from a long-lived IPython session, and when they attempt to\n start or stop\n\n Returns\n -------\n The original functions that have been hijacked:\n - gtk.main\n - gtk.main_quit"}
{"_id": "q_18814", "text": "Is the given identifier defined in one of the namespaces which shadow\n the alias and magic namespaces? Note that an identifier is different\n from ifun, because it cannot contain a '.' character."}
{"_id": "q_18815", "text": "Create the default transformers."}
{"_id": "q_18816", "text": "Register a transformer instance."}
{"_id": "q_18817", "text": "Unregister a transformer instance."}
{"_id": "q_18818", "text": "Create the default checkers."}
{"_id": "q_18819", "text": "Register a checker instance."}
{"_id": "q_18820", "text": "Create the default handlers."}
{"_id": "q_18821", "text": "Register a handler instance by name with esc_strings."}
{"_id": "q_18822", "text": "Unregister a handler instance by name with esc_strings."}
{"_id": "q_18823", "text": "Calls the enabled transformers in order of increasing priority."}
{"_id": "q_18824", "text": "Prefilter multiple input lines of text.\n\n This is the main entry point for prefiltering multiple lines of\n input. This simply calls :meth:`prefilter_line` for each line of\n input.\n\n This covers cases where there are multiple lines in the user entry,\n which is the case when the user goes back to a multiline history\n entry and presses enter."}
{"_id": "q_18825", "text": "Check if the initial identifier on the line is an alias."}
{"_id": "q_18826", "text": "Handle normal input lines. Use as a template for handlers."}
{"_id": "q_18827", "text": "Handle alias input lines."}
{"_id": "q_18828", "text": "Execute the line in a shell, empty return value"}
{"_id": "q_18829", "text": "Execute magic functions."}
{"_id": "q_18830", "text": "Handle lines which can be auto-executed, quoting if requested."}
{"_id": "q_18831", "text": "Try to get some help for the object.\n\n obj? or ?obj -> basic information.\n obj?? or ??obj -> more details."}
{"_id": "q_18832", "text": "Reimplemented to hide on certain key presses and on text edit focus\n changes."}
{"_id": "q_18833", "text": "Reimplemented to cancel the hide timer."}
{"_id": "q_18834", "text": "Reimplemented to paint the background panel."}
{"_id": "q_18835", "text": "Attempts to show the specified call line and docstring at the\n current cursor location. The docstring is possibly truncated for\n length."}
{"_id": "q_18836", "text": "Attempts to show the specified tip at the current cursor location."}
{"_id": "q_18837", "text": "Create a property that proxies attribute ``proxied_attr`` through\n the local attribute ``local_attr``."}
{"_id": "q_18838", "text": "Canonicalizes a path relative to a given working directory. That\n is, the path, if not absolute, is interpreted relative to the\n working directory, then converted to absolute form.\n\n :param cwd: The working directory.\n :param path: The path to canonicalize.\n\n :returns: The absolute path."}
{"_id": "q_18839", "text": "Schema validation helper. Performs JSONSchema validation. If a\n schema validation error is encountered, an exception of the\n designated class is raised with the validation error message\n appropriately simplified and passed as the sole positional\n argument.\n\n :param instance: The object to schema validate.\n :param schema: The schema to use for validation.\n :param exc_class: The exception class to raise instead of the\n ``jsonschema.ValidationError`` exception.\n :param prefix: Positional arguments are interpreted as a list of\n keys to prefix to the path contained in the\n validation error.\n :param kwargs: Keyword arguments to pass to the exception\n constructor."}
{"_id": "q_18840", "text": "Retrieve a read-only subordinate mapping. All values are\n stringified, and sensitive values are masked. The subordinate\n mapping implements the context manager protocol for\n convenience."}
{"_id": "q_18841", "text": "Return a CouchDB document, given its ID, revision and database name."}
{"_id": "q_18842", "text": "Give reST format README for pypi."}
{"_id": "q_18843", "text": "Return True if in a venv and no system site packages."}
{"_id": "q_18844", "text": "Parallel word frequency counter.\n \n view - An IPython DirectView\n fnames - The filenames containing the split data."}
{"_id": "q_18845", "text": "Convert a function based decorator into a class based decorator usable\n\ton class based Views.\n\n\tCan't subclass the `View` as it breaks inheritance (super in particular),\n\tso we monkey-patch instead.\n\n\tBased on http://stackoverflow.com/a/8429311"}
{"_id": "q_18846", "text": "Return list of shell aliases to auto-define."}
{"_id": "q_18847", "text": "Define a new alias after validating it.\n\n This will raise an :exc:`AliasError` if there are validation\n problems."}
{"_id": "q_18848", "text": "Call an alias given its name and the rest of the line."}
{"_id": "q_18849", "text": "Transform alias to system command string."}
{"_id": "q_18850", "text": "Expand an alias in the command line\n\n Returns the provided command line, possibly with the first word\n (command) translated according to alias expansion rules.\n\n [ipython]|16> _ip.expand_aliases(\"np myfile.txt\")\n <16> 'q:/opt/np/notepad++.exe myfile.txt'"}
{"_id": "q_18851", "text": "produces rst from nose help"}
{"_id": "q_18852", "text": "Yields substrings for which the same escape code applies."}
{"_id": "q_18853", "text": "Returns a QColor for a given color code, or None if one cannot be\n constructed."}
{"_id": "q_18854", "text": "Returns a QTextCharFormat that encodes the current style attributes."}
{"_id": "q_18855", "text": "Generate a one-time jwt with an age in seconds"}
{"_id": "q_18856", "text": "Run by housekeeper thread"}
{"_id": "q_18857", "text": "Get common prefix for completions\n\n Return the longest common prefix of a list of strings, but with special\n treatment of escape characters that might precede commands in IPython,\n such as %magic functions. Used in tab completion.\n\n For a more general function, see os.path.commonprefix"}
{"_id": "q_18858", "text": "Reimplemented to ensure a console-like behavior in the underlying\n text widgets."}
{"_id": "q_18859", "text": "Reimplemented to suggest a size that is 80 characters wide and\n 25 lines high."}
{"_id": "q_18860", "text": "Returns whether text can be cut to the clipboard."}
{"_id": "q_18861", "text": "Returns whether text can be pasted from the clipboard."}
{"_id": "q_18862", "text": "Clear the console.\n\n Parameters:\n -----------\n keep_input : bool, optional (default True)\n If set, restores the old input buffer if a new prompt is written."}
{"_id": "q_18863", "text": "Executes source or the input buffer, possibly prompting for more\n input.\n\n Parameters:\n -----------\n source : str, optional\n\n The source to execute. If not specified, the input buffer will be\n used. If specified and 'hidden' is False, the input buffer will be\n replaced with the source before execution.\n\n hidden : bool, optional (default False)\n\n If set, no output will be shown and the prompt will not be modified.\n In other words, it will be completely invisible to the user that\n an execution has occurred.\n\n interactive : bool, optional (default False)\n\n Whether the console is to treat the source as having been manually\n entered by the user. The effect of this parameter depends on the\n subclass implementation.\n\n Raises:\n -------\n RuntimeError\n If incomplete input is given and 'hidden' is True. In this case,\n it is not possible to prompt for more input.\n\n Returns:\n --------\n A boolean indicating whether the source was executed."}
{"_id": "q_18864", "text": "Sets the text in the input buffer.\n\n If the console is currently executing, this call has no *immediate*\n effect. When the execution is finished, the input buffer will be updated\n appropriately."}
{"_id": "q_18865", "text": "Print the contents of the ConsoleWidget to the specified QPrinter."}
{"_id": "q_18866", "text": "Moves the prompt to the top of the viewport."}
{"_id": "q_18867", "text": "Sets the font to the default fixed-width font for this platform."}
{"_id": "q_18868", "text": "A low-level method for appending content to the end of the buffer.\n\n If 'before_prompt' is enabled, the content will be inserted before the\n current prompt, if there is one."}
{"_id": "q_18869", "text": "Appends HTML at the end of the console buffer."}
{"_id": "q_18870", "text": "Appends HTML, then returns the plain text version of it."}
{"_id": "q_18871", "text": "Appends plain text, processing ANSI codes if enabled."}
{"_id": "q_18872", "text": "Clears the \"temporary text\" buffer, i.e. all the text following\n the prompt region."}
{"_id": "q_18873", "text": "fill the area below the active editting zone with text"}
{"_id": "q_18874", "text": "Filter key events for the paging widget to create console-like\n interface."}
{"_id": "q_18875", "text": "Convenience method that returns a cursor for the last character."}
{"_id": "q_18876", "text": "Convenience method that returns a cursor with text selected between\n the positions 'start' and 'end'."}
{"_id": "q_18877", "text": "Inserts plain text using the specified cursor, processing ANSI codes\n if enabled."}
{"_id": "q_18878", "text": "Ensures that the cursor is inside the editing region. Returns\n whether the cursor was moved."}
{"_id": "q_18879", "text": "Cancels the current editing task ala Ctrl-G in Emacs."}
{"_id": "q_18880", "text": "Displays text using the pager if it exceeds the height of the\n viewport.\n\n Parameters:\n -----------\n html : bool, optional (default False)\n If set, the text will be interpreted as HTML instead of plain text."}
{"_id": "q_18881", "text": "Called immediately after a new prompt is displayed."}
{"_id": "q_18882", "text": "Scrolls the viewport so that the specified cursor is at the top."}
{"_id": "q_18883", "text": "Expands the vertical scrollbar beyond the range set by Qt."}
{"_id": "q_18884", "text": "Copy a default config file into the active profile directory.\n\n Default configuration files are kept in :mod:`IPython.config.default`.\n This function moves these from that location to the working profile\n directory."}
{"_id": "q_18885", "text": "Create a profile dir by profile name and path.\n\n Parameters\n ----------\n path : unicode\n The path (directory) to put the profile directory in.\n name : unicode\n The name of the profile. The name of the profile directory will\n be \"profile_<profile>\"."}
{"_id": "q_18886", "text": "Convert a cmp= function into a key= function"}
{"_id": "q_18887", "text": "Read a file and close it. Returns the file source."}
{"_id": "q_18888", "text": "Take multiple lines of input.\n\n A list with each line of input as a separate element is returned when a\n termination string is entered (defaults to a single '.'). Input can also\n terminate via EOF (^D in Unix, ^Z-RET in Windows).\n\n Lines of input which end in \\\\ are joined into single entries (and a\n secondary continuation prompt is issued as long as the user terminates\n lines with \\\\). This allows entering very long strings which are still\n meant to be treated as single entities."}
{"_id": "q_18889", "text": "Write data to both channels."}
{"_id": "q_18890", "text": "add a new handler for new hearts"}
{"_id": "q_18891", "text": "add a new handler for heart failure"}
{"_id": "q_18892", "text": "a heart just beat"}
{"_id": "q_18893", "text": "Converts a list into a list of lists with equal batch_size.\n\n Parameters\n ----------\n sequence : list\n list of items to be placed in batches\n batch_size : int\n length of each sub list\n mod : int\n remainder of list length devided by batch_size\n mod = len(sequence) % batch_size\n randomize = bool\n should the initial sequence be randomized before being batched"}
{"_id": "q_18894", "text": "Generator for walking a directory tree.\n Starts at specified root folder, returning files that match our pattern. \n Optionally will also recurse through sub-folders.\n\n Parameters\n ----------\n root : string (default is *'.'*)\n Path for the root folder to look in.\n recurse : bool (default is *True*)\n If *True*, will also look in the subfolders.\n pattern : string (default is :emphasis:`'*'`, which means all the files are concerned)\n The pattern to look for in the files' name.\n\n Returns\n -------\n generator\n **Walk** yields a generator from the matching files paths."}
{"_id": "q_18895", "text": "Extract configuration data from a bdist_wininst .exe\n\n Returns a ConfigParser.RawConfigParser, or None"}
{"_id": "q_18896", "text": "Ensure that the importer caches dont have stale info for `path`"}
{"_id": "q_18897", "text": "Verify that there are no conflicting \"old-style\" packages"}
{"_id": "q_18898", "text": "When easy_install is about to run bdist_egg on a source dist, that\n source dist might have 'setup_requires' directives, requiring\n additional fetching. Ensure the fetcher options given to easy_install\n are available to that command as well."}
{"_id": "q_18899", "text": "Create directories under ~."}
{"_id": "q_18900", "text": "Return True if `name` is a considered as an archive file."}
{"_id": "q_18901", "text": "return a mutable proxy for the `obj`.\n\n all modify on the proxy will not apply on origin object."}
{"_id": "q_18902", "text": "return a readonly proxy for the `obj`.\n\n all modify on the proxy will not apply on origin object."}
{"_id": "q_18903", "text": "Create a new section cell with a given integer level."}
{"_id": "q_18904", "text": "Create a new metadata node."}
{"_id": "q_18905", "text": "Create a new author."}
{"_id": "q_18906", "text": "Whether `path` is a directory, to which the user has write access."}
{"_id": "q_18907", "text": "On Windows, remove leading and trailing quotes from filenames."}
{"_id": "q_18908", "text": "Return the 'home' directory, as a unicode string.\n\n * First, check for frozen env in case of py2exe\n * Otherwise, defer to os.path.expanduser('~')\n \n See stdlib docs for how this is determined.\n $HOME is first priority on *ALL* platforms.\n \n Parameters\n ----------\n \n require_writable : bool [default: False]\n if True:\n guarantees the return value is a writable directory, otherwise\n raises HomeDirError\n if False:\n The path is resolved, but it is not guaranteed to exist or be writable."}
{"_id": "q_18909", "text": "Return the XDG_CONFIG_HOME, if it is defined and exists, else None.\n\n This is only for non-OS X posix (Linux,Unix,etc.) systems."}
{"_id": "q_18910", "text": "Get the IPython directory for this platform and user.\n\n This uses the logic in `get_home_dir` to find the home directory\n and then adds .ipython to the end of the path."}
{"_id": "q_18911", "text": "Find the path to an IPython module in this version of IPython.\n\n This will always find the version of the module that is in this importable\n IPython package. This will always return the path to the ``.py``\n version of the module."}
{"_id": "q_18912", "text": "Determine whether a target is out of date.\n\n target_outdated(target,deps) -> 1/0\n\n deps: list of filenames which MUST exist.\n target: single filename which may or may not exist.\n\n If target doesn't exist or is older than any file listed in deps, return\n true, otherwise return false."}
{"_id": "q_18913", "text": "Make an MD5 hash of a file, ignoring any differences in line\n ending characters."}
{"_id": "q_18914", "text": "Check for old config files, and present a warning if they exist.\n\n A link to the docs of the new config is included in the message.\n\n This should mitigate confusion with the transition to the new\n config system in 0.11."}
{"_id": "q_18915", "text": "Updates the suggestions' dictionary for an object upon visiting its page"}
{"_id": "q_18916", "text": "Return this path as a relative path,\n based from the current working directory."}
{"_id": "q_18917", "text": "Return a list of path objects that match the pattern.\n\n pattern - a path relative to this directory, with wildcards.\n\n For example, path('/users').glob('*/bin/*') returns a list\n of all the files users have in their bin directories."}
{"_id": "q_18918", "text": "r\"\"\" Open this file, read all lines, return them in a list.\n\n Optional arguments:\n encoding - The Unicode encoding (or character set) of\n the file. The default is None, meaning the content\n of the file is read as 8-bit characters and returned\n as a list of (non-Unicode) str objects.\n errors - How to handle Unicode errors; see help(str.decode)\n for the options. Default is 'strict'\n retain - If true, retain newline characters; but all newline\n character combinations ('\\r', '\\n', '\\r\\n') are\n translated to '\\n'. If false, newline characters are\n stripped off. Default is True.\n\n This uses 'U' mode in Python 2.3 and later."}
{"_id": "q_18919", "text": "Enable event loop integration with PyGTK.\n\n Parameters\n ----------\n app : ignored\n Ignored, it's only a placeholder to keep the call signature of all\n gui activation methods consistent, which simplifies the logic of\n supporting magics.\n\n Notes\n -----\n This methods sets the PyOS_InputHook for PyGTK, which allows\n the PyGTK to integrate with terminal based applications like\n IPython."}
{"_id": "q_18920", "text": "Connect to the database, and create tables if necessary."}
{"_id": "q_18921", "text": "Prepares and runs an SQL query for the history database.\n\n Parameters\n ----------\n sql : str\n Any filtering expressions to go after SELECT ... FROM ...\n params : tuple\n Parameters passed to the SQL query (to replace \"?\")\n raw, output : bool\n See :meth:`get_range`\n\n Returns\n -------\n Tuples as :meth:`get_range`"}
{"_id": "q_18922", "text": "get info about a session\n\n Parameters\n ----------\n\n session : int\n Session number to retrieve. The current session is 0, and negative\n numbers count back from current session, so -1 is previous session.\n\n Returns\n -------\n\n (session_id [int], start [datetime], end [datetime], num_cmds [int],\n remark [unicode])\n\n Sessions that are running or did not exit cleanly will have `end=None`\n and `num_cmds=None`."}
{"_id": "q_18923", "text": "Get lines of history from a string of ranges, as used by magic\n commands %hist, %save, %macro, etc.\n\n Parameters\n ----------\n rangestr : str\n A string specifying ranges, e.g. \"5 ~2/1-4\". See\n :func:`magic_history` for full details.\n raw, output : bool\n As :meth:`get_range`\n\n Returns\n -------\n Tuples as :meth:`get_range`"}
{"_id": "q_18924", "text": "Give the current session a name in the history database."}
{"_id": "q_18925", "text": "Clear the session history, releasing all object references, and\n optionally open a new session."}
{"_id": "q_18926", "text": "Write any entries in the cache to the database."}
{"_id": "q_18927", "text": "This can be called from the main thread to safely stop this thread.\n\n Note that it does not attempt to write out remaining history before\n exiting. That should be done by calling the HistoryManager's\n end_session method."}
{"_id": "q_18928", "text": "Return the number of CPUs on the system"}
{"_id": "q_18929", "text": "Return a list of namedtuple representing the CPU times\n for every CPU available on the system."}
{"_id": "q_18930", "text": "Return mounted disk partitions as a list of nameduples"}
{"_id": "q_18931", "text": "Make a nice string representation of a pair of numbers.\n\n If the numbers are equal, just return the number, otherwise return the pair\n with a dash between them, indicating the range."}
{"_id": "q_18932", "text": "Return a string summarizing the call stack."}
{"_id": "q_18933", "text": "A decorator to cache the result of an expensive operation.\n\n Only applies to methods with no arguments."}
{"_id": "q_18934", "text": "Combine a list of regexes into one that matches any of them."}
{"_id": "q_18935", "text": "Remove a file, and don't get annoyed if it doesn't exist."}
{"_id": "q_18936", "text": "List all profiles in the ipython_dir and cwd."}
{"_id": "q_18937", "text": "Start a cluster for a given profile."}
{"_id": "q_18938", "text": "Stop a cluster for a given profile."}
{"_id": "q_18939", "text": "Find the full path to a .bat or .exe using the win32api module."}
{"_id": "q_18940", "text": "Callback for _system."}
{"_id": "q_18941", "text": "remove records from collection whose parameters match kwargs"}
{"_id": "q_18942", "text": "Resolve the URL to this point.\n\n >>> trello = TrelloAPIV1('APIKEY')\n >>> trello.batch._url\n '1/batch'\n >>> trello.boards(board_id='BOARD_ID')._url\n '1/boards/BOARD_ID'\n >>> trello.boards(board_id='BOARD_ID')(field='FIELD')._url\n '1/boards/BOARD_ID/FIELD'\n >>> trello.boards(board_id='BOARD_ID').cards(filter='FILTER')._url\n '1/boards/BOARD_ID/cards/FILTER'"}
{"_id": "q_18943", "text": "Makes the HTTP request."}
{"_id": "q_18944", "text": "Run a reporting function on a number of morfs.\n\n `report_fn` is called for each relative morf in `morfs`. It is called\n as::\n\n report_fn(code_unit, analysis)\n\n where `code_unit` is the `CodeUnit` for the morf, and `analysis` is\n the `Analysis` for the morf."}
{"_id": "q_18945", "text": "Call pdb.set_trace in the calling frame, first restoring\n sys.stdout to the real output stream. Note that sys.stdout is NOT\n reset to whatever it was before the call once pdb is done!"}
{"_id": "q_18946", "text": "Test must finish within specified time limit to pass.\n\n Example use::\n\n @timed(.1)\n def test_that_fails():\n time.sleep(.2)"}
{"_id": "q_18947", "text": "Load all IPython extensions in IPythonApp.extensions.\n\n This uses the :meth:`ExtensionManager.load_extensions` to load all\n the extensions listed in ``self.extensions``."}
{"_id": "q_18948", "text": "run the pre-flight code, specified via exec_lines"}
{"_id": "q_18949", "text": "Run files from profile startup directory"}
{"_id": "q_18950", "text": "Run files from IPythonApp.exec_files"}
{"_id": "q_18951", "text": "Run code or file specified at the command-line"}
{"_id": "q_18952", "text": "Run module specified at the command-line."}
{"_id": "q_18953", "text": "Return the contents of a data file of ours."}
{"_id": "q_18954", "text": "HTML-escape the text in `t`."}
{"_id": "q_18955", "text": "Generate an HTML report for `morfs`.\n\n `morfs` is a list of modules or filenames."}
{"_id": "q_18956", "text": "Make local instances of static files for HTML report."}
{"_id": "q_18957", "text": "Compute a hash that changes if the file needs to be re-reported."}
{"_id": "q_18958", "text": "Read the last status in `directory`."}
{"_id": "q_18959", "text": "Write the current status to `directory`."}
{"_id": "q_18960", "text": "Sort and compare two lists.\n\n By default it does it in place, thus modifying the lists. Use inplace = 0\n to avoid that (at the cost of temporary copy creation)."}
{"_id": "q_18961", "text": "Get a slice of a sequence with variable step. Specify start,stop,step."}
{"_id": "q_18962", "text": "Read configuration from setup.cfg."}
{"_id": "q_18963", "text": "Read existing configuration from MANIFEST.in.\n\n We use that to ignore anything the MANIFEST.in ignores."}
{"_id": "q_18964", "text": "List all files versioned by git in the current directory."}
{"_id": "q_18965", "text": "Skips over a field value.\n\n Args:\n tokenizer: A tokenizer to parse the field name and values.\n\n Raises:\n ParseError: In case an invalid field value is found."}
{"_id": "q_18966", "text": "Parses an integer.\n\n Args:\n text: The text to parse.\n is_signed: True if a signed integer must be parsed.\n is_long: True if a long integer must be parsed.\n\n Returns:\n The integer value.\n\n Raises:\n ValueError: Thrown Iff the text is not a valid integer."}
{"_id": "q_18967", "text": "Convert protobuf message to text format.\n\n Args:\n message: The protocol buffers message."}
{"_id": "q_18968", "text": "Merges a single scalar field into a message.\n\n Args:\n tokenizer: A tokenizer to parse the field value.\n message: The message of which field is a member.\n field: The descriptor of the field to be merged.\n\n Raises:\n ParseError: In case of text parsing problems."}
{"_id": "q_18969", "text": "Consume one token of a string literal.\n\n String literals (whether bytes or text) can come in multiple adjacent\n tokens which are automatically concatenated, like in C or Python. This\n method only consumes one token.\n\n Returns:\n The token parsed.\n Raises:\n ParseError: When the wrong format data is found."}
{"_id": "q_18970", "text": "Shutdown a kernel by its kernel uuid.\n\n Parameters\n ==========\n kernel_id : uuid\n The id of the kernel to shutdown."}
{"_id": "q_18971", "text": "Kill a kernel by its kernel uuid.\n\n Parameters\n ==========\n kernel_id : uuid\n The id of the kernel to kill."}
{"_id": "q_18972", "text": "Return a dictionary of ports for a kernel.\n\n Parameters\n ==========\n kernel_id : uuid\n The id of the kernel.\n\n Returns\n =======\n port_dict : dict\n A dict of key, value pairs where the keys are the names\n (stdin_port,iopub_port,shell_port) and the values are the\n integer port numbers for those channels."}
{"_id": "q_18973", "text": "Start a kernel for a notebok an return its kernel_id.\n\n Parameters\n ----------\n notebook_id : uuid\n The uuid of the notebook to associate the new kernel with. If this\n is not None, this kernel will be persistent whenever the notebook\n requests a kernel."}
{"_id": "q_18974", "text": "Shutdown a kernel and remove its notebook association."}
{"_id": "q_18975", "text": "Interrupt a kernel."}
{"_id": "q_18976", "text": "Create a new iopub stream."}
{"_id": "q_18977", "text": "Create a new hb stream."}
{"_id": "q_18978", "text": "convert ark timestamp to unix timestamp"}
{"_id": "q_18979", "text": "Reset all OneTimeProperty attributes that may have fired already."}
{"_id": "q_18980", "text": "Export the contents of the ConsoleWidget as HTML.\n\n Parameters:\n -----------\n html : str,\n A utf-8 encoded Python string containing the Qt HTML to export.\n\n filename : str\n The file to be saved.\n\n image_tag : callable, optional (default None)\n Used to convert images. See ``default_image_tag()`` for information.\n\n inline : bool, optional [default True]\n If True, include images as inline PNGs. Otherwise, include them as\n links to external PNG files, mimicking web browsers' \"Web Page,\n Complete\" behavior."}
{"_id": "q_18981", "text": "Export the contents of the ConsoleWidget as XHTML with inline SVGs.\n\n Parameters:\n -----------\n html : str,\n A utf-8 encoded Python string containing the Qt HTML to export.\n\n filename : str\n The file to be saved.\n\n image_tag : callable, optional (default None)\n Used to convert images. See ``default_image_tag()`` for information."}
{"_id": "q_18982", "text": "Transforms a Qt-generated HTML string into a standards-compliant one.\n\n Parameters:\n -----------\n html : str,\n A utf-8 encoded Python string containing the Qt HTML."}
{"_id": "q_18983", "text": "Returns a unique instance of `klass` or None"}
{"_id": "q_18984", "text": "Builds a query for included terms in a text search."}
{"_id": "q_18985", "text": "Builds a query for both included & excluded terms in a text search."}
{"_id": "q_18986", "text": "Query for if date_field is within number of \"days\" ago."}
{"_id": "q_18987", "text": "Converts queries to case insensitive for special fields."}
{"_id": "q_18988", "text": "Close the connection."}
{"_id": "q_18989", "text": "Register command line options"}
{"_id": "q_18990", "text": "Verify whether a method has the required attributes\n The method is considered a match if it matches all attributes\n for any attribute group.\n ."}
{"_id": "q_18991", "text": "Rotate the kill ring, then yank back the new top."}
{"_id": "q_18992", "text": "backport a few patches from newer pyzmq\n \n These can be removed as we bump our minimum pyzmq version"}
{"_id": "q_18993", "text": "websocket url matching the current request\n\n turns http[s]://host[:port] into\n ws[s]://host[:port]"}
{"_id": "q_18994", "text": "Reserialize a reply message using JSON.\n\n This takes the msg list from the ZMQ socket, unserializes it using\n self.session and then serializes the result using JSON. This method\n should be used by self._on_zmq_reply to build messages that can\n be sent back to the browser."}
{"_id": "q_18995", "text": "Inject the first message, which is the document cookie,\n for authentication."}
{"_id": "q_18996", "text": "callback for delayed heartbeat start\n \n Only start the hb loop if we haven't been closed during the wait."}
{"_id": "q_18997", "text": "Get the current block index, validating and checking status.\n\n Returns None if the demo is finished"}
{"_id": "q_18998", "text": "Move the current seek pointer to the given block.\n\n You can use negative indices to seek from the end, with identical\n semantics to those of Python lists."}
{"_id": "q_18999", "text": "Show a single block on screen"}
{"_id": "q_19000", "text": "Processes a collection in series \n\n Parameters\n ----------\n collection : list\n list of Record objects\n method : method to call on each Record\n prints : int\n number of timer prints to the screen\n\n Returns\n -------\n collection : list\n list of Record objects after going through method called\n \n If more than one collection is given, the function is called with an argument list \n consisting of the corresponding item of each collection, substituting None for \n missing values when not all collection have the same length. \n If the function is None, return the original collection (or a list of tuples if multiple collections).\n \n Example\n -------\n adding 2 to every number in a range\n\n >>> import turntable\n >>> collection = range(100)\n >>> method = lambda x: x + 2\n >>> collection = turntable.spin.series(collection, method)"}
{"_id": "q_19001", "text": "Processes a collection in parallel batches, \n each batch processes in series on a single process.\n Running batches in parallel can be more effficient that splitting a list across cores as in spin.parallel \n because of parallel processing has high IO requirements.\n\n Parameters\n ----------\n collection : list\n i.e. list of Record objects\n method : method to call on each Record\n processes : int\n number of processes to run on [defaults to number of cores on machine]\n batch_size : int\n lenght of each batch [defaults to number of elements / number of processes]\n\n Returns\n -------\n collection : list\n list of Record objects after going through method called\n\n Example\n -------\n adding 2 to every number in a range\n\n >>> import turntable\n >>> collection = range(100)\n >>> def jam(record):\n >>> return record + 2\n >>> collection = turntable.spin.batch(collection, jam)\n\n Note\n ----\n\n lambda functions do not work in parallel"}
{"_id": "q_19002", "text": "Processes a collection in parallel.\n\n Parameters\n ----------\n collection : list\n i.e. list of Record objects\n method : method to call on each Record\n processes : int\n number of processes to run on [defaults to number of cores on machine]\n batch_size : int\n lenght of each batch [defaults to number of elements / number of processes]\n\n Returns\n -------\n collection : list\n list of Record objects after going through method called\n\n Example\n -------\n adding 2 to every number in a range\n\n >>> import turntable\n >>> collection = range(100)\n >>> def jam(record):\n >>> return record + 2\n >>> collection = turntable.spin.parallel(collection, jam)\n\n Note\n ----\n\n lambda functions do not work in parallel"}
{"_id": "q_19003", "text": "Get source from a traceback object.\n\n A tuple of two things is returned: a list of lines of context from\n the source code, and the index of the current line within that list.\n The optional second argument specifies the number of lines of context\n to return, which are centered around the current line.\n\n .. Note ::\n This is adapted from inspect.py in the python 2.4 standard library, \n since a bug in the 2.3 version of inspect prevents it from correctly\n locating source lines in a traceback frame."}
{"_id": "q_19004", "text": "Create a countdown."}
{"_id": "q_19005", "text": "Cleanup routine to shut down all subprocesses we opened."}
{"_id": "q_19006", "text": "A modifier hook function. This is called in priority order prior\n to invoking the ``Action`` for the step. This allows a\n modifier to alter the context, or to take over subsequent\n action invocation.\n\n :param ctxt: The context object.\n :param pre_mod: A list of the modifiers preceding this\n modifier in the list of modifiers that is\n applicable to the action. This list is in\n priority order.\n :param post_mod: A list of the modifiers following this\n modifier in the list of modifiers that is\n applicable to the action. This list is in\n priority order.\n :param action: The action that will be performed.\n\n :returns: A ``None`` return value indicates that the modifier\n is taking no action. A non-``None`` return value\n should consist of a ``StepResult`` object; this will\n suspend further ``pre_call()`` processing and\n proceed to the ``post_call()`` processing. This\n implementation returns a ``StepResult`` with state\n ``SKIPPED`` if the condition does not evaluate to\n ``True``."}
{"_id": "q_19007", "text": "sync relevant results from self.client to our results attribute."}
{"_id": "q_19008", "text": "call spin after the method."}
{"_id": "q_19009", "text": "Get all messages that are currently ready."}
{"_id": "q_19010", "text": "Gets a message if there is one that is ready."}
{"_id": "q_19011", "text": "`get_onlys` is a sugar for multi-`property`.\n\n ``` py\n name, age = get_onlys('_name', '_age')\n\n # equals:\n\n @property\n def name(self):\n return getattr(self, '_name')\n\n @property\n def age(self):\n return getattr(self, '_age')\n ```"}
{"_id": "q_19012", "text": "Parses a database URL."}
{"_id": "q_19013", "text": "Easily create a trivial completer for a command.\n\n Takes either a list of completions, or all completions in string (that will\n be split on whitespace).\n\n Example::\n\n [d:\\ipython]|1> import ipy_completers\n [d:\\ipython]|2> ipy_completers.quick_completer('foo', ['bar','baz'])\n [d:\\ipython]|3> foo b<TAB>\n bar baz\n [d:\\ipython]|3> foo ba"}
{"_id": "q_19014", "text": "Returns a list containing the completion possibilities for an import line.\n\n The line looks like this :\n 'import xml.d'\n 'from xml.dom import'"}
{"_id": "q_19015", "text": "Complete files that end in .py or .ipy for the %run command."}
{"_id": "q_19016", "text": "Completer function for cd, which only returns directories."}
{"_id": "q_19017", "text": "Append numbers in sequential order to the filename or folder name\r\n\tNumbers should be appended before the extension on a filename."}
{"_id": "q_19018", "text": "Set the modified time of a file"}
{"_id": "q_19019", "text": "wrap a function that returns a dir, making sure it exists"}
{"_id": "q_19020", "text": "Get closer to your EOL"}
{"_id": "q_19021", "text": "Open a connection over the serial line and receive data lines"}
{"_id": "q_19022", "text": "Escape an XML attribute. Value can be unicode."}
{"_id": "q_19023", "text": "Configures the xunit plugin."}
{"_id": "q_19024", "text": "Writes an Xunit-formatted XML file\n\n The file includes a report of test errors and failures."}
{"_id": "q_19025", "text": "Add error output to Xunit report."}
{"_id": "q_19026", "text": "Add failure output to Xunit report."}
{"_id": "q_19027", "text": "Add success output to Xunit report."}
{"_id": "q_19028", "text": "create & start main thread\n\n :return: None"}
{"_id": "q_19029", "text": "Pick two at random, use the LRU of the two.\n\n The content of loads is ignored.\n\n Assumes LRU ordering of loads, with oldest first."}
{"_id": "q_19030", "text": "Pick two at random using inverse load as weight.\n\n Return the less loaded of the two."}
{"_id": "q_19031", "text": "Existing engine with ident `uid` became unavailable."}
{"_id": "q_19032", "text": "Deal with jobs resident in an engine that died."}
{"_id": "q_19033", "text": "Dispatch job submission to appropriate handlers."}
{"_id": "q_19034", "text": "Audit all waiting tasks for expired timeouts."}
{"_id": "q_19035", "text": "check location dependencies, and run if they are met."}
{"_id": "q_19036", "text": "Save a message for later submission when its dependencies are met."}
{"_id": "q_19037", "text": "dispatch method for result replies"}
{"_id": "q_19038", "text": "handle a real task result, either success or failure"}
{"_id": "q_19039", "text": "handle an unmet dependency"}
{"_id": "q_19040", "text": "dep_id just finished. Update our dependency\n graph and submit any jobs that just became runable.\n\n Called with dep_id=None to update entire graph for hwm, but without finishing\n a task."}
{"_id": "q_19041", "text": "Generate a new log-file with a default header.\n\n Raises RuntimeError if the log has already been started"}
{"_id": "q_19042", "text": "Write the sources to a log.\n\n Inputs:\n\n - line_mod: possibly modified input, such as the transformations made\n by input prefilters or input handlers of various kinds. This should\n always be valid Python.\n\n - line_ori: unmodified input line from the user. This is not\n necessarily valid Python."}
{"_id": "q_19043", "text": "Fully stop logging and close log file.\n\n In order to start logging again, a new logstart() call needs to be\n made, possibly (though not necessarily) with a new filename, mode and\n other options."}
{"_id": "q_19044", "text": "Create a worksheet by name with with a list of cells."}
{"_id": "q_19045", "text": "Adds a target 'string' for dispatching"}
{"_id": "q_19046", "text": "Adds a target regexp for dispatching"}
{"_id": "q_19047", "text": "Yield all 'value' targets, without priority"}
{"_id": "q_19048", "text": "do a bit of validation of the notebook dir"}
{"_id": "q_19049", "text": "Delete a notebook's id only. This doesn't delete the actual notebook."}
{"_id": "q_19050", "text": "Does a notebook exist?"}
{"_id": "q_19051", "text": "Return a full path to a notebook given its notebook_id."}
{"_id": "q_19052", "text": "Get the representation of a notebook in format by notebook_id."}
{"_id": "q_19053", "text": "Get the NotebookNode representation of a notebook by notebook_id."}
{"_id": "q_19054", "text": "Save an existing notebook by notebook_id."}
{"_id": "q_19055", "text": "Save an existing notebook object by notebook_id."}
{"_id": "q_19056", "text": "Delete notebook by notebook_id."}
{"_id": "q_19057", "text": "Load the default config file from the default ipython_dir.\n\n This is useful for embedded shells."}
{"_id": "q_19058", "text": "This has to be in a method, for TerminalIPythonApp to be available."}
{"_id": "q_19059", "text": "initialize the InteractiveShell instance"}
{"_id": "q_19060", "text": "optionally display the banner"}
{"_id": "q_19061", "text": "Return a string representation of a value and its type for readable\n error messages."}
{"_id": "q_19062", "text": "Get a list of all the traits of this class.\n\n This method is just like the :meth:`traits` method, but is unbound.\n\n The TraitTypes returned don't know anything about the values\n that the various HasTrait's instances are holding.\n\n This follows the same algorithm as traits does and does not allow\n for any simple way of specifying merely that a metadata name\n exists, but has any value. This is because get_metadata returns\n None if a metadata key doesn't exist."}
{"_id": "q_19063", "text": "Get metadata values for trait by key."}
{"_id": "q_19064", "text": "Validates that the value is a valid object instance."}
{"_id": "q_19065", "text": "Instantiate a default value instance.\n\n This is called when the containing HasTraits classes'\n :meth:`__new__` method is called to ensure that a unique instance\n is created for each HasTraits instance."}
{"_id": "q_19066", "text": "return whether this dependency has become impossible."}
{"_id": "q_19067", "text": "Represent this dependency as a dict. For json compatibility."}
{"_id": "q_19068", "text": "Scans through all children of node and gathers the\n text. If node has non-text child-nodes then\n NotTextNodeError is raised."}
{"_id": "q_19069", "text": "Get the number of credits remaining at AmbientSMS"}
{"_id": "q_19070", "text": "Send a mesage via the AmbientSMS API server"}
{"_id": "q_19071", "text": "Inteface for sending web requests to the AmbientSMS API Server"}
{"_id": "q_19072", "text": "Called for each file\n Must return file content\n Can be wrapped\n\n :type f: static_bundle.files.StaticFileResult\n :type text: str|unicode\n :rtype: str|unicode"}
{"_id": "q_19073", "text": "Convert a date or time to a datetime. If when is a date then it sets the time to midnight. If\n when is a time it sets the date to the epoch. If when is None or a datetime it returns when.\n Otherwise a TypeError is raised. Returned datetimes have tzinfo set to None unless when is a\n datetime with tzinfo set in which case it remains the same."}
{"_id": "q_19074", "text": "Return a Unix timestamp in seconds for the provided datetime. The `totz` function is called\n on the datetime to convert it to the provided timezone. It will be converted to UTC if no\n timezone is provided."}
{"_id": "q_19075", "text": "Return a Unix timestamp in milliseconds for the provided datetime. The `totz` function is\n called on the datetime to convert it to the provided timezone. It will be converted to UTC if\n no timezone is provided."}
{"_id": "q_19076", "text": "Return the datetime representation of the provided Unix timestamp. By defaults the timestamp is\n interpreted as UTC. If tzin is set it will be interpreted as this timestamp instead. By default\n the output datetime will have UTC time. If tzout is set it will be converted in this timezone\n instead."}
{"_id": "q_19077", "text": "Return the Unix timestamp in milliseconds as a datetime object. If tz is set it will be\n converted to the requested timezone otherwise it defaults to UTC."}
{"_id": "q_19078", "text": "Return the datetime truncated to the precision of the provided unit."}
{"_id": "q_19079", "text": "Return the date for the day of this week."}
{"_id": "q_19080", "text": "print a binary tree"}
{"_id": "q_19081", "text": "parallel reduce followed by broadcast of the result"}
{"_id": "q_19082", "text": "Internal function that determines EOL_STYLE_NATIVE constant with the proper value for the\n current platform."}
{"_id": "q_19083", "text": "Normalizes a path maintaining the final slashes.\n\n Some environment variables need the final slash in order to work.\n\n Ex. The SOURCES_DIR set by subversion must end with a slash because of the way it is used\n in the Visual Studio projects.\n\n :param unicode path:\n The path to normalize.\n\n :rtype: unicode\n :returns:\n Normalized path"}
{"_id": "q_19084", "text": "Returns a version of a path that is unique.\n\n Given two paths path1 and path2:\n CanonicalPath(path1) == CanonicalPath(path2) if and only if they represent the same file on\n the host OS. Takes account of case, slashes and relative paths.\n\n :param unicode path:\n The original path.\n\n :rtype: unicode\n :returns:\n The unique path."}
{"_id": "q_19085", "text": "Replaces all slashes and backslashes with the target separator\n\n StandardPath:\n We are defining that the standard-path is the one with only back-slashes in it, either\n on Windows or any other platform.\n\n :param bool strip:\n If True, removes additional slashes from the end of the path."}
{"_id": "q_19086", "text": "Copy a file from source to target.\n\n :param source_filename:\n @see _DoCopyFile\n\n :param target_filename:\n @see _DoCopyFile\n\n :param bool md5_check:\n If True, checks md5 files (of both source and target files), if they match, skip this copy\n and return MD5_SKIP\n\n Md5 files are assumed to be {source, target} + '.md5'\n\n If any file is missing (source, target or md5), the copy will always be made.\n\n :param copy_symlink:\n @see _DoCopyFile\n\n :raises FileAlreadyExistsError:\n If target_filename already exists, and override is False\n\n :raises NotImplementedProtocol:\n If file protocol is not accepted\n\n Protocols allowed are:\n source_filename: local, ftp, http\n target_filename: local, ftp\n\n :rtype: None | MD5_SKIP\n :returns:\n MD5_SKIP if the file was not copied because there was a matching .md5 file\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information"}
{"_id": "q_19087", "text": "Copy a file locally to a directory.\n\n :param unicode source_filename:\n The filename to copy from.\n\n :param unicode target_filename:\n The filename to copy to.\n\n :param bool copy_symlink:\n If True and source_filename is a symlink, target_filename will also be created as\n a symlink.\n\n If False, the file being linked will be copied instead."}
{"_id": "q_19088", "text": "Copies files into directories, according to a file mapping\n\n :param list(tuple(unicode,unicode)) file_mapping:\n A list of mappings between the directory in the target and the source.\n For syntax, @see: ExtendedPathMask\n\n :rtype: list(tuple(unicode,unicode))\n :returns:\n List of files copied. (source_filename, target_filename)\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information"}
{"_id": "q_19089", "text": "Moves a directory.\n\n :param unicode source_dir:\n\n :param unicode target_dir:\n\n :raises NotImplementedError:\n If trying to move anything other than:\n Local dir -> local dir\n FTP dir -> FTP dir (same host)"}
{"_id": "q_19090", "text": "Reads a file and returns its contents. Works for both local and remote files.\n\n :param unicode filename:\n\n :param bool binary:\n If True returns the file as is, ignore any EOL conversion.\n\n :param unicode encoding:\n File's encoding. If not None, contents obtained from file will be decoded using this\n `encoding`.\n\n :param None|''|'\\n'|'\\r'|'\\r\\n' newline:\n Controls universal newlines.\n See 'io.open' newline parameter documentation for more details.\n\n :returns str|unicode:\n The file's contents.\n Returns unicode string when `encoding` is not None.\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information"}
{"_id": "q_19091", "text": "Lists the files in the given directory\n\n :type directory: unicode | unicode\n :param directory:\n A directory or URL\n\n :rtype: list(unicode) | list(unicode)\n :returns:\n List of filenames/directories found in the given directory.\n Returns None if the given directory does not exists.\n\n If `directory` is a unicode string, all files returned will also be unicode\n\n :raises NotImplementedProtocol:\n If file protocol is not local or FTP\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information"}
{"_id": "q_19092", "text": "Create a file with the given contents.\n\n :param unicode filename:\n Filename and path to be created.\n\n :param unicode contents:\n The file contents as a string.\n\n :type eol_style: EOL_STYLE_XXX constant\n :param eol_style:\n Replaces the EOL by the appropriate EOL depending on the eol_style value.\n Considers that all content is using only \"\\n\" as EOL.\n\n :param bool create_dir:\n If True, also creates directories needed in filename's path\n\n :param unicode encoding:\n Target file's content encoding. Defaults to sys.getfilesystemencoding()\n Ignored if `binary` = True\n\n :param bool binary:\n If True, file is created in binary mode. In this case, `contents` must be `bytes` and not\n `unicode`\n\n :return unicode:\n Returns the name of the file created.\n\n :raises NotImplementedProtocol:\n If file protocol is not local or FTP\n\n :raises ValueError:\n If trying to mix unicode `contents` without `encoding`, or `encoding` without\n unicode `contents`\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information"}
{"_id": "q_19093", "text": "Create directory including any missing intermediate directory.\n\n :param unicode directory:\n\n :return unicode|urlparse.ParseResult:\n Returns the created directory or url (see urlparse).\n\n :raises NotImplementedProtocol:\n If protocol is not local or FTP.\n\n .. seealso:: FTP LIMITATIONS at this module's doc for performance issues information"}
{"_id": "q_19094", "text": "Deletes a directory.\n\n :param unicode directory:\n\n :param bool skip_on_error:\n If True, ignore any errors when trying to delete directory (for example, directory not\n found)\n\n :raises NotImplementedForRemotePathError:\n If trying to delete a remote directory."}
{"_id": "q_19095", "text": "Create a symbolic link at `link_path` pointing to `target_path`.\n\n :param unicode target_path:\n Link target\n\n :param unicode link_path:\n Fullpath to link name\n\n :param bool override:\n If True and `link_path` already exists as a link, that link is overridden."}
{"_id": "q_19096", "text": "Read the target of the symbolic link at `path`.\n\n :param unicode path:\n Path to a symbolic link\n\n :returns unicode:\n Target of a symbolic link"}
{"_id": "q_19097", "text": "Checks if a given path is local, raise an exception if not.\n\n This is used in filesystem functions that do not support remote operations yet.\n\n :param unicode path:\n\n :raises NotImplementedForRemotePathError:\n If the given path is not local"}
{"_id": "q_19098", "text": "Replaces eol on each line by the given eol_style.\n\n :param unicode contents:\n :type eol_style: EOL_STYLE_XXX constant\n :param eol_style:"}
{"_id": "q_19099", "text": "Verifies if a filename match with given patterns.\n\n :param str filename: The filename to match.\n :param list(str) masks: The patterns to search in the filename.\n :return bool:\n True if the filename has matched with one pattern, False otherwise."}
{"_id": "q_19100", "text": "Searches for files in a given directory that match with the given patterns.\n\n :param str dir_: the directory root, to search the files.\n :param list(str) in_filters: a list with patterns to match (default = all). E.g.: ['*.py']\n :param list(str) out_filters: a list with patterns to ignore (default = none). E.g.: ['*.py']\n :param bool recursive: if True search in subdirectories, otherwise, just in the root.\n :param bool include_root_dir: if True, includes the directory being searched in the returned paths\n :param bool standard_paths: if True, always uses unix path separators \"/\"\n :return list(str):\n A list of strings with the files that matched (with the full path in the filesystem)."}
{"_id": "q_19101", "text": "os.path.expanduser wrapper, necessary because it cannot handle unicode strings properly.\n\n This is not necessary in Python 3.\n\n :param path:\n .. seealso:: os.path.expanduser"}
{"_id": "q_19102", "text": "A context manager to replace and restore a value using a getter and setter.\n\n :param object obj: The object to replace/restore.\n :param object key: The key to replace/restore in the object.\n :param object value: The value to replace.\n\n Example::\n\n with PushPop2(sys.modules, 'alpha', None):\n pytest.raises(ImportError):\n import alpha"}
{"_id": "q_19103", "text": "turn any valid targets argument into a list of integer ids"}
{"_id": "q_19104", "text": "all ME and Task queue messages come through here, as well as\n IOPub traffic."}
{"_id": "q_19105", "text": "Route registration requests and queries from clients."}
{"_id": "q_19106", "text": "handler to attach to heartbeater.\n called when a previously registered heart fails to respond to beat request.\n triggers unregistration"}
{"_id": "q_19107", "text": "Save the submission of a task."}
{"_id": "q_19108", "text": "save an iopub message into the db"}
{"_id": "q_19109", "text": "Register a new engine."}
{"_id": "q_19110", "text": "Unregister an engine that explicitly requested to leave."}
{"_id": "q_19111", "text": "Second half of engine registration, called after our HeartMonitor\n has received a beat from the Engine's Heart."}
{"_id": "q_19112", "text": "handle shutdown request."}
{"_id": "q_19113", "text": "Purge results from memory. This method is more valuable before we move\n to a DB based message storage mechanism."}
{"_id": "q_19114", "text": "decompose a TaskRecord dict into subsection of reply for get_result"}
{"_id": "q_19115", "text": "Get the result of 1 or more messages."}
{"_id": "q_19116", "text": "go to the path"}
{"_id": "q_19117", "text": "return a standard message"}
{"_id": "q_19118", "text": "subprocess run on here"}
{"_id": "q_19119", "text": "Returns whether a reply from the kernel originated from a request\n from this frontend."}
{"_id": "q_19120", "text": "Annotate a single file.\n\n `cu` is the CodeUnit for the file to annotate."}
{"_id": "q_19121", "text": "Return a CouchDB database instance from a database string."}
{"_id": "q_19122", "text": "Make sure a DB specifier exists, creating it if necessary."}
{"_id": "q_19123", "text": "check packers for binary data and datetime support."}
{"_id": "q_19124", "text": "Return the nested message dict.\n\n This format is different from what is sent over the wire. The\n serialize/unserialize methods converts this nested message dict to the wire\n format, which is a list of message parts."}
{"_id": "q_19125", "text": "Sign a message with HMAC digest. If no auth, return b''.\n\n Parameters\n ----------\n msg_list : list\n The [p_header,p_parent,p_content] part of the message list."}
{"_id": "q_19126", "text": "Serialize the message components to bytes.\n\n This is roughly the inverse of unserialize. The serialize/unserialize\n methods work with full message lists, whereas pack/unpack work with\n the individual message parts in the message list.\n\n Parameters\n ----------\n msg : dict or Message\n The nexted message dict as returned by the self.msg method.\n\n Returns\n -------\n msg_list : list\n The list of bytes objects to be sent with the format:\n [ident1,ident2,...,DELIM,HMAC,p_header,p_parent,p_content,\n buffer1,buffer2,...]. In this list, the p_* entities are\n the packed or serialized versions, so if JSON is used, these\n are utf8 encoded JSON strings."}
{"_id": "q_19127", "text": "Build and send a message via stream or socket.\n\n The message format used by this function internally is as follows:\n\n [ident1,ident2,...,DELIM,HMAC,p_header,p_parent,p_content,\n buffer1,buffer2,...]\n\n The serialize/unserialize methods convert the nested message dict into this\n format.\n\n Parameters\n ----------\n\n stream : zmq.Socket or ZMQStream\n The socket-like object used to send the data.\n msg_or_type : str or Message/dict\n Normally, msg_or_type will be a msg_type unless a message is being\n sent more than once. If a header is supplied, this can be set to\n None and the msg_type will be pulled from the header.\n\n content : dict or None\n The content of the message (ignored if msg_or_type is a message).\n header : dict or None\n The header dict for the message (ignores if msg_to_type is a message).\n parent : Message or dict or None\n The parent or parent header describing the parent of this message\n (ignored if msg_or_type is a message).\n ident : bytes or list of bytes\n The zmq.IDENTITY routing path.\n subheader : dict or None\n Extra header keys for this message's header (ignored if msg_or_type\n is a message).\n buffers : list or None\n The already-serialized buffers to be appended to the message.\n track : bool\n Whether to track. Only for use with Sockets, because ZMQStream\n objects cannot track messages.\n\n Returns\n -------\n msg : dict\n The constructed message.\n (msg,tracker) : (dict, MessageTracker)\n if track=True, then a 2-tuple will be returned,\n the first element being the constructed\n message, and the second being the MessageTracker"}
{"_id": "q_19128", "text": "Send a raw message via ident path.\n\n This method is used to send a already serialized message.\n\n Parameters\n ----------\n stream : ZMQStream or Socket\n The ZMQ stream or socket to use for sending the message.\n msg_list : list\n The serialized list of messages to send. This only includes the\n [p_header,p_parent,p_content,buffer1,buffer2,...] portion of\n the message.\n ident : ident or list\n A single ident or a list of idents to use in sending."}
{"_id": "q_19129", "text": "Receive and unpack a message.\n\n Parameters\n ----------\n socket : ZMQStream or Socket\n The socket or stream to use in receiving.\n\n Returns\n -------\n [idents], msg\n [idents] is a list of idents and msg is a nested message dict of\n same format as self.msg returns."}
{"_id": "q_19130", "text": "Split the identities from the rest of the message.\n\n Feed until DELIM is reached, then return the prefix as idents and\n remainder as msg_list. This is easily broken by setting an IDENT to DELIM,\n but that would be silly.\n\n Parameters\n ----------\n msg_list : a list of Message or bytes objects\n The message to be split.\n copy : bool\n flag determining whether the arguments are bytes or Messages\n\n Returns\n -------\n (idents, msg_list) : two lists\n idents will always be a list of bytes, each of which is a ZMQ\n identity. msg_list will be a list of bytes or zmq.Messages of the\n form [HMAC,p_header,p_parent,p_content,buffer1,buffer2,...] and\n should be unpackable/unserializable via self.unserialize at this\n point."}
{"_id": "q_19131", "text": "Copy a SVG document to the clipboard.\n\n Parameters:\n -----------\n string : basestring\n A Python string containing a SVG document."}
{"_id": "q_19132", "text": "Convert a SVG document to a QImage.\n\n Parameters:\n -----------\n string : basestring\n A Python string containing a SVG document.\n\n size : QSize, optional\n The size of the image that is produced. If not specified, the SVG\n document's default size is used.\n \n Raises:\n -------\n ValueError\n If an invalid SVG string is provided.\n\n Returns:\n --------\n A QImage of format QImage.Format_ARGB32."}
{"_id": "q_19133", "text": "Make an object info dict with all fields present."}
{"_id": "q_19134", "text": "Wrapper around inspect.getsource.\n\n This can be modified by other projects to provide customized source\n extraction.\n\n Inputs:\n\n - obj: an object whose source code we will attempt to extract.\n\n Optional inputs:\n\n - is_binary: whether the object is known to come from a binary source.\n This implementation will skip returning any output for binary objects, but\n custom extractors may know how to meaningfully process them."}
{"_id": "q_19135", "text": "Find the absolute path to the file where an object was defined.\n\n This is essentially a robust wrapper around `inspect.getabsfile`.\n\n Returns None if no file can be found.\n\n Parameters\n ----------\n obj : any Python object\n\n Returns\n -------\n fname : str\n The absolute path to the file where the object was defined."}
{"_id": "q_19136", "text": "Find the line number in a file where an object was defined.\n\n This is essentially a robust wrapper around `inspect.getsourcelines`.\n\n Returns None if no file can be found.\n\n Parameters\n ----------\n obj : any Python object\n\n Returns\n -------\n lineno : int\n The line number where the object definition starts."}
{"_id": "q_19137", "text": "Return the definition header for any callable object.\n\n If any exception is generated, None is returned instead and the\n exception is suppressed."}
{"_id": "q_19138", "text": "Return a header string with proper colors."}
{"_id": "q_19139", "text": "Generic message when no information is found."}
{"_id": "q_19140", "text": "Print the definition header for any callable object.\n\n If the object is a class, print the constructor information."}
{"_id": "q_19141", "text": "Print the docstring for any object.\n\n Optional:\n -formatter: a function to run the docstring through for specially\n formatted docstrings.\n\n Examples\n --------\n\n In [1]: class NoInit:\n ...: pass\n\n In [2]: class NoDoc:\n ...: def __init__(self):\n ...: pass\n\n In [3]: %pdoc NoDoc\n No documentation found for NoDoc\n\n In [4]: %pdoc NoInit\n No documentation found for NoInit\n\n In [5]: obj = NoInit()\n\n In [6]: %pdoc obj\n No documentation found for obj\n\n In [5]: obj2 = NoDoc()\n\n In [6]: %pdoc obj2\n No documentation found for obj2"}
{"_id": "q_19142", "text": "Print the source code for an object."}
{"_id": "q_19143", "text": "Show the whole file where an object was defined."}
{"_id": "q_19144", "text": "Show detailed information about an object.\n\n Optional arguments:\n\n - oname: name of the variable pointing to the object.\n\n - formatter: special formatter for docstrings (see pdoc)\n\n - info: a structure with some information fields which may have been\n precomputed already.\n\n - detail_level: if set to 1, more information is given."}
{"_id": "q_19145", "text": "Start the Twisted reactor in a separate thread, if not already done.\n Returns the reactor.\n The thread will automatically be destroyed when all the tests are done."}
{"_id": "q_19146", "text": "By wrapping a test function with this decorator, you can return a\n twisted Deferred and the test will wait for the deferred to be triggered.\n The whole test function will run inside the Twisted event loop.\n\n The optional timeout parameter specifies the maximum duration of the test.\n The difference with timed() is that timed() will still wait for the test\n to end, while deferred() will stop the test when its timeout has expired.\n The latter is more desireable when dealing with network tests, because\n the result may actually never arrive.\n\n If the callback is triggered, the test has passed.\n If the errback is triggered or the timeout expires, the test has failed.\n\n Example::\n \n @deferred(timeout=5.0)\n def test_resolve():\n return reactor.resolve(\"www.python.org\")\n\n Attention! If you combine this decorator with other decorators (like\n \"raises\"), deferred() must be called *first*!\n\n In other words, this is good::\n \n @raises(DNSLookupError)\n @deferred()\n def test_error():\n return reactor.resolve(\"xxxjhjhj.biz\")\n\n and this is bad::\n \n @deferred()\n @raises(DNSLookupError)\n def test_error():\n return reactor.resolve(\"xxxjhjhj.biz\")"}
{"_id": "q_19147", "text": "Exclude NoSet objec\n\n .. code-block::\n\n >>> coerce(NoSet, 'value')\n 'value'"}
{"_id": "q_19148", "text": "Encodes the stored ``data`` to XML and returns\n an ``lxml.etree`` value."}
{"_id": "q_19149", "text": "Helper function for merge.\n\n Takes a dictionary whose values are lists and returns a dict with\n the elements of each list as keys and the original keys as values."}
{"_id": "q_19150", "text": "Merge two Structs with customizable conflict resolution.\n\n This is similar to :meth:`update`, but much more flexible. First, a\n dict is made from data+key=value pairs. When merging this dict with\n the Struct S, the optional dictionary 'conflict' is used to decide\n what to do.\n\n If conflict is not given, the default behavior is to preserve any keys\n with their current value (the opposite of the :meth:`update` method's\n behavior).\n\n Parameters\n ----------\n __loc_data : dict, Struct\n The data to merge into self\n __conflict_solve : dict\n The conflict policy dict. The keys are binary functions used to\n resolve the conflict and the values are lists of strings naming\n the keys the conflict resolution function applies to. Instead of\n a list of strings a space separated string can be used, like\n 'a b c'.\n kw : dict\n Additional key, value pairs to merge in\n\n Notes\n -----\n\n The `__conflict_solve` dict is a dictionary of binary functions which will be used to\n solve key conflicts. Here is an example::\n\n __conflict_solve = dict(\n func1=['a','b','c'],\n func2=['d','e']\n )\n\n In this case, the function :func:`func1` will be used to resolve\n keys 'a', 'b' and 'c' and the function :func:`func2` will be used for\n keys 'd' and 'e'. This could also be written as::\n\n __conflict_solve = dict(func1='a b c',func2='d e')\n\n These functions will be called for each key they apply to with the\n form::\n\n func1(self['a'], other['a'])\n\n The return value is used as the final merged value.\n\n As a convenience, merge() provides five (the most commonly needed)\n pre-defined policies: preserve, update, add, add_flip and add_s. The\n easiest explanation is their implementation::\n\n preserve = lambda old,new: old\n update = lambda old,new: new\n add = lambda old,new: old + new\n add_flip = lambda old,new: new + old # note change of order!\n add_s = lambda old,new: old + ' ' + new # only for str!\n\n You can use those four words (as strings) as keys instead\n of defining them as functions, and the merge method will substitute\n the appropriate functions for you.\n\n For more complicated conflict resolution policies, you still need to\n construct your own functions.\n\n Examples\n --------\n\n This show the default policy:\n\n >>> s = Struct(a=10,b=30)\n >>> s2 = Struct(a=20,c=40)\n >>> s.merge(s2)\n >>> sorted(s.items())\n [('a', 10), ('b', 30), ('c', 40)]\n\n Now, show how to specify a conflict dict:\n\n >>> s = Struct(a=10,b=30)\n >>> s2 = Struct(a=20,b=40)\n >>> conflict = {'update':'a','add':'b'}\n >>> s.merge(s2,conflict)\n >>> sorted(s.items())\n [('a', 20), ('b', 70)]"}
{"_id": "q_19151", "text": "convert object to primitive type so we can serialize it to data format like python.\n\n all primitive types: dict, list, int, float, bool, str, None"}
{"_id": "q_19152", "text": "Parse and send the colored source.\n\n If out and scheme are not specified, the defaults (given to\n constructor) are used.\n\n out should be a file-type object. Optionally, out can be given as the\n string 'str' and the parser will automatically return the output in a\n string."}
{"_id": "q_19153", "text": "Get a list of matplotlib figures by figure numbers.\n\n If no arguments are given, all available figures are returned. If the\n argument list contains references to invalid figures, a warning is printed\n but the function continues pasting further figures.\n\n Parameters\n ----------\n figs : tuple\n A tuple of ints giving the figure numbers of the figures to return."}
{"_id": "q_19154", "text": "Select figure format for inline backend, either 'png' or 'svg'.\n\n Using this method ensures only one figure format is active at a time."}
{"_id": "q_19155", "text": "Given a gui string return the gui and mpl backend.\n\n Parameters\n ----------\n gui : str\n Can be one of ('tk','gtk','wx','qt','qt4','inline').\n\n Returns\n -------\n A tuple of (gui, backend) where backend is one of ('TkAgg','GTKAgg',\n 'WXAgg','Qt4Agg','module://IPython.zmq.pylab.backend_inline')."}
{"_id": "q_19156", "text": "Configure an IPython shell object for matplotlib use.\n\n Parameters\n ----------\n shell : InteractiveShell instance\n\n backend : matplotlib backend\n\n user_ns : dict\n A namespace where all configured variables will be placed. If not given,\n the `user_ns` attribute of the shell object is used."}
{"_id": "q_19157", "text": "Activate pylab mode in the user's namespace.\n\n Loads and initializes numpy, matplotlib and friends for interactive use.\n\n Parameters\n ----------\n user_ns : dict\n Namespace where the imports will occur.\n\n gui : optional, string\n A valid gui name following the conventions of the %gui magic.\n\n import_all : optional, boolean\n If true, an 'import *' is done from numpy and pylab.\n\n Returns\n -------\n The actual gui used (if not given as input, it was obtained from matplotlib\n itself, and will be needed next to configure IPython's gui integration."}
{"_id": "q_19158", "text": "Start this Tracer.\n\n Return a Python function suitable for use with sys.settrace()."}
{"_id": "q_19159", "text": "Start a new Tracer object, and store it in self.tracers."}
{"_id": "q_19160", "text": "Called on new threads, installs the real tracer."}
{"_id": "q_19161", "text": "Stop collecting trace information."}
{"_id": "q_19162", "text": "Resume tracing after a `pause`."}
{"_id": "q_19163", "text": "Return the line data collected.\n\n Data is { filename: { lineno: None, ...}, ...}"}
{"_id": "q_19164", "text": "check a result dict for errors, and raise CompositeError if any exist.\n Passthrough otherwise."}
{"_id": "q_19165", "text": "Call this at Python startup to perhaps measure coverage.\n\n If the environment variable COVERAGE_PROCESS_START is defined, coverage\n measurement is started. The value of the variable is the config file\n to use.\n\n There are two ways to configure your Python installation to invoke this\n function when Python starts:\n\n #. Create or append to sitecustomize.py to add these lines::\n\n import coverage\n coverage.process_startup()\n\n #. Create a .pth file in your Python installation containing::\n\n import coverage; coverage.process_startup()"}
{"_id": "q_19166", "text": "Return the canonical directory of the module or file `morf`."}
{"_id": "q_19167", "text": "Return the source file for `filename`."}
{"_id": "q_19168", "text": "Update the source_match matcher with latest imported packages."}
{"_id": "q_19169", "text": "Start measuring code coverage.\n\n Coverage measurement actually occurs in functions called after `start`\n is invoked. Statements in the same scope as `start` won't be measured.\n\n Once you invoke `start`, you must also call `stop` eventually, or your\n process might not shut down cleanly."}
{"_id": "q_19170", "text": "Clean up on process shutdown."}
{"_id": "q_19171", "text": "Exclude source lines from execution consideration.\n\n A number of lists of regular expressions are maintained. Each list\n selects lines that are treated differently during reporting.\n\n `which` determines which list is modified. The \"exclude\" list selects\n lines that are not considered executable at all. The \"partial\" list\n indicates lines with branches that are not taken.\n\n `regex` is a regular expression. The regex is added to the specified\n list. If any of the regexes in the list is found in a line, the line\n is marked for special treatment during reporting."}
{"_id": "q_19172", "text": "Return a compiled regex for the given exclusion list."}
{"_id": "q_19173", "text": "Save the collected coverage data to the data file."}
{"_id": "q_19174", "text": "Combine together a number of similarly-named coverage data files.\n\n All coverage data files whose name starts with `data_file` (from the\n coverage() constructor) will be read, and combined together into the\n current measurements."}
{"_id": "q_19175", "text": "Like `analysis2` but doesn't return excluded line numbers."}
{"_id": "q_19176", "text": "Annotate a list of modules.\n\n Each module in `morfs` is annotated. The source is written to a new\n file, named with a \",cover\" suffix, with each line prefixed with a\n marker to indicate the coverage of the line. Covered lines have \">\",\n excluded lines have \"-\", and missing lines have \"!\".\n\n See `coverage.report()` for other arguments."}
{"_id": "q_19177", "text": "Display a Python object in all frontends.\n\n By default all representations will be computed and sent to the frontends.\n Frontends can decide which representation is used and how.\n\n Parameters\n ----------\n objs : tuple of objects\n The Python objects to display.\n include : list or tuple, optional\n A list of format type strings (MIME types) to include in the\n format data dict. If this is set *only* the format types included\n in this list will be computed.\n exclude : list or tuple, optional\n A list of format type string (MIME types) to exclue in the format\n data dict. If this is set all format types will be computed,\n except for those included in this argument."}
{"_id": "q_19178", "text": "Display the HTML representation of an object.\n\n Parameters\n ----------\n objs : tuple of objects\n The Python objects to display, or if raw=True raw HTML data to\n display.\n raw : bool\n Are the data objects raw data or Python objects that need to be\n formatted before display? [default: False]"}
{"_id": "q_19179", "text": "Display the SVG representation of an object.\n\n Parameters\n ----------\n objs : tuple of objects\n The Python objects to display, or if raw=True raw svg data to\n display.\n raw : bool\n Are the data objects raw data or Python objects that need to be\n formatted before display? [default: False]"}
{"_id": "q_19180", "text": "Display the PNG representation of an object.\n\n Parameters\n ----------\n objs : tuple of objects\n The Python objects to display, or if raw=True raw png data to\n display.\n raw : bool\n Are the data objects raw data or Python objects that need to be\n formatted before display? [default: False]"}
{"_id": "q_19181", "text": "Display the JPEG representation of an object.\n\n Parameters\n ----------\n objs : tuple of objects\n The Python objects to display, or if raw=True raw JPEG data to\n display.\n raw : bool\n Are the data objects raw data or Python objects that need to be\n formatted before display? [default: False]"}
{"_id": "q_19182", "text": "Display the LaTeX representation of an object.\n\n Parameters\n ----------\n objs : tuple of objects\n The Python objects to display, or if raw=True raw latex data to\n display.\n raw : bool\n Are the data objects raw data or Python objects that need to be\n formatted before display? [default: False]"}
{"_id": "q_19183", "text": "Display the JSON representation of an object.\n\n Note that not many frontends support displaying JSON.\n\n Parameters\n ----------\n objs : tuple of objects\n The Python objects to display, or if raw=True raw json data to\n display.\n raw : bool\n Are the data objects raw data or Python objects that need to be\n formatted before display? [default: False]"}
{"_id": "q_19184", "text": "Display the Javascript representation of an object.\n\n Parameters\n ----------\n objs : tuple of objects\n The Python objects to display, or if raw=True raw javascript data to\n display.\n raw : bool\n Are the data objects raw data or Python objects that need to be\n formatted before display? [default: False]"}
{"_id": "q_19185", "text": "Reload the raw data from file or URL."}
{"_id": "q_19186", "text": "Execute a command in a subshell.\n\n Parameters\n ----------\n cmd : str\n A command to be executed in the system shell.\n\n Returns\n -------\n int : child's exitstatus"}
{"_id": "q_19187", "text": "Forward read events from an FD over a socket.\n\n This method wraps a file in a socket pair, so it can\n be polled for read events by select (specifically zmq.eventloop.ioloop)"}
{"_id": "q_19188", "text": "Loop through lines in self.fd, and send them over self.sock."}
{"_id": "q_19189", "text": "add commands to parser"}
{"_id": "q_19190", "text": "get config for subparser and create commands"}
{"_id": "q_19191", "text": "custom command line action to show version"}
{"_id": "q_19192", "text": "Return a launcher for a given clsname and kind.\n\n Parameters\n ==========\n clsname : str\n The full name of the launcher class, either with or without the\n module path, or an abbreviation (MPI, SSH, SGE, PBS, LSF,\n WindowsHPC).\n kind : str\n Either 'EngineSet' or 'Controller'."}
{"_id": "q_19193", "text": "Start the app for the stop subcommand."}
{"_id": "q_19194", "text": "import and instantiate a Launcher based on importstring"}
{"_id": "q_19195", "text": "Start the app for the engines subcommand."}
{"_id": "q_19196", "text": "Start the app for the start subcommand."}
{"_id": "q_19197", "text": "Return the consumer and oauth tokens with three-legged OAuth process and\n save in a yaml file in the user's home directory."}
{"_id": "q_19198", "text": "Adds properties for all fields in this protocol message type."}
{"_id": "q_19199", "text": "Unpacks Any message and returns the unpacked message.\n\n This internal method is differnt from public Any Unpack method which takes\n the target message as argument. _InternalUnpackAny method does not have\n target message type and need to find the message type in descriptor pool.\n\n Args:\n msg: An Any message to be unpacked.\n\n Returns:\n The unpacked message."}
{"_id": "q_19200", "text": "Create a new wx app or return an exiting one."}
{"_id": "q_19201", "text": "Start the wx event loop in a consistent manner."}
{"_id": "q_19202", "text": "Create a new qt4 app or return an existing one."}
{"_id": "q_19203", "text": "Is the qt4 event loop running."}
{"_id": "q_19204", "text": "Start the qt4 event loop in a consistent manner."}
{"_id": "q_19205", "text": "Return a blank canvas to annotate.\n\n :param width: xdim (int)\n :param height: ydim (int)\n :returns: :class:`jicbioimage.illustrate.Canvas`"}
{"_id": "q_19206", "text": "Draw a cross on the canvas.\n\n :param position: (row, col) tuple\n :param color: RGB tuple\n :param radius: radius of the cross (int)"}
{"_id": "q_19207", "text": "Return a canvas from a grayscale image.\n\n :param im: single channel image\n :channels_on: channels to populate with input image\n :returns: :class:`jicbioimage.illustrate.Canvas`"}
{"_id": "q_19208", "text": "Returns a unique ID of a given length.\n User `version=2` for cross-systems uniqueness."}
{"_id": "q_19209", "text": "Returns a dictionary from a URL params"}