Product Feedback

  1. Autoscaling (AWS) local storage should support SSD

    Currently, autoscaling local storage only supports Throughput Optimized HDD. We use SSD for local storage because of the greatly improved job performance, but this leads to a lot of overprovisioning and unused SSD. It would be great to be able to use autoscaling local storage with SSD.
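
    For context, here is roughly what the choice looks like in a Clusters API create call today; this is a sketch, not official guidance, and the workspace URL, token, and sizes are placeholders:

    ```
    import requests

    # Placeholder workspace URL and personal access token.
    HOST = "https://<workspace>.cloud.databricks.com"
    AUTH = {"Authorization": "Bearer <personal-access-token>"}

    # Today these two options are effectively either/or: elastic (autoscaling)
    # local storage provisions Throughput Optimized HDD volumes, while SSD is
    # only available as a fixed, preprovisioned number of EBS volumes.
    cluster_spec = {
        "cluster_name": "ssd-autoscale-wish",
        "spark_version": "5.5.x-scala2.11",
        "node_type_id": "i3.xlarge",
        "num_workers": 4,
        # Option A: autoscaling local storage (HDD only today).
        "enable_elastic_disk": True,
        # Option B: fixed SSD volumes -- what this idea asks to combine with A.
        "aws_attributes": {
            "ebs_volume_type": "GENERAL_PURPOSE_SSD",
            "ebs_volume_count": 3,
            "ebs_volume_size": 100,  # GB per volume
        },
    }

    resp = requests.post(f"{HOST}/api/2.0/clusters/create",
                         headers=AUTH, json=cluster_spec)
    print(resp.json())
    ```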

    1 vote  ·  0 comments  ·  Cluster management
  2. UI should have a file/storage browser (e.g. for ADLS Gen2)

    The Databricks UI should provide a user-friendly method (i.e., not a command line) to browse files on an external file system such as ADLS, to open and edit text files, and to upload and download files. Currently the user is forced to use external software (Storage Explorer). The external file storage would be configured by an administrator (one per workspace), and ACLs should be taken into account when accessing external storage (i.e., credential passthrough).
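
    For reference, the notebook-only workaround this idea wants to replace looks roughly like the sketch below; the paths are placeholders, and it assumes the cluster is already configured for ADLS Gen2 access (e.g. via credential passthrough):

    ```
    # Runs in a Databricks notebook, where `dbutils` is predefined.
    # Placeholder container/account/path names.
    path = "abfss://mycontainer@myaccount.dfs.core.windows.net/data"

    # "Browsing": list files and sizes.
    for info in dbutils.fs.ls(path):
        print(info.path, info.size)

    # "Editing" a small text file means reading it and rewriting it whole.
    text = dbutils.fs.head(path + "/notes.txt")
    dbutils.fs.put(path + "/notes.txt", text + "\nedited from a notebook", True)
    ```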

    3 votes  ·  0 comments  ·  Navigation UI
  3. Pool tag editing should be allowed when all dependent clusters are down

    Currently I need to delete the pool, create a new one, and re-attach all clusters to the new pool.
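
    A sketch of that workaround against the Instance Pools and Clusters REST APIs, with placeholder IDs and tags; in practice clusters/edit may require stripping read-only fields from the fetched spec:

    ```
    import requests

    HOST = "https://<workspace>.cloud.databricks.com"
    AUTH = {"Authorization": "Bearer <personal-access-token>"}

    # 1. Fetch the old pool's definition, then delete the pool.
    old = requests.get(f"{HOST}/api/2.0/instance-pools/get", headers=AUTH,
                       params={"instance_pool_id": "<old-pool-id>"}).json()
    requests.post(f"{HOST}/api/2.0/instance-pools/delete", headers=AUTH,
                  json={"instance_pool_id": "<old-pool-id>"})

    # 2. Recreate it under the same name with the new tags.
    spec = {k: old[k] for k in
            ("instance_pool_name", "node_type_id", "min_idle_instances")}
    spec["custom_tags"] = {"cost-center": "1234"}  # the tags we wanted to edit
    new_id = requests.post(f"{HOST}/api/2.0/instance-pools/create", headers=AUTH,
                           json=spec).json()["instance_pool_id"]

    # 3. Re-attach each (terminated) cluster to the new pool.
    for cluster_id in ("<cluster-1>", "<cluster-2>"):
        cfg = requests.get(f"{HOST}/api/2.0/clusters/get", headers=AUTH,
                           params={"cluster_id": cluster_id}).json()
        cfg["instance_pool_id"] = new_id
        requests.post(f"{HOST}/api/2.0/clusters/edit", headers=AUTH, json=cfg)
    ```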

    1 vote  ·  0 comments
  4. Turn auto-termination on/off without cluster restart

    We want the cluster to be always on (auto-termination disabled) between 9 AM and 5 PM; outside those hours it should be on demand (auto-termination enabled). This could be realized either via an API call or as a schedule defined for the cluster. At the moment, changing the auto-termination setting requires a cluster restart (thus killing all jobs on the cluster).
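
    A sketch of the gap, assuming the Clusters REST API and a placeholder cluster ID; the schedule logic below is the hypothetical part, since today the update itself goes through clusters/edit and therefore restarts the cluster:

    ```
    import requests
    from datetime import datetime

    HOST = "https://<workspace>.cloud.databricks.com"
    AUTH = {"Authorization": "Bearer <personal-access-token>"}
    CLUSTER_ID = "<cluster-id>"

    def set_autotermination(minutes: int) -> None:
        # Today this goes through clusters/edit, which restarts the cluster
        # and kills running jobs -- exactly what this idea asks to avoid.
        cfg = requests.get(f"{HOST}/api/2.0/clusters/get", headers=AUTH,
                           params={"cluster_id": CLUSTER_ID}).json()
        cfg["autotermination_minutes"] = minutes
        requests.post(f"{HOST}/api/2.0/clusters/edit", headers=AUTH, json=cfg)

    # Desired behavior: always on during business hours, on demand after.
    if 9 <= datetime.now().hour < 17:
        set_autotermination(0)   # 0 disables auto-termination
    else:
        set_autotermination(30)  # terminate after 30 idle minutes
    ```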

    3 votes  ·  0 comments  ·  Cluster management
  5. Running a cell shouldn't cause the notebook to scroll down randomly

    When I run a cell (via Shift + Enter), the notebook often scrolls down well beyond the bottom of the cell of interest, making it difficult to find my cell and the results.

    5 votes  ·  2 comments
  6. Add a collaborator role to the available roles for Databricks

    There needs to be another role in addition to admin, so that access to notebooks, jobs, and folders can be better isolated from the admin role.
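
    The closest thing available today is per-object ACLs rather than a true role; a sketch assuming the (preview, premium-tier) Permissions API, with placeholder IDs:

    ```
    import requests

    HOST = "https://<workspace>.azuredatabricks.net"
    AUTH = {"Authorization": "Bearer <personal-access-token>"}

    # Grant a non-admin user edit access to a single notebook -- an ad-hoc
    # "collaborator" scoped per object instead of a proper workspace role.
    requests.patch(
        f"{HOST}/api/2.0/preview/permissions/notebooks/<notebook-id>",
        headers=AUTH,
        json={"access_control_list": [
            {"user_name": "collaborator@example.com",
             "permission_level": "CAN_EDIT"},
        ]},
    )
    ```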

    1 vote  ·  0 comments  ·  Account management
  7. Remove notebook from Recents

    When I have notebooks with similar names and content in different workspaces, it would be useful to be able to remove one from Recents once I am done with it, so I don't work in the wrong workspace.

    2 votes  ·  0 comments  ·  Notebooks
  8. UX fix

    Hi,

    If you work with multiple people inside the same notebook, the other person's cursor is reflected incorrectly (it jumps to my current edit if the person is idle).
    Please fix.
    Thanks!

    3 votes  ·  1 comment  ·  Navigation UI
  9. Historical Ganglia snapshots should include more metrics

    It's nice that clusters have historical snapshots of the Ganglia UI. Currently the snapshot shows the "load_one" metric for all nodes in the cluster. It would also be nice to add some report metrics to it, e.g. mem_report, cpu_report, network_report, and disk_report. This would make it easier to debug issues with jobs that terminate clusters upon completion or failure.
    Thanks!

    1 vote  ·  0 comments  ·  Other
  10. Support JSON request bodies larger than 10,000 bytes

    In an HTTP POST request, the JSON representation cannot exceed 10,000 bytes.
    If I have a large request body, how can I work around this limit?
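
    Until the limit changes, one workaround is to keep the request body small and stage the bulk of the data in DBFS with the streaming DBFS API, then pass only its path; a sketch with placeholder names:

    ```
    import base64
    import requests

    HOST = "https://<workspace>.cloud.databricks.com"
    AUTH = {"Authorization": "Bearer <personal-access-token>"}

    def dbfs_upload(path: str, data: bytes) -> None:
        # Stream arbitrarily large content to DBFS in 1 MB blocks,
        # sidestepping the per-request JSON size limit.
        handle = requests.post(f"{HOST}/api/2.0/dbfs/create", headers=AUTH,
                               json={"path": path, "overwrite": True}
                               ).json()["handle"]
        for i in range(0, len(data), 1 << 20):
            block = base64.b64encode(data[i:i + (1 << 20)]).decode()
            requests.post(f"{HOST}/api/2.0/dbfs/add-block", headers=AUTH,
                          json={"handle": handle, "data": block})
        requests.post(f"{HOST}/api/2.0/dbfs/close", headers=AUTH,
                      json={"handle": handle})

    # Stage the oversized payload, then reference "/tmp/big_params.json"
    # in the small JSON body of the actual API call.
    with open("big_params.json", "rb") as f:
        dbfs_upload("/tmp/big_params.json", f.read())
    ```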

    1 vote  ·  0 comments  ·  REST API
  11. Plot Options does not support ambiguous column names

    Given I am in Plot Options
    When a dataframe has two columns with the same name
    Then I can use an alias to differentiate the columns

    My situation is that I left-joined two tables. Nulls from non-joined rows take precedence when column names are the same, and I am unable to specify an alias for the tables in the Plot Options screen.
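
    Until Plot Options can disambiguate, the usual workaround is to alias the duplicated columns in the DataFrame before displaying it; a sketch with made-up table and column names:

    ```
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    left = spark.table("orders")      # placeholder tables that both
    right = spark.table("shipments")  # have a column named "status"

    # After a left join, both "status" columns survive under the same name,
    # and Plot Options cannot tell them apart.
    joined = left.join(right, left["order_id"] == right["order_id"], "left")

    # Workaround: give each column a unique alias *before* plotting.
    plottable = joined.select(
        left["order_id"],
        left["status"].alias("order_status"),
        right["status"].alias("shipment_status"),
    )
    display(plottable)  # both series are now selectable in Plot Options
    ```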

    1 vote  ·  0 comments  ·  Visualizations
  12. Not able to add external library glpk (Python)

    After adding it via an init script, the server is not able to start.
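
    For anyone debugging the same thing, a cluster init script for GLPK might look like the sketch below; the Ubuntu and PyPI package names (glpk-utils, libglpk-dev, swiglpk) are assumptions, and `set -e` makes a failing install show up in the cluster's init script logs instead of a silent bad start.

    ```
    # Run once from a notebook to write a cluster-scoped init script.
    dbutils.fs.put(
        "dbfs:/databricks/init-scripts/install-glpk.sh",
        """#!/bin/bash
    set -e                      # fail fast so errors appear in the init logs
    apt-get update
    apt-get install -y glpk-utils libglpk-dev
    /databricks/python/bin/pip install swiglpk
    """,
        True,
    )
    ```

    The script is then referenced from the cluster's init script configuration; if the cluster still fails to start, the script's output in the cluster logs usually names the failing step.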

    1 vote  ·  0 comments  ·  External libraries / applications
  13. Provide a searchable way to access workspaces instead of a non-scrollable menu

    I will have many Databricks workspaces to search through and manage for my clients, possibly 100+. The current list is not scrollable when you have many workspaces to juggle; I've already run out of space with 8 workspaces. I'd like to search by DBX resource name and the Azure subscription it's deployed in.

    3 votes  ·  0 comments  ·  Other
  14. Add an S3 PutObjectAcl function to dbfsutils

    Our clusters write to cross-account S3 buckets. I have already configured the BucketOwnerFullControl ACL in the Spark configuration, but this output data also needs to be accessible from additional account roles for auditing, etc.

    I would like dbfsutils to be improved (or small helper functions added) to support the S3 PutObjectAcl operation.
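
    Until such a helper exists, the call can be made directly with boto3 from a notebook; bucket and key are placeholders, and the cluster's instance profile must allow s3:PutObjectAcl:

    ```
    import boto3

    s3 = boto3.client("s3")  # uses the cluster's instance-profile credentials

    # Re-apply the canned ACL on an object the cluster wrote, so the
    # bucket-owning account (and roles in it) can fully access the output.
    s3.put_object_acl(
        Bucket="cross-account-output-bucket",  # placeholder
        Key="path/to/output/part-00000",       # placeholder
        ACL="bucket-owner-full-control",
    )
    ```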

    1 vote  ·  1 comment  ·  Data import / export
  15. New Age Roofers

    Address:
    6506 Elkhurst Dr #10
    Plano, Texas
    75023

    Primary phone:
    469-995-8967

    Website
    http://www.planocontractor.co/

    Primary category:
    Roofing Contractor

    Hours:
    24 hours

    Owner:
    Betty Jenkins

    Business Email:
    info@planocontractor.co

    Keywords:
    Roofing Contractor, Siding Contractor, Window Installation Service, Door Supplier, Gutter Installation, Commercial Services

    Description:
    The roofers at New Age Roofers Plano TX can make small repairs, fix major problems, and install brand-new products for one or many structures. We provide roof inspections to establish the safety of your roofing system. The team has over a decade of experience installing doors, windows, and roofing systems. We also supply doors…

    1 vote  ·  0 comments
  16. Support GPU clusters (g3) in North California region

    When we try to create a cluster with GPU instances, the only GPU instance types available are p2 and p3, which are not offered in our current installation region (Northern California).

    In the us-west-1 region, where our account runs, the only GPU instance types AWS supports are the g3 types.

    1 vote  ·  0 comments  ·  Cluster management
  17. Allow multiple regions in the same account

    Currently only one region is supported per account. It would be great if we could add more than one region and choose which region each of our clusters is created in.

    1 vote  ·  0 comments  ·  Account management
  18. Azure - Make NCv3 available in North Europe

    Currently, only NCv1 instances (K80 GPUs) are available for GPU-accelerated ML workflows; these are excruciatingly slow and small compared to the NCv3 series, which runs V100s.
    All our data is in North Europe, so we are stuck with K80s, putting us at a major competitive disadvantage.
    Furthermore, a lot of frameworks have cumbersome multi-GPU training workflows, so it is preferable to use one larger GPU over several smaller ones.

    3 votes  ·  0 comments  ·  Cluster management
  19. Allow recreating the key after it has been (mistakenly) deleted

    After running successfully for a few days on AWS, we mistakenly deleted the key pair from AWS.

    Now we are getting the following error when trying to start new clusters:

    ```
    Time
    2019-05-08 12:14:02 CEST
    Message
    Cluster terminated. Reason: Cloud Provider Launch Failure

    A cloud provider error was encountered while launching worker nodes. See the Databricks guide for more information.

    AWS API error code: InvalidKeyPair.NotFound

    AWS error message: The key pair 'dbe-worker-XYZ' does not exist

    ```

    How can we force Databricks to recreate the key?
    Deleting and re-adding the IAM role to Databricks didn't help; it still expects the same…
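
    One mitigation worth trying, on the assumption that launch only checks that a key pair with the expected name exists in the region: recreate a key pair under the old name with boto3. Whether the launched workers then work with the new key material is not guaranteed.

    ```
    import boto3

    # Placeholder region; the key-pair name comes from the error message.
    ec2 = boto3.client("ec2", region_name="eu-west-1")

    # InvalidKeyPair.NotFound is raised at launch when no key pair with
    # this *name* exists, so recreating one under the same name may let
    # worker nodes launch again. Store the returned private key safely.
    resp = ec2.create_key_pair(KeyName="dbe-worker-XYZ")
    print(resp["KeyMaterial"])
    ```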

    6 votes  ·  0 comments
  20. Add option to upgrade workspace to premium from standard

    There is currently no way to upgrade a workspace from the standard tier to the premium tier. Rather than having to export all notebooks, create a new workspace, port everything over, re-mount storage endpoints, re-add all users, and import the notebooks, it would be preferable to be able to click a button.
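
    At least the first leg of that manual migration (exporting every notebook) can be scripted against the Workspace API; a sketch with placeholder host and token:

    ```
    import base64
    import os
    import requests

    HOST = "https://<workspace>.azuredatabricks.net"
    AUTH = {"Authorization": "Bearer <personal-access-token>"}

    def export_dir(ws_path: str, local_root: str = "backup") -> None:
        # Recursively export notebooks in SOURCE format -- one step of the
        # export / recreate / re-import dance this idea wants to replace.
        listing = requests.get(f"{HOST}/api/2.0/workspace/list", headers=AUTH,
                               params={"path": ws_path}).json()
        for obj in listing.get("objects", []):
            if obj["object_type"] == "DIRECTORY":
                export_dir(obj["path"], local_root)
            elif obj["object_type"] == "NOTEBOOK":
                content = requests.get(
                    f"{HOST}/api/2.0/workspace/export", headers=AUTH,
                    params={"path": obj["path"], "format": "SOURCE"},
                ).json()["content"]
                target = os.path.join(local_root, obj["path"].lstrip("/"))
                os.makedirs(os.path.dirname(target), exist_ok=True)
                with open(target, "wb") as f:
                    f.write(base64.b64decode(content))

    export_dir("/")  # storage mounts, users, and jobs still migrate by hand
    ```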

    52 votes  ·  2 comments